

Charity Navigator After Year One

What Have We Learned?

by Trent Stamp
Executive Director

April 15, 2003


We launched our non-profit organization, Charity Navigator, one year ago, on April 15, 2002. At the time, our intentions were very simple. We wanted to introduce unbiased data analysis and evaluation to the charitable sector. We recognized that billions of dollars were flowing through charities, and that, in our opinion, most of it was flowing without the kind of data-driven, consumer-focused ratings that are pervasive in every other sector of the American economy. So, in the interest of consumer advocacy, and with a secondary goal of promoting transparency and accountability in a sector that desperately needed both to restore consumer confidence amid considerable cynicism and skepticism, we unveiled Charity Navigator for America's charitable givers.

In the year since we first provided our free service, we have come to be seen as experts of a sort, commenting on the charitable sector in places as disparate as The Factor with Bill O'Reilly, The New York Times, The Advocate, and Bird Watcher's Monthly. So, after one year of running what is now America's largest evaluator of charities, I thought I'd take just a few minutes to share with you a few things we've learned in the last 12 months.

I. Our Service Was Long Overdue: In December of last year, the month in which roughly 50% of the year's charitable giving takes place, over 60,000 different people visited our web site to research charities. This unprecedented traffic developed without our running any advertising. We simply created a service that we thought might be helpful to people who wanted to give to charity but were unsure how to do it, or whom to give to. And the response was overwhelming.

No one I know buys a car today without checking with Consumer Reports or a similar service. Very few people buy a stock without consulting some sort of independent expert. Most people don't even see a movie, or buy a book, or go to a restaurant without some sort of unbiased reviewer making a recommendation. And yet, for the last 50 years, as charities have taken over our most vital services in this country, we have generally accepted the word of the charity as gospel, and written our checks without any sort of research or validation.

Clearly, the days when the charitable sector was impermeable to outside independent analysis are gone. Donors have demonstrated through their patronage of our site that they seek and welcome this type of comparative evaluation.

II. Charity Efficiency Is Relevant: Researchers at IUPUI's School of Philanthropy have found that individual giving to charity is relatively inelastic; that is, there is only so much money each year that flows into charities. Charitable giving from individuals does not fluctuate wildly with the stock market or the ideologies of the times. The types of giving and the types of recipients may vary in any particular year, but the giving levels are relatively constant. If those dollars go to organizations that spend an inordinate amount on administration, fundraising, debt management, or anything else not program-related, that is money that will never be recovered.

The donor gives $100 to a charity. He won't give any more this year. If he gives it to an organization that spends only 50% of it on the programs he intended to support, rather than to an organization that would have spent 80% of it on its programs, then $30 of the gift that he thought went to "charity" (and for which he received the standard IRS charitable deduction) never actually reaches the intended recipients. With literally thousands of charities that often serve remarkably similar populations, we have an obligation to seek out those organizations that are not only effective, but efficient. At a seminar at Harvard University last fall, a leading scholar in the non-profit field insisted to me, and to everyone else at this public forum, that Charity Navigator was "not relevant because donors don't care about efficiencies." To say this is to say that donors are stupid. We believe the opposite to be true.
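To make the arithmetic above concrete, here is a minimal sketch in Python. The gift amount and the 50% and 80% program-spending rates are the hypothetical figures from the example, not data from any rated charity:

    # Hypothetical figures from the example above, not real charity data.
    gift = 100.00              # the donor's one gift for the year
    program_rate_low = 0.50    # charity that spends 50% on programs
    program_rate_high = 0.80   # peer charity that spends 80% on programs

    shortfall = gift * (program_rate_high - program_rate_low)
    print(f"Dollars that never reach the intended recipients: ${shortfall:.2f}")
    # Prints: Dollars that never reach the intended recipients: $30.00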

III. The 990 Is Not The Problem: We use the IRS informational return (Form 990) as the basis of our evaluations. We've laid out our reasons for this many times, but basically it is the only document that all charities are required to fill out, and the instructions for filling it out are straightforward. That ensures our data is uniform and comparable, and it allows for standardized measurements of growth.

Some have argued that the audited annual report is a much better document for analysis. After looking at over 10,000 Forms 990 (which no one else has done), as well as thousands of annual reports, we find this simply to be false. Some 990s are indeed flawed. An equal number of annual reports are also inaccurate. But because annual reports allow for individual interpretations and reporting standards, it is impossible to identify their inaccuracies with any level of confidence. The 990s, because of their uniformity, allow our experts to identify inconsistencies and errors quickly.

More importantly, if you evaluate only those organizations that compile audited annual reports, you are imposing your belief that every organization should compile one. While you and I may believe this to be true, the IRS has not made this claim, nor have the state Attorneys General or the consumer advocacy departments. These government agencies have recognized that without reporting standards and institutionalized consistency, such a decree gets us no closer to objective accountability. All it really does is impose an undue burden on small organizations to create a document that their donors have not demanded. If donors to an organization have not demanded an annual report, Charity Navigator does not feel that it is our place to do so. Doing so, despite overwhelming evidence that the new document would not necessarily be more reliable, and might actually be less so, would be hubris solely for the sake of hubris.

IV. Honesty Will Get You Attacked: When we decided to create our non-profit service, we recognized that our data needed to be completely transparent and objective. We also recognized that for our service to be relevant, we would need to design a system that allowed us to compare apples to apples, not apples to oranges, so we built a ratings matrix that compares food banks to food banks, universities to universities, museums to museums. And we knew that to be helpful, we had to be large; we were of little use if we only evaluated a handful of organizations. So we created a ratings service that was 100% data-driven and objective, that allowed us to compare like-minded (and similarly performing) organizations, and that could include as many organizations as possible, allowing for our own limitations as a non-profit. We announced that our ratings were based entirely on an organization's finances, that we were interested only in an organization's efficiency and capacity, and that we would leave the efficacy of its programs to someone else.

We were subsequently attacked and criticized by our competitors, peers, and some charities, who all claimed that by ignoring an organization's programs, we were only presenting a piece of the picture. And they were right. We were only presenting one piece of the picture, something we acknowledged from Day One. But we felt that it was a pretty big piece of the picture, a piece that was not being presented by the charities themselves in any sort of meaningful and comparative way.

And then, after bearing this criticism for a while, we realized that no one else was evaluating organizations' programs either, including those who were criticizing us. None of the other charity evaluators do it. They may claim they do, but if you take a look at their methodology (assuming they allow it), you'll see this isn't true. The foundations are rarely doing it either; they simply rely on the charities they fund to tell them how successful they have been. And most insidiously and irresponsibly, we found that the charities criticizing us for not comparing their efficacy to their peers' in some meaningful way weren't even doing this sort of program evaluation on themselves. They just assumed the best of themselves, and expected donors to do the same, without any program evaluation to substantiate their internal beliefs and claims.

Our only crime in this area was that we were naively honest. We were trying to evaluate an organization's financial health, as compared to its peers, and to allow prospective donors to ask an organization why, for instance, it needs so much more than similar institutions to pay its executives' salaries. We weren't trying to analyze an institution's program effectiveness, because we knew that it couldn't be done on a national level without incurring massive costs. How do you, as an outsider, measure the effectiveness of an advocacy organization like the Anti-Defamation League without subjective and complete collaboration from them? And if you figure out how to do it, how do you then compare them to the direct-service Cleveland Food Bank, or the research-oriented Dana-Farber Cancer Institute? You can't, not yet anyway. And we made the mistake of acknowledging this, opening the door for hypocrites to skewer us for it, while they take no steps of their own to measure others' effectiveness. We were honest about our own limitations. But our limitations are not solely our own, and they are far less than fatal.

What we do is compare organizations' finances with those of their peers, who are competing with them for the dollars of donors who believe their cause has merit. We evaluate, at a relatively sophisticated level, whether they're operating in a responsible and efficient manner, check to see if they're growing appropriately, and measure whether they'll be around in the years to come to continue to fight their good fight. That's what we do. We could claim, as others do, that we do more than that, but that would be intellectually dishonest. We think our constituents want, and deserve, honest, unbiased information from a source they can trust. And that we will deliver.
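As an illustration only, and not a description of our actual ratings formula, the kind of peer-to-peer financial comparison described above might be sketched like this. The field names, peer group, and figures are invented for the example:

    # A rough sketch of a peer-group efficiency comparison.
    # Organization names and figures are invented for illustration;
    # this is not Charity Navigator's actual methodology or data.
    from statistics import median

    def program_expense_ratio(total_expenses, program_expenses):
        """Share of total spending that goes to programs (from Form 990 figures)."""
        return program_expenses / total_expenses

    # Hypothetical food banks: (total expenses, program expenses), in dollars.
    peer_group = {
        "Food Bank A": (4_000_000, 3_400_000),
        "Food Bank B": (2_500_000, 1_900_000),
        "Food Bank C": (6_000_000, 5_100_000),
    }

    ratios = {name: program_expense_ratio(t, p) for name, (t, p) in peer_group.items()}
    benchmark = median(ratios.values())

    for name, ratio in sorted(ratios.items(), key=lambda kv: kv[1], reverse=True):
        flag = "at or above" if ratio >= benchmark else "below"
        print(f"{name}: {ratio:.0%} on programs ({flag} the peer median of {benchmark:.0%})")

The point of comparing each organization only to a peer group serving a similar mission is the apples-to-apples principle described in the preceding sections: a food bank's spending profile is judged against other food banks, not against museums or research institutes.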

V. A Lot of Work Still Remains: Charity Navigator currently evaluates nearly 2,500 organizations, at least five times as many as anyone else who has ever tried to do this. And yet, we receive countless requests every day from donors seeking data on organizations we don't currently evaluate, and from well-intentioned charities that would like to be included in an effort to demonstrate their transparency and accountability to prospective donors. We are, quite simply, a work in progress. We recognize that it is not enough merely to be the largest, or even the most helpful, source available. We are only relevant to our users if, when any one of them comes to our site to look up a charity they're interested in, that charity is listed and evaluated. Accordingly, we know that to fulfill our commitment to America's givers, to be the source they can turn to for charitable giving advice, we need to get larger and add even more charities.

There's a lot of work to do. I should get going.

I thank you for your time, for your use of Charity Navigator, and for your commitment to informed charitable giving. It's been a great year. In the year to come, it is my hope that we can work together to create a marketplace where great charities are rewarded, and poor charities are forced to improve or surrender their claims to charitable dollars that could be spent more wisely elsewhere. When we do this, we ensure that the true winners are the ones who deserve it: those intended to receive your charitable dollars.

   