
Edit: GiveWell's Response at the Bottom

Past events have shown how reputational damage to one EA entity can affect the entire movement's credibility, and therefore its funding and influence. While GiveWell's evaluation process is thorough, it relies largely on charity-provided data. I propose they consider implementing independent verification methods.

Reliance on coverage surveys

GiveWell performs no independent verification of their charities' claims when drawing their conclusions. However, they do apply a downward adjustment.

This feels lacking.

Getting numbers closer to reality not only improves the accuracy of cost-effectiveness calculations but also reduces the risk of adding a new entry to their mistakes list.

Suggestions to shore it up

GiveWell is an important cornerstone of the movement, so it is worth exploring whether more could be done to protect its reputation.


GiveWell's Response

GiveWell would also like to improve in this area. Some work has already been done, and this year our cross-cutting research subteam, which focuses on high-priority questions affecting all intervention areas, plans to improve our coverage survey estimates. Examples of work we've done to address our concerns with coverage survey estimates include:

  • Our research team is working on a project to identify, connect with, and potentially test different external evaluator organizations to make it easier for grantmakers to identify well-suited evaluation partners for the questions they’re trying to answer.
  • We recently approved a grant to Busara Center to conduct a qualitative survey of actors in Helen Keller Intl’s vitamin A supplementation delivery, including caregivers of children who receive vitamin A supplementation.
  • We made a grant to IDinsight to review and provide feedback on Against Malaria Foundation’s monitoring process for their 2025 campaign in Democratic Republic of the Congo.
  • For New Incentives, we mainly rely on the randomized controlled trial of their work to estimate coverage, which was run by an independent evaluator, IDinsight. Only recently have we begun to give weight to their coverage surveys.
  • We funded a Tufts University study to compare findings to Evidence Action’s internal monitoring and evaluation for their Dispensers for Safe Water program, which caused us to update our coverage data and consider funding an external coverage survey.
  • Our grant to CHAI to strengthen and support a community-based tuberculosis household contact management program includes IDinsight to evaluate the program through a large-scale cluster randomized control trial (cRCT) and process evaluation.

Comments

Independent verification seems good, but mainly for object-level epistemic reasons rather than reputational. 

Transparency is only a means to reputation. The world is built on trust and faith in its systems, and EA is no different.

I believe more people would be alarmed by the lack of independent vetting than by the nominal cost-effectiveness numbers themselves being inaccurate. It feels like there are perverse incentives at play.

Epistemologically speaking, it's just not a good idea to have opinions relying on the conclusions of a single organization, no matter how trustworthy it is. 

EA in general does not have very strong mechanisms for incentivising fact-checking, so the use of independent evaluators seems like a good idea. 

Just wanted to note that this take relies on "GiveWell performs no independent verification of their charity's claims to draw their conclusions" being true, and it struck me as surprising (and hence doubtful). Does anyone have a good citation for this / opinions on it? 

GiveWell's Carley Moor, from their philanthropic outreach team, contacted me, and we had a conversation a few weeks ago which prompted this post.

Among other things, I asked about independent verification there. The short answer seems to be no independent verification, with the caveat that they adjust. The spreadsheets I linked were sourced from her.

They do fund at least one meta charity that helps improve monitoring & evaluation at these charities.

I asked her to either post her response email here or let me post it verbatim, and I am waiting to hear from her next week. I'm being cautious lest I misrepresent them.

Thanks for the details! Keen to see their response if Carley OKs it. 

I hope so! Apparently the concept was received well with the team.

I love these suggestions and have wondered about this for some time. Independent surveyors are a really good idea, not only for impact data but also for programmatic data. Although finding truly independent surveyors is harder than you might think in relatively small NGO ecosystems.

I don't really understand what you mean by "Creating funding opportunities for third-party monitoring organisations". Can you explain?

I also would have liked to see a couple more paragraphs explaining the background and reasoning, although good on you for putting up the draft rather than leaving it just in the word doc :D.

I read it as "providing enough funding for independent auditors of charities to exist and be financially sustainable"

This is what I meant.

Appreciate the feedback! Although, can you elaborate on what you mean by impact data and programmatic data?

I agree I could have made a better case on the reputation part.

It is news to me that this isn't already the case. It seems like an obvious positive, both for the potentially higher ratings (not being adjusted downward) and as something instructive for the organisations themselves.
