I have a PhD in finance and am the strategist at Affinity Impact, the impact initiative of a Singapore-based family office that makes both grants and impact investments.
Hi, James! When it comes to assessing bednets vs therapy, or more generally saving a life vs improving people's happiness, the meat-eater problem looms large for me. It immediately complicates the trade-off, but I don't think dismissing it is justifiable on most moral theories, given our current understanding that farm animals are likely conscious, feel pain, and thus deserve moral consideration. Once we include this second-order consideration, it's hard to know the magnitude of the impact given effects on animal consumption, income, economic growth, wild animals, etc. You've done a lot of work evaluating mental health vs life-saving interventions (thanks for that!). How does including animals affect your thinking? Do you think it's better to just ignore it (as GiveWell does)?
I think this goes back to Joey's case for a more pluralistic perspective, but I take your point that in some cases, we may be doing too much of that. It's just hard to know how wide a range of arguments to include when assessing this balance...
Thanks, Vasco, for doing this analysis! Here are some of my learnings:
Thanks, Ben, for writing this up! I very much enjoyed following your intuition.
I was a bit confused in a few places with your reasoning (but to be fair, I didn't read your article super carefully).
Thanks, Ben! I enjoyed reading your write-up and appreciate your thought experiment.
What concerns me is that I suspect people rarely get deeply interested in the moral weight of animals unless they come in with an unusually high initial intuitive view.
This criticism seems unfair to me:
Thanks so much for such a thorough and great summary of all the various considerations! This will be my go-to source now for a topic that I've been thinking about and wrestling with for many years.
I wanted to add a consideration that I don't think you explicitly discussed. Most investment decisions made by philanthropists (including the optimal equity/bond split) are outsourced to someone else (a financial intermediary, advisor, or board). These advisors face career risk (i.e. being fired) when making such decisions. If an advisor recommends something that deviates too far from consensus practice, they have to worry about how they can justify the decision if things go sour. If you recommend 100% equities and the market tanks (like it did last year), it's hard to say 'But that's what the theory says' when the principal's reflexive response is that you are a bad advisor who doesn't understand risk. Many advisors have been fired this way, and no one wants to be in that position. This means tilting toward consensus is likely the rational recommendation for a financial advisor to make. There are real principal-agent issues at play, and this is something acutely felt by practitioners even if it's less discussed among academics.
I suspect the EA community is subject to this dynamic too. It's rarely the asset owners themselves who decide the equity mix. Asset allocation decisions are recommended by OpenPhil, Effective Giving, EA financial advisors, etc. to their principals, and it's dangerous to recommend anything that deviates too far from practice. This is especially so when EA's philanthropy advice is already so unconventional and is arguably the more important battle to fight. It can be impact-optimal over the long term to tilt toward asset allocation consensus when not doing so risks losing the chance to make future grant recommendations. The ability to survive as an advisor and continue to recommend over many periods can matter more than a slightly more optimal equity tilt in the short term.
Keynes comes to mind: “Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally.”
Thanks for posting this, Jonathan! I was going to share it on the EA Forum too but just haven't gotten around to it.
I think GIF's impact methodology is not comparable to GiveWell's. My (limited) understanding is that their Practical Impact approach is quite similar to USAID's Development Innovation Ventures' impact methodology. DIV's approach was co-authored by Michael Kremer, so it has solid academic credentials. But importantly, the method takes credit for the funded NGO's impact over the next 10 years, without sharing that impact with subsequent funders. The idea is that the innovation would fail without their support, so they can claim all future impact if the NGO survives (the total sum of counterfactual impact need not sum to 100%). This is not what GiveWell does. GiveWell takes credit for the long-term impact of the beneficiaries it helps, but not for the NGOs themselves. So this is comparing apples to oranges. It's true that GiveWell Top Charities are much more likely to survive without GiveWell's help, but this leads to my next point.
GiveWell also provides innovation grants through their All Grants Fund (formerly called Incubation Grants). They've been funding a range of interventions that aren't Top Charities and in many cases, are very early, with GiveWell support being critical to the NGO's survival. According to GiveWell's All Grants Fund page, "As of July 2022, we expect to direct about three-quarters of our grants to top charity programs and one-quarter to other programs, so there's a high likelihood that donations to the All Grants Fund will support a top charity grant." This suggests that in GiveWell's own calculus, innovation grants as a whole cannot be overwhelmingly better than Top Charities. Otherwise, Top Charities wouldn't account for the majority of the fund.
When thinking about counterfactual impact, the credit one gets for funding innovation should depend on the type of future donors the NGO ends up attracting. If these future donors would otherwise have given with low cost-effectiveness (or not at all), then you deserve substantial credit. But if they would have given to equally (or even more) cost-effective projects, then you deserve zero (or even negative) credit. So if GIF is funding NGOs that draw money from outside EA (whereas GiveWell isn't), it's plausible their innovations have more impact and are thus more 'cost-effective'. But we are talking about leverage now, so again, I don't think the methodologies are directly comparable.
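The credit logic above can be sketched with a toy calculation. All numbers and the function name here are hypothetical, purely for illustration:

```python
def innovation_funder_credit(ngo_impact, donor_alt_ce, ngo_ce):
    """Toy model of the early funder's counterfactual credit.

    ngo_impact:    the NGO's future impact enabled by the early grant
    donor_alt_ce:  cost-effectiveness of where future donors' money
                   would have gone otherwise
    ngo_ce:        the NGO's own cost-effectiveness

    Credit is full when displaced donations had zero cost-effectiveness,
    zero when they were equally cost-effective, and negative when the
    future donors would otherwise have given somewhere better.
    """
    return ngo_impact * (1 - donor_alt_ce / ngo_ce)

# Future donors drawn from outside EA (alternative CE ~0): near-full credit
print(innovation_funder_credit(100, 0.0, 1.0))   # 100.0
# Future donors would have funded equally cost-effective projects: no credit
print(innovation_funder_credit(100, 1.0, 1.0))   # 0.0
# Future donors would otherwise have given somewhere better: negative credit
print(innovation_funder_credit(100, 1.5, 1.0))   # -50.0
```

This is of course a simplification, but it captures why the source of an NGO's future funding matters so much for the early funder's claimed impact.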
Finally, I do think GIF should be more transparent about their impact calculations when making such a claim. It would very much benefit other donors and the broader ecosystem if they can make public their 3x calculation (just share the spreadsheet please!). Without such transparency, we should be skeptical and not take their claim too seriously. Extraordinary claims require extraordinary evidence.
Thanks for your response, Joel!
Stepping back, CEARCH's goal is to identify cause areas that have been missed by EA. But to be successful, you need to compare apples with apples. If you're benchmarking everything to GiveWell Top Charities, readers expect your methodology to be broadly consistent with GiveWell's conservative approach (and for other cause areas, consistent with best-practice EA approaches). The cause areas that stand out for CEARCH should do so because they are actually more cost-effective, not because you're using a laxer measuring method.
Coming back to the soda tax intervention, CEARCH's finding that it's 1000x as cost-effective as GiveWell Top Charities raised a red flag for me, so it seemed that you must somehow be measuring things differently. LEEP seems comparable, since they also work to pass laws that limit a bad thing (lead paint), but they're at most ~10x GiveWell Top Charities. So where's the additional 100x coming from? I was skeptical that soda taxes would have greater scale, tractability, or neglectedness, since LEEP already scores insanely high on each of these dimensions.
So I hope CEARCH can ensure cost-effectiveness comparability, and if you're picking up giant differences with existing EA interventions, you should be able to explain the main drivers of those differences (and it shouldn't be because you're using a different yardstick). Thanks!
Hi Joel, I skimmed your report really quickly (sorry), but I suspect that you did not account for soda taxes eventually being passed anyway. So the modeled impact of the intervention shouldn't extend to 2100 or beyond, but only a few years out (I'd think <10 years), to the point when soda taxes would be passed without any active intervention. You are trying to measure the impact of a counterfactual donated dollar in the presence of all the forces already pushing for soda taxes (indeed, some countries already have them). This makes for a more plausible model, and I believe it is how LEEP and OpenPhil model policy intervention cost-effectiveness (I could be wrong though).
Thanks so much for this very helpful post!
I'm a bit confused about your framing of the takeaway. You state that "reducing meat consumption is an unsolved problem" and that "we conclude that no theoretical approach, delivery mechanism, or persuasive message should be considered a well-validated means of reducing meat and animal product consumption." However, the overall pooled effects across the 41 studies show statistical significance with a p-value of <1%. Yes, the effect size is small (0.07 SMD), but shouldn't we conclude from the significance that these interventions do work?
A small effect, or even a statistically insignificant one, isn't necessarily a dealbreaker for EAs (e.g. most longtermist interventions don't have much of an evidence base). What matters is whether we can expect a positive effect that's sufficiently cheap to achieve. In your reply to Ariel's comment, you point to a study that concludes its interventions are highly cost-effective at ~$14/ton of CO2eq averted. That's incredible given that many offsets cost ~$100/ton or more. So it doesn't matter that the effect is 'small', only that it's cost-effective.
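The point that a 'small' effect can still be cost-effective is easy to see with a back-of-envelope calculation. Every number below is an assumption I'm making up for illustration, not a figure from the review or the cited study:

```python
# Toy illustration: a small per-person effect can still beat carbon offsets
# on cost, if the intervention is cheap enough. All inputs are assumed.

cost_per_person = 1.0        # assumed cost of delivering the intervention ($)
co2_per_person_year = 2.0    # assumed tons CO2eq from diet per person-year
effect_fraction = 0.035      # assumed small relative reduction in consumption
persistence_years = 2        # assumed duration of the effect

tons_averted = co2_per_person_year * effect_fraction * persistence_years
cost_per_ton = cost_per_person / tons_averted

print(round(cost_per_ton, 2))  # 7.14
```

Even with a tiny per-person effect, the implied ~$7/ton in this sketch would comfortably beat ~$100/ton offsets; the decisive inputs are delivery cost and persistence, not effect size alone.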
Can you help EA donors take the necessary next step? It won't be straightforward and will require additional cost and impact assumptions, but it'll be super useful if you can estimate the expected cost-effectiveness of different diet-change interventions (in terms of suffering alleviated).
Finally, in addition to separating out red meat from all animal product interventions, I suspect it'll be just as useful to separate out vegetarian from vegan interventions. It should be much more difficult to achieve persistent effects when you're asking for a lot more sacrifice. Perhaps we can get additional insights by making this distinction?