
Aidan Whitfield🔸

Researcher @ Giving What We Can

Comments (8)

Thanks for the comment! I first want to highlight that in our report we are specifically talking about institutional diet change interventions that reduce animal product consumption by replacing institutional meals (e.g., school meals) that contain animal products with meals that don't. This approach, which constitutes the majority of diet change programs that ACE MG funds, doesn't necessarily involve convincing individuals to make conscious changes to their consumption habits.

Our understanding of a common view among the experts we consulted is that diet change interventions are generally not competitive with promising welfare asks in terms of cost-effectiveness, but that some of the most promising institutional diet change interventions plausibly could be. For example, I think some of our experts would have considered the grant ACE MG made to the Plant-Based Universities campaign worth funding. Reasons for this include:

  • The organisers have a good track record
  • The ask is for a full transition to plant-based catering, which reduces small animal replacement concerns
  • The model involves training students to campaign, meaning the campaign can reach more universities than the organisation could by campaigning at each university itself

As noted in the report, not all experts agreed that the institutional diet change interventions were on average competitive with the welfare interventions ACE MG funded. However, as you noted, this probably has a fairly limited impact on how cost-effective ACE MG is on the margin, not least because these grants made up a small fraction of ACE MG’s 2024 funding.

Hi Vasco, thanks for the comment! I should clarify that we are saying we expect the marginal cost-effectiveness of impact-focused evaluators to change more slowly than that of charities. All else equal, we think size is plausibly a useful heuristic. However, because we are looking at the margin, both the program itself and its funding situation can change. THL hasn't been evaluated on how it allocates funding on the margin or starts new programs, only on the quality of its marginal programs at the time of evaluation, so there is a less robust signal there than for EA AWF, which we did evaluate on the basis of how it allocates funding on the margin. I hope that makes sense!

Thanks for the comment — we appreciate the suggestions! 

With respect to your first suggestion, I want to clarify that our goal with this project is to identify evaluators that recommend among the most cost-effective opportunities in each cause area according to a sufficiently plausible worldview. This means that, among our recommendations, we don't take a view on which is more cost-effective, and we don't try to rank the evaluators that we don't choose to rely on. That said, I can think of two resources that might somewhat address your suggestion:

  1. This section on our 2024 evaluating evaluators page explains which programs have changed status following our 2024 evaluations and why.
  2. In the other supported programs section of our donation platform, we roughly order the programs based on our preliminary impression of which might be most interesting to impact-focused donors in each cause area. To do this, we take into account factors such as whether we've previously recommended them and whether they are currently recommended by an impact-focused evaluator.

With respect to your second suggestion, while we don’t include a checklist as such, we try to include the major areas for improvement in the conclusion section of each report. In future we might consider organising these more clearly and making them more prominent.

Thanks for the comment! We didn't look deeply into the SADs framework as part of our evaluation, as we didn't think doing so was likely to change our final decision. It is possible we will look into this more in future evaluations. I currently expect use of this framework to be substantially preferable to the status quo, in which there is no set of conceptually meaningful units for comparing animal welfare interventions.

On ACE’s additional funds discounts, our understanding is that the 0% discount for low uncertainty is not a typo. ACE lists their uncertainty categories and the corresponding discounts under Criterion 2 on their evaluation criteria page.

Thanks so much for your comment, we appreciate the positive feedback! We plan to open applications for our supported programs status in Q1 2025 and intend to invite ACE-recommended charities to apply. We will make decisions about which charities to onboard based on our inclusion criteria, which we'll update early next year but don't expect to change dramatically.

Thanks for the comment! While we did find issues that we think imply Founders Pledge's BOTECs don't convincingly show that the FP GHDF's grants surpass 10x GiveDirectly in expectation in terms of marginal cost-effectiveness, we don't think this justifies confidently concluding that these grants fail to pass that bar. As mentioned in the report, this is partly because:

  • FP may have been sufficiently conservative in other inputs to compensate for the problems we identified
  • There are additional direct benefits (for example, morbidity benefits) to the grants that FP acknowledged but decided not to model 
  • There may be additional large positive externalities from funding these early-stage and more neglected opportunities
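To illustrate the first point with purely hypothetical numbers (none of these figures come from FP's actual BOTECs), here is a minimal sketch of how a conservative assumption elsewhere in a BOTEC can offset a correction to an overly optimistic input:

```python
# Hypothetical BOTEC sketch: all figures are illustrative, not FP's actual inputs.
# Cost-effectiveness is expressed as a multiple of GiveDirectly.

baseline_multiple = 12.0  # illustrative headline estimate from a BOTEC

# Suppose one input was too optimistic and correcting it cuts the estimate by 30%.
corrected = baseline_multiple * (1 - 0.30)  # 8.4x -> now below the 10x bar

# But suppose another input was deliberately conservative (e.g., unmodelled
# morbidity benefits) and is worth a further 25% uplift.
adjusted = corrected * (1 + 0.25)  # 10.5x -> back above the bar

print(f"corrected: {corrected:.1f}x, adjusted: {adjusted:.1f}x")
```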

Rosie's comment also covers some other considerations that bear on this and provides useful additional context.

Thanks for the positive feedback! We are actively considering the future of the GWWC research team, including whether to invest additional resources in future evaluating evaluators projects. To my knowledge, we have not considered fundraising specifically for the evaluating evaluators project. This might be an option we consider at some point, but I think we are unlikely to do so in the near future.

Thank you for your comment! We’ve really appreciated the open and thoughtful way ACE engaged with us throughout these evaluations.

We are excited to be adding Movement Grants to our list of recommended programs this year, and we think the improvements we observed since our last evaluation are a testament to ACE’s constructive approach to engaging with feedback. We are also excited to continue highlighting opportunities like the Recommended Charity Fund and several of ACE’s Recommendations as supported programs on our platform.