VettedCauses
Hi,

Thank you for taking the time to read our review and for responding to each of our points. We really appreciate ACE’s willingness to engage with feedback and acknowledge problems.

Regarding your clarifications related to the calculation of Normalized Achievement Scores:

‘Charities can have 1,000,000 times the impact at the exact same price, and their Normalized Achievement Scores and Cost-Effectiveness Score can remain the same.’

  • This isn’t the case, but we didn’t publish the full details about our method for assessing the impact of books, podcasts, and other interventions, so we see why this wasn’t clear. Essentially for each intervention in our Menu of Interventions we identified proxies for its likely impact. For books, we had intended to include sales/views as well as a rating of the overall audience response/reviews. In practice, this wasn’t possible for various reasons given the wide variation in types of publication (e.g., some publications had not been released yet, or had been provided directly to the audience with no feedback collected), so we had to factor in such considerations on a more case-by-case basis in our Recommendations Decisions discussions.

We are glad to hear that ACE was accounting for these factors behind the scenes.

‘Charities can increase their Normalized Achievement Scores and Cost-Effectiveness Score by breaking down actions into smaller steps, even if the overall results remain unchanged.’

  • This actually isn’t the case (sorry if this wasn’t clear). Breaking down an achievement into smaller steps would drive up the ‘Achievement quantity’ score, but would be offset by lower ‘Achievement quality’ scores for each achievement. However, there was still a risk of this introducing inconsistency into the model, which is another reason why we updated our methods this year.

Thank you for clarifying this. From the publicly available rubrics for calculating Achievement Quality Scores, it did not seem like breaking down an achievement into smaller steps would decrease the Achievement Quality Score at all. However, given that ACE was accounting for factors outside of the publicly available rubrics, it makes sense that this decrease could occur.

That being said, we believe it is important for ACE to fully disclose its methodology to the public and avoid relying on hidden evaluation criteria. This transparency would allow people outside the organization to understand how ACE's charity evaluation metrics (e.g., Normalized Achievement Scores) were calculated.

We might also reach out to you via email in the coming weeks as we go through retrospectives and plan for next year’s evaluation. Because of the complexity of the animal welfare cause area, the many uncertainties and knowledge gaps in the field of charity evaluation, and the urgency and scope of suffering, we embrace productive collaboration.  

We appreciate your openness to collaboration. Feel free to reach out to us at any time at hello@vettedcauses.com.

Wait, those are related to each other though - if we haven't seen the full impact of their previous actions, we haven't yet seen their historical cost-effectiveness in full!

No, they are not. Historical cost-effectiveness refers to past actions and outcomes—what has already occurred.

All of LIC's legal actions have already been either dismissed or rejected. What are you suggesting we need to wait for before we can analyze LIC's historical cost-effectiveness in full? 

You are conflating the issue of past cost-effectiveness with future potential.

Also, you cite these as reasons the project should be dismissed in your post - you have a section literally called "Legal Impact for Chickens Did Not Achieve Any Favorable Legal Outcomes, Yet ACE Rated Them a Top Charity" which reads to me that you believe that it is bad they were rated a Top Charity, and make these same arguments (and no others) in the section, suggesting that you think this evidence means they should be dismissed.

Did I claim that LIC "should be dismissed"?

Responding because this is inaccurate

I don't know what you're saying is inaccurate. My reply addressed every single word from the section you claimed I didn't provide evidence for. 

My claim in the comment above was that you haven't provided any evidence that:

  • 5 / 11 (or more) ACE top charities are not effective

We never made this claim. 

  • That animals are suffering as a result of ACE recommendations

I’ll ask again. Our review details how ACE is rewarding charities for inefficiency (and punishing them for efficiency), and how LIC was rewarded for their inefficiency with the designation "Top 11 Animal Charities to Donate to in 2024." Our review also details how ACE's recommendations direct the flow of millions of dollars. Are you asking for evidence that directing millions of dollars toward ineffective animal charities, rather than effective ones, leads to animal suffering?

Appreciate you going through everything with me!

No problem. Thank you for your replies! 

I don't find that evidence particularly compelling on its own, no. Lots of projects cost more than 1M or take more than a few years to have success. I don't see why those things would be cause to dismiss a project out of hand.

The question I asked was: "You don't find these facts particularly compelling evidence that LIC is not historically cost-effective?"

The question was not about whether these facts are compelling evidence that LIC won't be successful in the future, or if the project should be dismissed. 

Note: this reply addresses everything Abraham claims we did not provide evidence for. 

“ACE's poor evaluation process leads to ineffective charities receiving recommendations”

Our review covered how under ACE’s evaluation process:

  1. Charities can receive a worse Cost-Effectiveness Score by spending less money to achieve the exact same results.
  2. Charities can have 1,000,000 times the impact at the exact same price, and their Cost-Effectiveness Score can remain the same.
  3. The most important factor in determining the impact of an intervention is decided before the intervention even begins. 

This is clear evidence that ACE uses a poor evaluation process. Is the fact that ACE’s evaluation process rewards inefficiency, and punishes efficiency, “no evidence” for ACE recommending ineffective charities?

If you’d like me to get even more specific, let’s look at Problem 1 of our review:

We go on to detail how if LIC had spent less than $2,000 on the lawsuit (saving over $200,000) and achieved the exact same outcome, ACE would have assigned LIC a Cost-Effectiveness Score of 1.8. The lowest Cost-Effectiveness Score ACE assigned to any charity in 2023 was 3.3. This means if LIC had spent less than $2,000 on the lawsuit, LIC's Cost-Effectiveness Score would have been significantly worse than any charity ACE evaluated in 2023. 

Instead, LIC spent over $200,000 on the lawsuit, and ACE rewarded them for this inefficiency by giving them a Cost-Effectiveness Score of 3.7 and deeming LIC a top 11 animal charity.

As we noted in our review, these Cost-Effectiveness Scores are defined by ACE as “how cost effective we think the charity has been”. LIC achieved no favorable legal outcomes despite receiving over a million dollars in funding. As we also noted in our review, every lawsuit LIC filed was dismissed for failing to state a valid legal claim.

If I provided evidence that a Law Firm Rating Organization rewards law firms for losing lawsuits and wasting money, and punishes law firms for winning lawsuits and saving money, would this be no evidence that the Law Firm Rating Organization is recommending ineffective law firms?

ACE's poor evaluation process leads to ineffective charities receiving recommendations, and many animals are suffering as a result.

Our review details how ACE’s recommendations direct the flow of millions of dollars. Are you asking for evidence that directing millions of dollars toward ineffective animal charities, rather than effective ones, leads to animal suffering?

we have reviewed 5 of ACE's "Top 11 Animal Charities to Donate to in 2024" and only one of them (Shrimp Welfare Project) appears to be an effective charity for helping animals.

Imagine a film critic watches 5 of the 11 films that received a 'Best Films' award and writes, “Of the five films I’ve seen, only one appears to deserve the award. I plan to release my reviews of the films shortly.” Does this statement by the film critic require evidence? 

But I don’t find this level of evidence particularly compelling on its own.

You don't find these facts particularly compelling evidence that LIC is not historically cost-effective?

  1. LIC’s most cost-effective intervention was one in which they spent over $200,000, and the lawsuit was dismissed for failing to state a valid legal claim.
  2. LIC received over a million dollars in funding prior to being reviewed.
  3. LIC existed for multiple years prior to being reviewed.
  4. LIC failed to secure any favorable legal outcomes or file any lawsuit that stated a valid legal claim.

What would be compelling evidence for LIC not being historically cost-effective? 

I think I feel confused about the example you’re giving because it isn’t about hypothetical cost-effectiveness, it’s about historic cost-effectiveness, where what matters are the counterfactuals.

ACE does 2 separate analyses for past cost-effectiveness, and room for future funding. For example, those two sections in ACE's review of LIC are:

  • Cost Effectiveness: How much has Legal Impact for Chickens achieved through their programs?
  • Room For More Funding: How much additional money can Legal Impact for Chickens effectively use in the next two years?

Our review focuses on ACE's Cost-Effectiveness analysis, not on their Room For More Funding analysis. In the future, we may evaluate ACE's Room For More Funding Analysis, but that is not what our review focused on. 

However, I would like to pose a question to you: Given that ACE often gives charities a worse historic cost-effectiveness rating for spending less money to achieve the exact same outcomes (see Problem 1), how confident do you feel in ACE's ability to analyze future cost-effectiveness (which is inherently more difficult to analyze)?

Thank you for your response!

From this post, it seems like you’re trying to calculate historic cost-effectiveness and rate charities exclusively on that (since you haven’t published an evaluation of an animal charity yet I could be wrong here though)

This is not what we are trying to do. We simply critiqued the way that ACE calculated historic cost-effectiveness, and how ACE gave Legal Impact for Chickens a relatively high historic cost-effectiveness rating despite having no historic success.

My understanding of what ACE is trying to do with its evaluations as a whole is identify where marginal dollars might be most useful for animal advocacy, and move money from less effective opportunities to those. 

ACE does 2 separate analyses for past cost-effectiveness, and room for future funding. For example, those two sections in ACE's review of LIC are:

  • Cost Effectiveness: How much has Legal Impact for Chickens achieved through their programs?
  • Room For More Funding: How much additional money can Legal Impact for Chickens effectively use in the next two years?

Our review focuses on ACE's Cost-Effectiveness analysis, not on their Room For More Funding analysis. In the future, we may evaluate ACE's Room For More Funding Analysis, but that is not what our review focused on. We wanted to keep our review short enough that people could read it without a huge time investment, so we could not include an assessment of every single part of ACE's evaluation process in our review. 

It is also less reasonable to hold ACE accountable for their Room For More Funding analysis, since this is inherently more subjective and difficult to do. It is far easier for ACE (or any charity evaluator) to analyze historic cost-effectiveness than to analyze future cost-effectiveness. However, I would like to pose a question to you: Given that ACE often gives charities a worse historic cost-effectiveness rating for spending less money to achieve the exact same outcomes (see Problem 1), how confident do you feel in ACE's ability to analyze future cost-effectiveness?

My understanding is ACE has tried to do something that’s just cost-effectiveness analysis in the past (they used to give probability distributions for how many animals were helped, for example).

ACE responded to this thread acknowledging that the problems listed in our review needed to be addressed, and that they changed their methodology (to a cost-effectiveness calculation of simply impact divided by cost) to do so: 
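A cost-effectiveness calculation of simply impact divided by cost, as described above, is straightforward to sketch. The function name and the numbers below are ours and purely illustrative (they are not ACE's actual figures or implementation); the point is that under impact/cost, achieving the same result for less money always yields a better score, which addresses Problem 1:

```python
def cost_effectiveness(impact: float, cost: float) -> float:
    """Cost-effectiveness as simply impact divided by cost."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return impact / cost

# Illustrative numbers only: the same hypothetical impact at two price points.
same_impact = 100.0
frugal = cost_effectiveness(same_impact, 2_000)    # 0.05 impact per dollar
lavish = cost_effectiveness(same_impact, 200_000)  # 0.0005 impact per dollar

# Spending 100x less for the exact same outcome now strictly improves the score.
assert frugal > lavish
```

Under this formulation it is impossible for a charity to receive a worse score by spending less to achieve the same results, since the score is strictly decreasing in cost for any fixed impact.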

I also felt like this felt pretty politically motivated. Not sure if that is your intention, but paragraphs like this:

ACE's recommendations determine which animal charities receive millions of dollars in donations.[1] Thus far, we have reviewed 5 of ACE's "Top 11 Animal Charities to Donate to in 2024" and only one of them (Shrimp Welfare Project) appears to be an effective charity for helping animals. ACE's poor evaluation process leads to ineffective charities receiving recommendations, and many animals are suffering as a result.

Without any evidence feels pretty intense. ACE is kind of low hanging fruit to pick on in the EA space, so this read to me like more of that, without necessarily the evidence base to back it. Reading your report, I felt kind of like "oh, there are interesting assumptions here, would be interested to learn more", and not "ACE is doing an extremely bad job." 


What claims did we make that we did not provide evidence for? 
