
Why I Still Think AW >> GH At The Margin

Last year, I argued that Open Phil (OP) should allocate a majority of its neartermist resources to animal welfare (AW) rather than global health (GH).

Most of the critical comments still agreed that AW > GH at the margin:

  • Though Carl Shulman was unmoved by Rethink Priorities' Moral Weights Project, he's still "a fan of animal welfare work relative to GHW's other grants at the margin because animal welfare work is so highly neglected".
  • Though Hamish McDoodles thinks neuron count ratios are a better proxy for moral weight than Rethink's method, he agrees that even if neuron counts are used, "animal charities still come out an order of magnitude ahead of human charities".

I really appreciate OP for their engagement, which gave some helpful transparency about where they disagree. Like James Özden, I think it's plausible that even OP's non-animal-friendly internal estimates still imply AW > GH at the margin. (One reason to think this is that OP wrote that "our current estimates of the gap between marginal animal and human funding opportunities is…within one order of magnitude, not three", when they could have written "GH looks better within one order of magnitude".)

Even if that understanding is incorrect, given that OP agrees that "one order of magnitude is well within the 'margin of error'", I still struggle to understand the rationale behind OP funding GH 6x as much as AW. Though I appreciate OP explaining how their internal estimates differ, the details of why their estimates differ remain unknown. If GH is truly better than AW at the margin, I would like nothing more than to be persuaded of that. While I endeavor to keep an open mind, it's difficult for me and many community members to update without knowing OP's answers to the headline questions:

  • How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?
  • Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff?
  • How would OP's views have to change for OP to prioritize animal welfare in neartermism?

OP has no obligation to answer these (or any) questions, but I continue to think that a transparent discussion about this between OP and community leaders/members would be deeply valuable. This Debate Week, the EA Leaders Forum, 80k's updated rankings, and the Community Survey have made it clear that there's a large gap between the community consensus on GH/AW allocation and OP's. This is a question of enormous importance for millions of people and trillions of animals. Anything we can do to get this right would be incredibly valuable.

Responses to Objections Not Discussed In Last Year's Post

Could GH > AW When Optimizing For Reliable Ripple Effects?

Richard Chappell has argued that while "animal welfare clearly wins by the lights of pure suffering reduction", GH could be competitive with AW when optimizing for reliable ripple effects like long-term human population growth or economic growth.

AW Is Plausibly More Robustly Good Than GH's Ripple Effects

I don't think it's obvious that human population growth or economic growth are robustly good. Historically, these ripple effects have had even larger effects on farmed and wild animal populations:

  • Humanity-caused climate change and land use have contributed to a loss of 69% of wildlife since 1970.
  • The number of farmed fish has increased by nearly 10x since 1990.
  • Brian Tomasik has estimated that each dollar donated to AMF prevents 10,000 invertebrate life-years by reducing invertebrate populations.

Trying to account for all of these AW effects makes me feel rather clueless about the long-term ripple effects of GH interventions. In contrast, AW interventions such as humane slaughter seem more likely to me to be robustly good. While humane slaughter may slightly reduce demand for meat due to increased meat prices, it is unlikely to affect farmed or wild animal populations nearly as much as economic growth or human population growth would.

Implications of Optimizing for Reliable Ripple Effects in GH

Vasco Grilo points out that longtermist interventions like global priorities research and improving institutional decisionmaking seem better for reliable long-term ripple effects than GiveWell Top Charities. It would be surprising if the results of GiveWell's process, which optimizes for the cheapest immediate QALYs, lives saved, and income doublings, also had the best long-term ripple effects.

Rohin Shah suggests further implications of optimizing for reliable ripple effects:

  1. Given an inability to help everyone, you'd want to target interventions based on people's future ability to contribute. (E.g. you should probably stop any interventions that target people in extreme poverty.)
  2. You'd either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)
  3. You'd want to invest more in education than would be suggested by typical metrics like QALYs or income doublings.

I think it's plausible that some AW causes, such as moral circle expansion, could also rank high on the rubric of reliable ripple effects.

In summary, it seems that people sympathetic to Richard's argument should still be advocating for a radical rethinking of almost all large funders' GH portfolios.

What if I'm a Longtermist?

Some of my fellow longtermists have framed this discussion as a debate over which of GH or AW is best for the long-term future. Framed this way, the question collapses into comparing longtermist interventions that could be characterized as GH with those that could be characterized as AW.

This doesn't seem like a useful discussion if the debate participants would all privately prefer that the $100M simply be allocated to unrestricted longtermism.

Instead, I think we would all learn more from the debate if it were instead framed within the context of neartermism. Like OP and the Navigation Fund, I think there are lots of reasons to allocate some of our resources to neartermism, including worldview diversification, cluelessness, moral parliament, risk aversion, and more. If you agree, then I think it would make more sense to frame this debate within neartermism, because that's likely what determines each of our personal splits between our GH and AW donations.

Comments

I upvoted this at first, then changed my mind and downvoted because I find the argument below pretty chilling. Maybe some clarification is needed?

"I don't think it's obvious that human population growth or economic growth are robustly good. Historically, these ripple effects have had even larger effects on farmed and wild animal populations:"

On a surface level, arguing that economic growth is bad seems problematic. If we assume that economic growth is the best way out of large-scale poverty and human suffering (as it seems to be), does this mean, on a basic level, that you would favour keeping billions of humans in this state in order to minimise animal suffering? Real question @Ariel Simnegar 🔸.

Looking a little deeper, arguing that growth is bad for animal welfare reasons also seems unfair. High-income countries have already benefited hugely from economic growth, and through that process we caused the climate change and mass suffering of animals you speak of. For us, the mega rich with our disposable cash, to turn around after messing these things up and say we should now deny other humans the opportunity to grow and develop while we focus for the moment on reducing animal suffering that we ourselves caused seems grossly unfair even if it makes some utilitarian sense. It sends shivers down my spine.

"Trying to account for all of these AW effects makes me feel rather clueless about the long-term ripple effects of GH interventions. In contrast, AW interventions such as humane slaughter seem more likely to me to be robustly good."

I like the argument that EA should spend the next 100 million on improving animal welfare, because that might be the best marginal use of money right now given how much both the world in general and EA neglect it. But I really don't like any argument against the growth and wellbeing of poor countries and poor people based on priors that growth might well mean more factory farming. I think it's gross.

Hey Nick, thanks for the thoughtful comment! I'm going to answer your question, but I'll start with a bunch of caveats :)

  1. On first-order effects on human wellbeing, I think economic growth is obviously incredibly good.
  2. Even when including knock-on effects on farmed animals, wild animals, and the far future, I'd still bet on economic growth being good, with far higher uncertainty. I am very pro economic growth, but like you, I just think the marginal $100m from the debate question would be better spent on animal welfare.
  3. $100m represents 0.02% of the US's annual philanthropic budget. If we were debating allocating trillions of dollars, I would categorically not go all-in on animal welfare, and would consider it a given that we should use much of that to alleviate global poverty and encourage economic growth.

It seems to me that there are two places where you find the argument alienating. I'll address them one by one, and then I'll answer your question.

Fairness

I agree that it's unfair that people from less fortunate countries have been left behind. But I feel like the argument that "it's not fair to those in poverty if we donate to alleviating factory farming, which those in developed countries primarily cause", is similar to saying "it's not fair to those in poverty if we donate to alleviating climate change, which those in developed countries primarily cause". We don't have to go all-in on one cause and not support any other! These are all important problems which deserve some of our resources.

Making Tradeoffs

There was a time when I too thought it would be absurd to allocate money to animals when we could be helping the economically worst-off humans. But for a thought experiment, imagine that for every dollar of world GDP, there's a person being tortured right now. I think that would in and of itself be sufficient to make economic growth a bad thing: surely adding a dollar to world GDP can't be worth causing an additional person to be tortured.

So there is some amount of suffering which, if we knew economic growth caused it, would be enough for economic growth to be a bad thing. If we agree on that, then the debate reduces to whether these animal effects could plausibly be enough for that. At first, my gut instinct was a "hell no". But then I watched Dominion, a documentary I recommend to you, Nick, if you'd like to learn more about the horrors of factory farming.

When I think about the fact that there are trillions of animals, perhaps thousands for each one of us humans, who are suffering horribly in factory farms right now because of us, I feel an enormous moral weight. And our economic growth has indeed contributed to that suffering. It's contributed to many incredibly good things too, such that I'm not sure about its overall sign. But I now think the burden of this suffering is sufficiently weighty to potentially play a pivotal role in the net effect of economic growth.

So to answer your question, we don't yet know enough, and it depends on the specifics. But I am willing to say that there's some amount of animal suffering for which I would be willing to stall economic growth, if we knew all of the relevant details. And I don't think it's obvious that current levels of animal suffering today are below that threshold.

I get the idea here, but I still think this is a dangerous and disturbing line of argument. There are so many ways we can reduce animal suffering while still encouraging economic growth that lifts people out of poverty.

I just don't buy that "economic growth" in particular causes animal suffering, so I don't agree on that. It's not written in the stars that factory farming has to accompany growth. There are worlds where things could be different. Enlightened high-income countries could make aid dependent on no factory farming. Local movements could rise up, passionate about the issue, and stop the factory-farming transition. Sure, these things are unlikely, but they are far from implausible.

I also think that, to maintain integrity here, people could consider putting a lot of their money (and perhaps their entire life direction) where their mouth is by donating a lot towards preventing the transition to factory farming in developing countries, or even moving there and fighting for it themselves.

I'm of the (probably unpopular) school of thought that if any human is willing to hurt other, worse-off humans in order to achieve a particular goal (in this case reducing animal suffering), they should be willing to sacrifice a lot themselves to achieve it.

I understand the concern about wondering whether growth is actually good since it allows a large expansion of factory farming. It can seem gross indeed, and unfair.

But given the terrible amount of suffering that factory farming allows - and the simple fact that animals are much more numerous than humans - I don't think we can rule out the possibility that the positive effects of growth are negated by the suffering caused to other beings.

It is an uncomfortable question. I really don't like asking myself this. But to put it in other terms, any action that leads to putting billions of beings in cages so small they can barely turn around goes a long way toward offsetting any other positive aspects.

I'm not sure in what terms this topic should be debated. Obviously it would be better if growth could happen without causing this suffering. But running the calculations, the negative aspects of growth are just very strong (although impacts on wild animal suffering make it unclear).

I can't speak for OP but I thought the whole point of its "worldview diversification buckets" was to discourage this sort of comparison by acknowledging the size of the error bars around these kinds of comparisons, and that fundamentally prioritisation decisions between them are influenced more by different worldviews rather than the possibility of acquiring better data or making more accurate predictions around outcomes. This could be interpreted as an argument against the theme of the week and not just this post :-)

But I don't think neuron counts are by any means the most unfavourable [reasonable] comparison for animal welfare causes: the heuristic that we have a decent understanding of human suffering and gratification whereas the possibility a particular intervention has a positive or negative or neutral impact on the welfare of a fish is guesswork seems very reasonable and very unfavourable to many animal related causes (even granting that fish have significant welfare ranges and that hedonic utilitarianism is the appropriate method for moral resource allocation). And of course there are non-utilitarian moral arguments in favour of one group of philanthropic causes or another (prioritise helping fellow moral beings vs prioritise stopping fellow moral beings from actively causing harm) which feel a little less fuzzy but aren't any less contentious.

There are also of course error bars around individual causes within the buckets, which is part of the reason why GHW funds both GiveWell-recommended charities and neartermist policy work that might affect more organism life-years per dollar than Legal Impact for Chickens (but might actually be more likely to be counterproductive or ineffectual)[1]. That's another reason why I think blanket comparisons are unhelpful.

A related issue is that it's much more difficult to estimate the marginal impact of research and policy work than of dispensing medicine or nets. The marginal impact of $100k more in nets is easy to predict; the marginal impact of $100k more to a lobbying organization is not, even if you entirely agree with the moral weight they apply to their cause. And average cost-effectiveness is not always a reliable guide to scaling up funding, particularly not for small, scrappy organizations doing an admirable job of prioritising quick wins, which are also likely to face increased opposition if they scale.[2] Some organizations fitting that bill are in the GHW category, but it's much more representative of the typical EA-incubated AW cause. Some of them will run into diminishing returns as they run out of companies actually willing to engage with their welfare initiatives, others may become locked in positional stalemates, and some are much more capable than others of absorbing significant extra funding and putting it to good use. Past performance really doesn't guarantee future returns to scale, and some types of organization are much more capable of achieving them than others, which happens to include many of the classic GiveWell-type GHW charities and not many of the AW or speculative "ripple effect" GHW charities.[3]

I guess there are sound reasons why people could conclude that AW causes funded by OP were universally more effective than GHW ones or vice versa, but those appear to come more from strong philosophical positions (meat eater problems or disagreement with the moral relevance of animals) than evidence and measurement. 

  1. ^

    For the avoidance of doubt, I'm acknowledging that there's probably more evidence about the negative welfare impacts of the practices Legal Impact for Chickens is targeting, and about their theory of change, than about the positive welfare impacts and efficacy of some reforms promoted in the GHW bucket, even given my much higher level of certainty about the significance of the magnitude of human welfare. And by extension I'm pointing out that comparisons between individual AW and GHW charities sometimes run the opposite way from the characteristic "AW helps more organisms but with more uncertainty" pattern.

  2. ^

    There are much more likely to be well-funded campaigns to negate the impact of an organization targeting factory farming than of campaigns against malaria. Though on the other hand, animal cruelty doesn't have as many proponents as the other side of virtually any economic or institutional reform debate.

  3. ^

    There are diminishing returns to healthcare too: malaria nets' cost-effectiveness is broadly proportional to malaria prevalence. But that's rather more predictable than the returns to scale of anti-cruelty lobbying, which aren't even necessarily positive beyond a certain point if the well-funded meat lobby gets worried enough.

Thanks for the comment, David.

I can't speak for OP but I thought the whole point of its "worldview diversification buckets" was to discourage this sort of comparison by acknowledging the size of the error bars around these kinds of comparisons, and that fundamentally prioritisation decisions between them are influenced more by different worldviews rather than the possibility of acquiring better data or making more accurate predictions around outcomes. This could be interpreted as an argument against the theme of the week and not just this post :-)

The necessity of making funding decisions means interventions in animal welfare and in global health and development are compared at least implicitly. I think it is better to make those comparisons explicit, for reasoning transparency, and to have discussions which could eventually lead to better decisions. Saying there is too much uncertainty and nothing we can do will not move things forward.

the possibility a particular intervention has a positive or negative or neutral impact on the welfare of a fish is guesswork seems very reasonable and very unfavourable to many animal related causes

What do you think about humane slaughter interventions, such as the electrical stunning interventions promoted by the Centre for Aquaculture Progress? "Most sea bream and sea bass today are killed by being immersed in an ice slurry, a process which is not considered acceptable by the World Organisation for Animal Health". "Electrical stunning reliably renders fish unconscious in less than one second, reducing their suffering". Rough analogy, but a human dying in an electric chair suffers less than one dying in a freezer?

Relatedly, I estimated the Shrimp Welfare Project’s Humane Slaughter Initiative is 43.5 k times as cost-effective as GiveWell's top charities. I would be curious about which changes to the parameters you would make to render the ratio lower than 1.

there are non-utilitarian moral arguments in favour of one group of philanthropic causes or another (prioritise helping fellow moral beings vs prioritise stopping fellow moral beings from actively causing harm) which feel a little less fuzzy but aren't any less contentious.

Why should one stop at the level of helping people in low income countries (via global health and development interventions)? Family and friends are closer to us, and helping strangers in far away countries is way more contentious than helping family and friends. Does this mean Dustin Moskovitz and Cari Tuna (the funders of Open Philanthropy) should direct most of their resources to helping their families and friends? It is their money, so they decide, but I am glad they are using the money more cost-effectively.

I guess there are sound reasons why people could conclude that AW causes funded by OP were universally more effective than GHW ones or vice versa, but those appear to come more from strong philosophical positions (meat eater problems or disagreement with the moral relevance of animals) than evidence and measurement. 

One does not need to worry about the meat eater problem to think the best animal welfare interventions are way more cost-effective than the best in global health and development. Neglecting that problem, I estimated corporate campaigns for chicken welfare are 1.51 k times as cost-effective as GiveWell's top charities, and Shrimp Welfare Project's Humane Slaughter Initiative is 43.5 k times as cost-effective as GiveWell's top charities.

Thanks for sharing both the original and this version of the argument!

I realize this is basically an aside and doesn't really affect your bottom line, but I don't think you can draw this inference:

Humanity-caused climate change and land use have contributed to a loss of 69% of wildlife since 1970.

Quoting Our World In Data:

the LPI does not tell us the number of species, populations or individuals lost; the number of extinctions that have occurred; or even the share of species that are declining. It tells us that between 1970 and 2018, on average, there was a 69% decline in population size across the 31,821 studied populations.

This paper also argued the methodology is systematically biased downwards, but I haven't evaluated it.

The LPI indicates that vertebrate populations have decreased by almost 70% over the last 50 years. This is in striking contrast with current studies based on the same population time series data that show that increasing and decreasing populations are balanced on average. Here, we examine the methodological pipeline of calculating the LPI to search for the source of this discrepancy. We find that the calculation of the LPI is biased by several mathematical issues which impose an imbalance between detected increasing and decreasing trends and overestimate population declines.

Your initial post claimed that RP thought AW was 1000x more effective than GHD. I just thought I'd flag that in their subsequent analyses, they have reported much lower numbers. In this report (if I'm reading it right), they put a chicken campaign at ~1200 and AMF at ~20, a factor of 60, much lower than 1000x, disagreeing greatly with Vasco's analysis which you linked. (All of these use Saulius's numbers and RP's moral weights.)

If you go into their cross cause calculator, the GiveWell bar is ~20 while the generic chicken campaign gives ~700 with default parameters, making AW only 35 times as effective as GHD.

I've been attempting to replicate the results in the comments of this post, and my number comes out higher, at ~2100 vs ~20, making AW about 100 times as effective. Again, these all use Saulius's report and RP's moral weights: if you disagree with those substantially, GHD might come out ahead.
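For reference, the implied AW/GHD multipliers from the numbers quoted in this thread can be tabulated. This is only a sketch: the cost-effectiveness figures (in DALY-equivalents averted per $1000) are the commenters' own estimates, not independently verified.

```python
# Implied AW/GHD cost-effectiveness multipliers from the figures quoted
# in this thread (all in DALY-equivalents averted per $1000; these are
# the commenters' estimates, not canonical numbers).
ghd_bar = 20  # GiveWell top charities / AMF

aw_estimates = {
    "RP report (chicken campaign)": 1200,
    "RP cross cause calculator (defaults)": 700,
    "replication attempt in the comments": 2100,
}

for label, aw in aw_estimates.items():
    print(f"{label}: ~{aw / ghd_bar:.0f}x GHD")
```

All three multipliers land between roughly 35x and 105x, which is the "order of magnitude ahead, but well short of 1000x" picture described above.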

Hi! As you point out, the 1000x multiplier I quoted comes from Vasco's analysis, which also uses Saulius's numbers and Rethink's moral weights.

The cross cause calculator came out about two weeks before I published my initial post. By then, I'd been working on that post for about seven months. Though checking the calculator's implied multiplier before posting would have been a good idea, given my urge to get the post published, I didn't consider it.

I've just spent some time trying to figure out where the discrepancy between Vasco's multiplier and the cross cause calculator's multiplier comes from:

  • They roughly agree on the GHD bar of ~20 DALYs per $1000.
  • Fixing a constant welfare range versus a probabilistic range doesn't seem to make a huge difference for the calculator's result.
  • The main difference seems to be that the cross cause calculator assumes corporate campaigns avert between 160 and 3.6k chicken suffering-years per dollar. I don't know the precise definition of that unit, and Vasco's analysis doesn't place intermediate values in terms of that unit, so I don't know exactly where the discrepancy breaks down from there. However, there's probably at least an order of magnitude difference between Vasco's implied chicken suffering-years per dollar and the cross cause calculator's.

My very tentative guess is that this may be coming from Vasco's very high weightings of excruciating and disabling-level pain, which some commenters found unintuitive, and could be driving that result. (I personally found these weightings quite intuitive after thinking about how I'd take time tradeoffs between these types of pains, but reasonable people may disagree.)

It could also be that Rethink is using a lower Saulius number to give a more precise marginal cost-effectiveness estimate, even if the historical cost-effectiveness was much higher. That would be consistent with Open Phil's statement that they think the marginal cost-effectiveness of corporate campaigns is much lower than the historical average.

I think this is a great find, and I'm very open to updating on what I personally think the animal welfare vs GHD multiplier is, depending on how that discrepancy breaks down. I do think it's worth noting that every one of these comparisons still found animal welfare orders of magnitude better than GHD, which is the headline result I think is most important for this debate. But your findings do illustrate that there's still a ton of uncertainty in these numbers.

(@Vasco Grilo🔸 I'd love to hear your perspective on all of this!)

Thanks for the discussion, titotal and Ariel!

I have played around with Rethink Priorities' (RP's) cross-cause cost-effectiveness model (CCM), but I have not been relying on its results. The app does not provide any justification for the default parameters, so I do not trust these.

titotal, I would be curious to know which changes you would make to my cost-effectiveness estimates of corporate campaigns for chicken welfare (1.51 k times as cost-effective as GiveWell's top charities) and Shrimp Welfare Project's Humane Slaughter Initiative (HSI; 43.5 k times as cost-effective as GiveWell's top charities) to make them worse than that of GiveWell's top charities.

They roughly agree on the GHD bar of ~20 DALYs per $1000.

The CCM says GiveWell's bar is 0.02 DALY/$ (as above), but I think it is around 0.01 DALY/$. According to Open Philanthropy, “GiveWell uses moral weights for child deaths that would be consistent with assuming 51 years of foregone life in the DALY framework (though that is not how they reach the conclusion)”. GiveWell's top charities save a life for around 5 k$, so their cost-effectiveness is around 0.01 DALY/$ (= 51/(5*10^3)). Am I missing something, @Derek Shiller?
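Written out, this estimate is a single division, using the figures quoted above (51 DALYs per child death averted, ~$5k per life saved):

```python
# GiveWell's implied cost-effectiveness in DALYs per dollar, using the
# figures quoted above: 51 DALYs of foregone life per child death
# (per Open Philanthropy's description of GiveWell's moral weights)
# and roughly $5k per life saved.
dalys_per_death = 51
cost_per_life_usd = 5_000

dalys_per_dollar = dalys_per_death / cost_per_life_usd
print(dalys_per_dollar)  # 0.0102, i.e. roughly 0.01 DALY/$
```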

My very tentative guess is that this may be coming from Vasco's very high weightings of excruciating and disabling-level pain, which some commenters found unintuitive, and could be driving that result. (I personally found these weightings quite intuitive after thinking about how I'd take time tradeoffs between these types of pains, but reasonable people may disagree.)

Yes, I think this is a big part of it. From RP's report on How Can Risk Aversion Affect Your Cause Prioritization? (published in November 2023):

  • 1 year of annoying pain = 0.01 to 0.02 DALYs
  • 1 year of hurtful pain = 0.1 to 0.25 DALYs
  • 1 year of disabling pain = 2 to 10 DALYs
  • 1 year of excruciating pain = 60 to 150 DALYs
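These ranges can be collapsed to point estimates by taking the geometric mean of each, which is the approach taken in the next paragraph. A minimal sketch:

```python
import math

# Geometric means of RP's pain-intensity ranges (DALYs per year of pain),
# taken from the list above.
ranges = {
    "annoying": (0.01, 0.02),
    "hurtful": (0.1, 0.25),
    "disabling": (2, 10),
    "excruciating": (60, 150),
}
geo = {name: math.sqrt(lo * hi) for name, (lo, hi) in ranges.items()}
for name, value in geo.items():
    print(f"{name}: {value:.3g} DALYs/year")

# With excruciating pain at ~94.9 DALYs/year, the minutes of excruciating
# pain that would neutralise one fully healthy day:
print(24 * 60 / geo["excruciating"])  # ~15.2 minutes
```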

Using the geometric mean of each of the ranges, I conclude HSI is 48.8 times as cost-effective as GiveWell's top charities, i.e. 0.112 % (= 48.8/(43.5*10^3)) as high as originally. I think RP's assumptions underestimate the badness of severe pain. If 1 year of excruciating pain is equivalent to 94.9 DALY (= (60*150)^0.5), 15.2 min (= 24*60/94.9) of excruciating pain neutralise 1 day of fully healthy life, whereas I would say adding this much pain to a fully healthy life would make it clearly negative. Here is how the Welfare Footprint Project defines excruciating pain (emphasis mine):

All conditions and events associated with extreme levels of pain that are not normally tolerated even if only for a few seconds. In humans, it would mark the threshold of pain under which many people choose to take their lives rather than endure the pain. This is the case, for example, of scalding and severe burning events. Behavioral patterns associated with experiences in this category may include loud screaming, involuntary shaking, extreme muscle tension, or extreme restlessness. Another criterion is the manifestation of behaviors that individuals would strongly refrain from displaying under normal circumstances, as they threaten body integrity (e.g. running into hazardous areas or exposing oneself to sources of danger, such as predators, as a result of pain or of attempts to alleviate it). The attribution of conditions to this level must therefore be done cautiously. Concealment of pain is not possible.

The global healthy life expectancy in 2021 was 62.2 years, so maybe one can roughly say that a child taking their life due to excruciating pain would lose 50 years of fully healthy life. Under my assumptions, 0.864 s of excruciating pain neutralises 1 day of fully healthy life, so 4.38 h (= 0.864*50*365.25/60^2) of excruciating pain neutralise 50 years of fully healthy life. However, I guess many people take their lives (if they can) after a few seconds (not hours) of excruciating pain. So, even if people should endure excruciating pain a few orders of magnitude longer to maximise their own welfare, my numbers could still make sense: 4.38 h is 5.26 k (= 4.38*60^2/3) times as long as 3 s (a few seconds). One complication is that people may be maximising their welfare in taking their lives, because excruciating pain quickly decreases their remaining healthy life expectancy, such that there is a decreased opportunity cost of taking their lives.
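The arithmetic in that paragraph can be checked directly. The inputs (0.864 s of excruciating pain per fully healthy day, 50 years of foregone healthy life, and "a few seconds" taken as 3 s) are the assumptions stated above, not independently established figures.

```python
# Consistency check on the paragraph above, using its stated assumptions:
# 0.864 s of excruciating pain neutralises 1 fully healthy day, and a
# death from excruciating pain forgoes ~50 years of fully healthy life.
seconds_per_healthy_day = 0.864
healthy_years_lost = 50

hours_to_neutralise = seconds_per_healthy_day * healthy_years_lost * 365.25 / 60**2
print(hours_to_neutralise)  # ~4.38 hours of excruciating pain

# Ratio of that duration to "a few seconds" (taken as 3 s):
print(hours_to_neutralise * 60**2 / 3)  # ~5.26k
```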

I think this is a great find, and I'm very open to updating on what I personally think the animal welfare vs GHD multiplier is, depending on how that discrepancy breaks down. I do think it's worth noting that every one of these comparisons still found animal welfare orders of magnitude better than GHD, which is the headline result I think is most important for this debate. But your findings do illustrate that there's still a ton of uncertainty in these numbers.

Agreed!
