When you comment on your vote on the debate week banner, your comment will appear on this thread. Use this thread to respond to other people's arguments, and discuss the debate topic. 

You should also feel free to leave top-level[1] comments here even if you haven't voted. As a reminder, the statement is "It would be better to spend an extra $100m on animal welfare than on global health"

If you're browsing this thread, consider sorting by "New" and interacting with posts that haven't been voted or commented on yet. There are a lot of comments!
Also, perhaps don't vote karma below zero for low-effort submissions; we don't want to discourage low-effort takes on the banner.

  1. ^ The first comment in a thread is a top-level comment.


A few theses that may turn into a proper post:
 

1. Marginal animal welfare cost-effectiveness seems to robustly beat global health interventions. It may look more like 5x or 1000x, but it is very hard indeed to get that number below 1 (I do think both are probably in fact good ex ante at least, so I think the number is positive).

To quote myself from this comment:


@Laura Duffy's recently published risk aversion analysis (for Rethink Priorities) basically does a lot of the heavy lifting here (bolding mine):

Spending on corporate cage-free campaigns for egg-laying hens is robustly[8] cost-effective under nearly all reasonable types and levels of risk aversion considered here. 

  1. "Using welfare ranges based roughly on Rethink Priorities’ results, spending on corporate cage-free campaigns averts over an order of magnitude more suffering than the most robust global health and development intervention, Against Malaria Foundation. This result holds for almost any level of risk aversion and under any model of risk aversion."

2. The difference in magnitude of cost effectiveness (under any plausible understanding of what that means) between MakeAWish (or personal consumption spendi…

Marginal animal welfare cost effectiveness seems to robustly beat global health interventions. ... Using welfare ranges based roughly on Rethink Priorities’ results

I don't think this is as robust as it seems. One could easily have moral weights many orders of magnitude away from RP's. For example, if you value one human more than the population of one beehive, that's three orders of magnitude lower than what RP gives.
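The sensitivity being described here can be made concrete with a quick back-of-the-envelope sketch. All the numbers below are hypothetical placeholders I've chosen for illustration, not RP's or GiveWell's actual estimates; the point is only how the cross-cause multiplier scales with the assumed moral weight:

```python
# Hypothetical BOTEC: how the animal-vs-human cost-effectiveness multiplier
# scales with the assumed moral weight of one animal relative to one human.
# All inputs are placeholders, not real charity estimates.

def multiplier(moral_weight, animal_years_per_dollar, human_years_per_dollar):
    """Weighted animal welfare per dollar divided by human welfare per dollar."""
    return (moral_weight * animal_years_per_dollar) / human_years_per_dollar

# Placeholder inputs: 10 animal-years improved per dollar for an animal
# charity vs 0.01 human-equivalent years per dollar for a health charity.
for weight in (0.1, 1e-3, 1e-6):
    print(f"moral weight {weight:g}: multiplier {multiplier(weight, 10, 0.01):g}")
```

Shifting the assumed moral weight by three orders of magnitude shifts the multiplier by the same three orders of magnitude, which is why beehive-style disagreements about weights can flip the conclusion either way.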

CB🔸
The question is, how do you generate these weights otherwise? The issue is, the way I've seen most people do it is basically to go, "the conclusion that animals have a similar capacity for pain to humans feels wrong, so, hm, let's say they morally weigh 1,000 or 10,000 times less". It's often conveniently in the range where people don't have to change their behavior on the topic. I'm skeptical of that. For most people, the beehive example invokes a response close to "oh, this feels wrong, so the conclusion must be wrong". They don't consider the option "wow, despite being small, maybe bees have a capacity to feel love, and pleasure when they find flowers and make honey and dance, and pain when their organs are destroyed by pesticides", which may be just as likely. Comparatively, RP's work is the most complete work I've seen on this topic.
David Mathers🔸
Bees feel like an easy case for thinking RP might be wildly wrong in a way that doesn't generalise to all animal interventions, since bees might not be conscious at all, whereas it's much less likely that pigs or even chickens aren't. (I'm actually a bit more sympathetic to pigs not being conscious than most people are, but I still think it's >50% likely that they are conscious enough to count as moral patients.)
Angelina Li
"So it is more important to convince someone to give to e.g. the EA animal welfare fund if they were previously giving to AMF than to convince a non-donor to give that same amount of money to AMF." I hadn't considered this idea before, am interested in you writing something up here! I'm a bit confused how tractable it is to shift donors from AMF -> AW fund versus [Other charity] -> AMF, but my intuition is the first might be fairly tractable.
Ariel Simnegar 🔸
I've run into a similar dilemma before, where I'm trying to convince non-EAs to direct some of their charity to AMF rather than their favorite local charity. I believe animal welfare charities are orders of magnitude more cost-effective than AMF, so it's probably higher EV to try to convince them to direct that money to e.g. THL rather than AMF. But that request is much less likely to succeed, and could also alienate them (because animal welfare is "weird") from making more effective donations in the future. Curious about your thoughts on the best way to approach that.
CB🔸
Another option, if they're sensitive to environmental issues, is to redirect them to charities that are also impactful for sustainability, such as The Good Food Institute. According to Giving Green's best guess, they can avert 17 tons of CO2eq for $50. This way, they can make a positive contribution to the environment (not to mention the positive impact on human health and pandemic risk). I've done this for a charity that does similar work in my country, and at the very least people didn't give any pushback and seemed receptive. You can mention concrete progress on alternative proteins, like their becoming the default choice at Burger King.
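For context, the implied cost per tonne is easy to compute (a quick sketch using only the $50 and 17-tonne figures quoted above):

```python
# Implied cost per tonne from the Giving Green best guess quoted above:
# $50 averts 17 tonnes of CO2-equivalent.
cost_usd = 50
tonnes_averted = 17
cost_per_tonne = cost_usd / tonnes_averted
print(f"${cost_per_tonne:.2f} per tonne CO2eq")  # roughly $3 per tonne
```

That is, roughly $3 per tonne of CO2eq averted, a number that may be useful when pitching environmentally minded donors.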
Jason
I have a sense that there could be a mutually beneficial trade between cause areas lurking in this kind of situation, but it would be tricky to pull off as a practical matter. One could envision animal-welfare EAs nudging non-EA donors toward GiveWell-style charities when they feel that is the highest-EV option with a reasonable probability of success, and EA global-health donors paying them a "commission" of sorts by counterfactually switching some smaller sum of their own donations from GH to AW. In addition to challenges with implementation, there would be a potential concern that not as much net money is going to GH as the non-EA donor thinks. On the other hand, funging seems to be an almost inevitable part of the charitable landscape, whether it is being done deliberately or not.
Ben Millwood🔸
Yeah, this seems a little... sneaky, for want of a better word. It might be useful to imagine how you think the non-EA donors would feel if the "commission" were proactively disclosed. (Not necessarily terribly! After all, fundraising is often a paid job. Just seems like a useful intuition prompt.)
Stijn
"So it is more important to convince someone to give to e.g. the EA animal welfare fund if they were previously giving to AMF than to convince a non-donor to give that same amount of money to AMF." More generally, I think it is more important to convince an EA human health and development supporter to diversify and donate say 50% of the donation budget to the most effective animal welfare causes, than to convince a non-EA human charity supporter to diversify and donate say 50% of the donation budget to AMF or similar high-impact human-focused charities.

Disclaimer: I'm funded by EA for animal welfare work.

Some thoughts:

a. So much of the debate feels like a debate over identities and values. I'd really love to see people nitpicking the technical details of cost-effectiveness estimates instead.

b. I think it's worth remembering that animal welfare interventions are less cost-effective than they were when Simcikas conducted his analysis.

c. I generally feel much more comfortable standing behind GiveWell's estimates, but GiveWell doesn't analyse the cost-effectiveness of advocacy work. My biggest misgivings about cost-effectiveness estimates are due to the difficulty of assessing advocacy work. I think we should make a lot more progress on this.

d. People seem to keep forgetting that uncertainty cuts both ways. If the moral worth of animals is too uncertain, that is also a reason against confidently dismissing them.

e. I don't think we have made much progress on the question of "How important is the cage to cage-free transition for a chicken in terms of human welfare?". I don't think Rethink Priorities' welfare ranges answer that question. In general I'm confused about the approach of trying to find overall welfare capacities of different species…

Vasco Grilo🔸
Nice points, Emre! Uncertainty also means a higher cost-effectiveness of animal welfare research which tries to decrease the uncertainty, given the high value of information.
Mo Putera
Admittedly I haven't been following work on animal welfare cost-effectiveness analysis closely, but this is news to me; can you point me to further readings on this? I agree with the need for the latter; I'm thinking in particular of Animal Ask's systematic review finding "insufficient evidence to break down overall policy success into the baseline rate of success and the counterfactual impact of lobbying". I default to the evaluative framework in Founders Pledge's guide to evaluating policy advocacy organisations but would be keen to learn how to improve upon it. Re: the former, here are some GiveWell policy advocacy-related CEAs:
* 2017 CEA of the Centre for Pesticide Suicide Prevention (grant writeup, 2018 blog post where they explained their reasoning in considering policy advocacy orgs in general), and 2021 skeleton BOTEC of the same org (grant writeup)
* 2021 BOTEC of Vital Strategies (grant writeup) to reduce harms of excessive alcohol consumption in LMICs
For both BOTECs GiveWell explicitly mentioned that they rely "on a number of difficult best-guess assumptions and judgment calls about modeling structure. It therefore contains less information value than cost-effectiveness estimates for our top charities, which limits its comparability", so I'm not sure you'd feel as comfortable standing behind these estimates as with the top charity CEAs. And none of the models address the counterfactual estimation issue Animal Ask identified, again at a quick skim; correct me if I'm wrong on this. (None of this changes my general sense that funding top animal welfare interventions is more cost-effective on the margin than GHW.)
emre kaplan🔸
This seems to be a representative publicly available estimate from 4 years ago by Lewis Bollard: "This is a major question for us, and one we continue to research. Our current very rough estimate is that our average $ spent on corporate campaigns and all supporting work (which is ~40% of our total animal grant-making) achieves the equivalent of ~7 animals spared a year of complete suffering. We use this as a rough benchmark for BOTECs on new grants, and my best guess is this reflects roughly the range we should hope for the last pro-animal dollar." I think several more up-to-date estimates will be available soon. For advocacy evaluation, a concrete area for improvement is the following. Saulius's analysis has a really nice section titled "Ways this estimate could be misleading". Other advocates cite concerns similar to those when they argue against corporate welfare campaigns. They usually don't have empirical evidence, but I don't have super strong evidence to show them wrong either. I'm not very happy about that.
Mo Putera
Thanks for the pointers, much appreciated.  What did you think of the GiveWell policy advocacy CEAs & BOTECs I linked? I shared them in response to your "...but Givewell doesn't analyse cost-effectiveness of advocacy work" so I wondered if you had a different take.
emre kaplan🔸
I appreciate the correction. When I said "I generally feel much more comfortable standing behind GiveWell's estimates", that was for their main page recommendations. I currently won't prioritise reviewing these BOTECs in detail in the short term, but as a future exercise I will look into the linked analyses and compare them to animal welfare ones.

In the abstract I think this would be good, but I'm skeptical that there are great opportunities in the animal space that can absorb this much funding right now! This is like, doubling the EA funds going to animal welfare stuff. I think I would strongly agree with claims like:

  • Conditional on there being several years of capacity build up, animal welfare would use the funds more effectively.
  • From a pure EA lens, some animal welfare spending is many times more cost-effective than the most effective global health interventions.
  • The current most effective $100M spent on animal welfare is more cost-effective than the current most effective $100M spent on global health.

I think something that would be closer to 50/50 for me (I haven't actually thought about it much, but on its face it seems closer to a midpoint):

  • It would be better to invest an extra $100M to spend on animal welfare in the future than spending it on global health now.

I'd strongly disagree with a claim like:

  • It would be better to spend an extra $100M in the next two years on animal welfare than on global health.

So I listed myself as strongly agreeing, but with all these caveats.

The footnote says that the money can be spent "over any time period", so I think this would allow for several years of more capacity buildup and research to spend this effectively. 

Given this clarification, I think the claim should be close to something you agree with, if I understood correctly.

abrahamrowe
Yep, I voted strongly agree from seeing that, though I wouldn't necessarily agree with the non-footnoted version without all these caveats.
MichaelStJules
What do you mean by "invest" here? Like financially, or capacity building or anything? If investing includes capacity building, shouldn't you strongly favour animal welfare (away from 50/50), consistent with the following claim? (There's also the issue of spending $100M on global health now vs spending it on global health over time or in the future, but I don't expect this to change the marginal cost-effectiveness of grants to GiveWell recommendations by >10x, unless we're going way out. Maybe there are better global health interventions that can absorb $100M over time than GiveWell recommendations, though.)
abrahamrowe
I meant it more literally: put $100M in an investment account to save for good future animal opportunities vs spending on the best global health interventions today. I'm not certain it's actually a 50/50 item, but I was trying to find a midpoint. I don't really know enough about global health work to say, but I'd guess there are some novel medical things that seem plausibly able to:
* Appear over the next few decades
* Require a lot of cash to scale up
* Be really cost-effective
Mo Putera
Do any of these megaproject suggestions change your mind? Some of them could absorb amounts of funding potentially nearing or exceeding that $100M bar just by themselves, e.g. the advance market commitments for alt proteins idea (cf. the $925M carbon removal AMC Stripe led), or subsidizing alternatives to conventionally produced meat, or funding think tanks to do policy research at scale for which we (quote) "could spend £100m+ easily on this", or funding "10+ very large RCTs/population-wide studies, especially in Asia" (many ideas in the list), or "Healthier Hens x1000" as one example of many in the list of "GiveDirectly for animals: reasonably cost-effective, massively scalable, very strong evidence-base, and almost guaranteed impact", etc. 
abrahamrowe
Not really, primarily because I don't think the animal welfare world currently has the organizational competency to do any of them successfully at that scale, and not shoot itself in the foot while doing so, with the potential exception of the advance market commitments. I don't think the existing groups have the organizational competency to handle the ~$200M they already receive well, and I think the majority of that money is already being spent in expectedly worse ways than giving to GiveWell top charities, even if the best animal stuff is incredibly cost-effective. I think that the movement could get there at some point. But if I imagine that much money going to any existing group to be spent in the next 2 years, I think it would mostly be wasted. I think many of these ideas seem feasible in the long run, and are viable candidates for what to try, though I just generally think that farmed animal welfare is significantly less tractable than wild animal welfare or invertebrate welfare in the long run, so I would rather the funds went to scaling those fields instead of farmed animal welfare. Also, it is not obvious to me that lots of these ideas will beat out global health charities, though I think blue-sky thinking is good. Also, just generally, most of those ideas don't need to be implemented at scale? E.g. Healthier Hens doesn't seem to have been able to demonstrate that it is cost-effective to donors at a small scale. Why would scaling it up 1000x go better? It seems like if these ideas could absorb $100M, many could be tried now. The one that hasn't been tried at that scale is advance market commitments, but I think the track record for alternative proteins doesn't look great in general right now, and it isn't obvious to me that R&D is the main barrier (see the margarine issues). I also generally think lots of untried ideas look good on paper, but will probably not end up being effective if tried. That doesn't mean we shouldn't try them, but I think…
MichaelStJules
FWIW, I thought some interventions they were exploring looked potentially pretty cost-effective, near the bar for marginal animal welfare work, and with a ratio of 7 years of disabling chicken pain prevented per year of waking human life saved by GiveWell recommendations. See here. Healthier Hens has since shut down, though, and CE/AIM is looking to start a keel bone fracture charity with a different and much higher leverage strategy: certifier outreach. This probably can't absorb nearly as much funding, though.
abrahamrowe
Nice, that's good to know. I was under the impression that it was a good idea but didn't get much traction.
MichaelStJules
Ah, FWIW, the ideas that looked cost-effective were not related to keel bone fractures or based on feed fortification. Their feed trial ended up going badly for the hens.
Stijn
I think there is much room for more funding of alternative protein R&D, and that it is a very cost-effective way to reduce farmed animal suffering.

Non-moderator nudge: Given that most of the comments here are created via voting on the banner, I'd like to discourage people from downvoting comments below zero just for being low effort. I think it's still useful to leave a quick note in this case, so people can see them when browsing the banner. Hopefully positive karma will still do the job of sorting really good ones to the top.

Nathan Young
I didn't realise the comments were from that initially. Thanks.
nathanhb
Also, reminder to use the agree/disagree voting for whether or not you agree with a statement. Save karma voting for whether or not you believe the comment is contributing something meaningful / unique to the overall dialogue. Don't just karma upvote a bunch of similar statements you vaguely agree with, or karma downvote things you disagree with.

The animal welfare side of things feels less truthseeking, more activist, than other parts of EA. Talk of "speciesism" that implies animals' and humans' lives are of ~equal value seems far-fetched to me. People frequently do things like taking Rethink's moral weights project (which kinda skips over a lot of hard philosophical problems about measurement and what we can learn from animal behavior, and goes all-in on a simple perspective of total hedonic utilitarianism, which I think is useful but not ultimately correct) and just treat the numbers as if they are unvarnished truth.

If I considered only the immediate, direct effects of $100m spent on animal welfare versus global health, I would probably side with animal welfare despite the concerns above.  But I'm also worried about the relative lack of ripple / flow-through effects from animal welfare work versus global health interventions -- both positive longer-term effects on the future of civilization generally, and more near-term effects on the sustainability of the EA movement and social perceptions of EA.  Going all-in on animal welfare at the expense of global development seems bad for the movement.

That's not what "speciesism" means. Speciesism isn't the view that an individual human matters more than animals; it's the view that humans matter more because they are human, and not because of some objectively important capacity. Singer, who popularized the term speciesism (though he didn't invent it), has never denied that a (typical, non-infant) human should be saved over a single animal.

Good to know! I haven't actually read "Animal Liberation" etc.; I've just seen the word a lot and assumed (from the seemingly intentional analogy to racism, sexism, etc.) that it meant "thinking humans are superior to animals (which is bad and wrong)", in the same way that racism is often used to mean "thinking Europeans are superior to other groups (which is bad and wrong)", and sexism about men > women. Thus it always felt to me like a weird, unlikely attempt to shoehorn a niche philosophical position (are nonhuman animals' lives of equal worth to humans'?) into the same kind of socially-enforced consensus whereby things like racism are near-universally condemned.

I guess your definition of speciesism means that it's fine to think humans matter more than other animals, but only if there's a reason for it (like that we have special quality X, or we have Y percent greater capacity for something, therefore we're Y percent more valuable, or because the strong are destined to rule, or whatever).  Versus it would be speciesist to say that humans matter more than other animals "because they're human, and I'm human, and I'm sticking with my tribe".

Wikipedia's page on "speciesi…

abrahamrowe
I've definitely heard speciesism used both ways, but I think it's usually used without much reference to an exact view, more as a general "vibe" (which IMO makes it a not particularly useful word). But I think people on the EA side of the animal advocacy world tend to lean more toward the "it's discriminatory to devalue animals purely because they aren't a member of the human species" definition. I'd guess that most times it's used, especially outside of EA, it's something more like the "it's discriminatory to not view all animals, including humans, as being of equal value" view, but with a lot of fuzziness around it. So I'd guess it is somewhat context-dependent on the speaker?
David Mathers🔸
Ok, maybe I was too fast to take the definition I remember from undergrad 20 years ago as the only one in use! 
AndrewDoris
I share your impression that it's often used differently in broader society and mainstream animal rights groups than it is by technical philosophers and in the EA space. I think the average person would still hear the word as akin to racism or sexism or some other -ism. By criticizing those isms, we DO in fact mean to imply that individual human beings are of equal moral value regardless of their race or sex. And by that standard, I'd be a proud speciesist, because I do think individual beings of some species are innately more valuable than others. We can split hairs about why that is - capacity for love or pain or knowledge or neuron count or whatever else we find valuable about a life - but it will still require you to come out with a multiplier for how much more valuable a healthy "normal" human is relative to a healthy normal member of other species, which would be absolutely anathema in the racial or sexual context.
akash 🔸
A few quick pushbacks/questions:
1. I don't think the perceived epistemic strength of the animal welfare folks in EA should have any bearing on this debate unless you think that nearly everyone running prominent organizations like Good Food Institute, Faunalytics, the Humane League, and others is not truth-seeking (i.e., animal welfare organizations are culturally not truth-seeking and consequently have shoddy interventions and goals).
2. To what extent do you think EA funding should be allocated based on broader social perception? I think we should near-completely discount broader social perceptions in most cases.
   1. The social perception point, which has been brought up by others, is confusing because animal welfare has broad social support. The public is negatively primed towards veganism but overwhelmingly positively so towards the general idea of not being unkind to (euphemism) farm animals.
3. "Going all-in on animal welfare at the expense of global development seems bad for the movement." I don't think this is being debated here, though. Could you elaborate on why you think if an additional $100 million were allocated to Animal Welfare, it would be at the expense of Global Health & Development (GHD)? Isn't $100 million a mere fraction of the yearly GHD budget?
Jackson Wagner
1. Yup, agreed that the arguments for animal welfare should be judged by their best proponents, and that probably the top EA animal-welfare organizations have much better views than the median random person I've talked to about this stuff. However:
   1. I don't have a great sense of the space, though (for better or worse, I most enjoy learning about weird stuff like stable totalitarianism, charter cities, prediction markets, etc, which doesn't overlap much with animal welfare), so to some extent I am forced to just go off the vibes of what I've run into personally.
   2. In my complaint about truthseekingness, I was kinda confusedly mashing together two distinct complaints: one is "animal-welfare EA sometimes seems too 'activist' in a non-truthseeking way", and another is more like "I disagree with these folks about philosophical questions". That sounds really dumb since those are two very different complaints, but from the outside they can kinda shade into each other... who's tossing around wacky (IMO) welfare-range numbers because they just want an argument-as-soldier to use in favor of veganism, versus who's doing it because they disagree with me about something akin to "experience size", or the importance of sapience, or how good of an approximation it is to linearly "add up" positive experiences when the experiences are near-identical[1]. Among those who disagree with me about those philosophical questions, who is really being a True Philosopher and following their reason wherever it leads (and just ended up in a different place than me), versus whose philosophical reasoning is a little biased by their activist commitments? (Of course one could also accuse me of being subconsciously biased in the opposite direction! Philosophy is hard...)
   3. All that is to say, I would probably consider the top EA animal-welfare orgs to be pretty truthseeking (although it's hard for me to tell for sure from the outside), but I would probably still have i…
Ben Millwood🔸
This seems like a pretty natural thing to believe, but I'm not sure I hear coverage of EA talk about the global health work a lot. Are you sure it happens? (One interesting aspect of this is that I get the impression EA GH work is often not explicitly tied to EA, or is about supporting existing organisations that aren't themselves explicitly EA. The charities incubated by Charity Entrepreneurship are perhaps an exception, but I'm not sure how celebrated they are, though I'm sure they deserve it.)
OscarD🔸
I think philosophically it could be interesting whether, if we were at 90% of neartermist EA funding going to animals, we should move it all the way to 100%, but since this is very far from reality, I think practically we don't need to think/worry much about 'going all-in on animal welfare'. I think the Rethink people were suitably circumspect about their conclusions and the assumptions they made, but yes, probably others have taken some claims out of context.
Jackson Wagner
Yeah, I wish they had clarified how many years the $100m is spread out over.  See my point 3 in reply to akash above.
Arepo
Fwiw I think total hedonic utilitarianism is 'ultimately correct' (inasmuch as that statement means anything), but nonetheless strongly agree with everything else you say.
Jackson Wagner
Excerpting from and expanding on a bit of point 1 of my reply to akash above, here are four philosophical areas where I feel like total hedonic utilitarianism (as reflected in common animal-welfare calculations) might be missing the mark:
1. Something akin to "experience size" (very well-described by that recent blog post!)
2. The importance of sapience: if an experience of suffering is happening "all on its own", floating adrift in the universe with nobody to think "I am suffering", "I hope this will end soon", etc, does this make the suffering experience worse-than, or not-as-bad-as, human suffering where the experience is tied together with a rich tapestry of other conscious experiences? Maybe it's incoherent to ask questions like this, or I am thinking about this in totally the wrong way? But it seems like an important question to me. The similarities between layers of "neurons" in image-classifying AIs, and the actual layouts of literal neurons in the human retina + optical cortex (both humans and AIs have a layer for initial inputs, then for edge-detection, then for corners and curves, then simple shapes and textures, then eventually for higher concepts and whole objects) make me think that possibly image-classifiers are having a genuine "experience of vision" (ie qualia), but an experience that is disconnected (of course) from any sense of self or sense of wellbeing-vs-suffering or wider understanding of its situation. I think many animals might have experiences that are intermediate in various ways between humans and this hypothetical isolated-experience-of-vision that might be happening in an AI image classifier.
3. How good of an approximation is it to linearly "add up" positive experiences when the experiences are near-identical? ie, are two identical computer simulations of a suffering emulated mind any worse than one simulation? what about a single simulation on a computer with double-thick wires? what about a simulation identical…
Arepo
I don't have time to reply to all of these, but I think it's worth saying, re point 1, that inasmuch as hedonism 'struggles' with this, it's because it's basically the only axiology to commit to addressing it at all. I don't consider that a weakness, since there clearly is some level of comparability between my stubbing my toe and my watching a firework. Preference utilitarianism sort of ducks around this by equivocating between whether determining a preference requires understanding the happiness its satisfaction brings (in which case it has the same problem) or whether preferences rely on some even more mysterious forces with even weirder implications. I wrote much more on this equivocation here. Also, re size specifically, he literally says size 'is closely analogous to the sense in which (if welfare is aggregable at all) one population can have more welfare than another due to its size'. It's common to joke about 'hedons', but I see no reason one should both be materialist and not expect to find some minimum physical unit of happiness in conscious entities. Then the more hedons an entity has, the sizier its happiness would be. It's also possible that we find multiple indivisible hedon-like objects, in which case the philosophy gets harder again (and at the very least, it's going to be tough to have an objective weighting between hedons and antihedons, since there's no a priori reason to assume it should be 1-to-1). But I don't think hedonists should have to assume the latter, or prove that it's not true.
1
Rían O.M
Can you point to specific cases of that happening? I haven't seen this happen before. My sense is that most people who quote Rethink's moral weights project are familiar with the limitations. Can you say more on this?

Rethink's weights unhedged in the wild: the most recent time I remember seeing this was when somebody pointed me towards this website: https://foodimpacts.org/, which uses Rethink's numbers to set the moral importance of different animals. They only link to where they got the weights in a tiny footnote on a secondary page about methods, and they don't mention any other ways that people try to calculate reference weights, or anything about what it means to "assume hedonism" or etc. Instead, we're told these weights are authoritative and scientific because they're "based on the most elaborate research to date".

IMO it would be cool to be able to swap between Rethink's weights, versus squared neuron count or something, versus everything-is-100%. As is, they do let you edit the numbers yourself, and also give a checkbox that makes everything equal 100%. Which (perhaps unintentionally) is a pretty extreme framing of the discussion!! "Are shrimp 3% as important as a human life (30 shrimp = 1 person)? Or 100%? Or maybe you want to edit the numbers to something in-between?"

I think the foodimpacts calculator is a cool idea, and I don't begrudge anyone an attempt to make estimates using a bunch ... (read more)

7
Jason
<<My sense is that most people who quote Rethinks moral weights project is familiar with the limitations.>> Do you think that the people doing the quoting also fairly put the average Forum reader on notice of the limitations? That's a different thing than being aware of the limitations themselves. I'd have to go back and do a bunch of reading of past posts to have a firm sense on this.
1
Raph
I have yet to hear someone defend that. So far, every time I have heard this idea, it was from a speciesist person who failed to understand the implications of rejecting speciesism. Basically, just as a strawman argument.
4
Jackson Wagner
David Mathers makes a similar comment, and I respond, here.  Seems like there are multiple definitions of the word, and EA folks are using the narrower definition that's preferred by smart philosophers.  Whereas I had just picked up the word based on vibes, and assumed the definition by analogy to racism and sexism, which does indeed seem to be a common real-world usage of the term (eg, supported by top google results in dictionaries, wikipedia, etc).  It's unclear to me whether the original intended meaning of the word was closer to what modern smart philosophers prefer (and everybody else has been misinterpreting it since then), or closer to the definition preferred by activists and dictionaries (and it's since been somewhat "sanewashed" by philosophers), or if (as I suspect) it was mushy and unclear from the very start -- invented by savvy people who maybe deliberately intended to link the two possible interpretations of the word.

Despite working in global health myself, I tend to moderately favor devoting additional funding to animal welfare vs. global health. There are two main reasons for this:

  1. Neglectedness: global health receives vastly more funding than animal welfare. 
  2. Importance: The level of suffering and cruelty that we inflict on non-human animals is simply unfathomable. 

    I think the countervailing reason to instead fund global health is:

  3.  Tractability: my sense is that, in part because far fewer resources have gone into investigating animal welfare interventions and policy initiatives, it could be difficult to spend $100m in highly impactful ways. (Whereas in global health, there would be obviously good ways to use this funding.) That said, this perhaps just suggests that a substantial portion of additional funding should go towards research (e.g., creating fellowships to incentivize graduate students to work on animal welfare). 
4
Mo Putera
I shared your sense in #3 initially, but 2 things changed my mind: the fact that Open Phil has already granted ~$100M/yr in 2021 and 2022 (h/t MichaelStJules' comment for bringing this to my attention), and Megaprojects for animals, a longlist of "projects that further research might reveal would cost-effectively absorb $10M+/year", your idea re: funding research included, which seems to promise shovel-ready opportunities for scale-up beyond $100M/yr (let alone $100M granted over an arbitrary period of time, as the problem statement asks).

This is probably going to be downvoted to oblivion, but I feel it's worth stating anyway, if nothing else to express my frustration with and alienation from EA.

On a meta level, I somewhat worry that the degree to which the animal welfare choice is dominating the global health one kinda shows how seemingly out-of-touch many EAs have become from mainstream common sense morality views.

In particular, I'm reminded of that quote from the Analects of Confucius:

When the stables were burnt down, on returning from court Confucius said, "Was anyone hurt?" He did not ask about the horses.

You can counter with a lot of math that checks out and arguments that make logical sense, but the average person on the street is likely to view the idea that you could ever elevate the suffering of any number of chickens above that of even one human child to be abhorrent.

Maybe the EAs are still technically right and other people are just speciesist, but to me this does not bode well for the movement gaining traction or popular support.

Just wanted to get that out of my system.

6
Jason
What is the most effective and appropriate relationship with "mainstream common sense morality views" in your opinion? At one extreme, if we just parrot them, then we can just cut out the expensive meta middlemen and give directly to whatever mainstream opinion says we should. I do think the skew would be meaningfully different but for the significant discrepancy in GW vs AW funding, both within EA and more generally.
6
Joseph_Chu
I don't know. Certainly just parroting them is wrong. I just think we should give some weight to majority opinion, as it represents an aggregate of many different human experiences that seem to have aligned together and found common ground. Also, a lot of my worry is not so much that EAs might be wrong, so much as that if our views diverge too strongly from popular opinion, we run the risk of things like negative media coverage ("oh look, those EA cultists are misanthropic too"), and we also are less likely to have successful outreach to people outside of the EA filter bubble. In particular, we already have a hard time with outreach in China, and this animal welfare emphasis is just going to further alienate them due to cultural differences, as you can probably tell from my Confucius quote. The Analects are taught in school in both China and Taiwan and are a significant influence in Asian societies. It's also partly a concern that groupthink dynamics might be at play within EA. I noticed that there are many more comments from the animal welfare crowd, and I fear that many of the global health people might be too intellectually intimidated to voice their views at this point, which would be bad for the debate.
9
CB🔸
The issue with majority opinion is that 500 years ago, the majority would have thought that most of what we do today is crazy.  I mean, even when I was 17, my opinion was close to the majority opinion (in my country), and I certainly wouldn't trust it today, because it was simply uninformed.  The risk of alienating other people is a valid concern. I'd be glad to see research to determine the threshold that would allow us to maximise both reach and impactful donations. Beyond what percentage of donations going to animal welfare will the movement get less traction? 1%? 90%? Will people just not care about the raw numbers and maybe care more about something else? For the groupthink point, I'm not sure if anything can be done. I'd be glad to read from people who think more donations should go to GHD (they can do it with an anonymous account as well). But your initial post got 21 karma, which makes it one of the top 5 comments on the page, so I think there is potential for civil discussion here.
3
Joseph_Chu
It's fair to point out that the majority has been wrong historically many times. I'm not saying this should be our final decision procedure and to lock in those values. But we need some kind of decision procedure for things, and I find when I'm uncertain, that "asking the audience" or democracy seem like a good way to use the "wisdom of crowds" effect to get a relatively good prior. I'm actually quite surprised by how quickly and how much that post has been upvoted. This definitely makes me update my priors positively about how receptive the forums are to contrarian viewpoints and civil debate. At least, I'm feeling less negativity than when I wrote that post.
3
Jason
One could also consider the general EA / EA-adjacent sentiment over time as a cross-check on the risk of current groupthink. Of course, later EAs could be responding to better evidence not available to earlier EAs. But I would also consider the possibility of changes in other factors (like perceived status, available funding for EAs, perceived lack of novel opportunities in a mature cause area that has strong interventions with near-limitless room for more funding) playing a major role.
3
CB🔸
Regarding the majority vote, I think "asking the audience" is not a good recipe when the audience is not very informed, which seems to be the case here (where would they get the information without much personal research?). I understand trusting the wisdom of the crowds in situations where people reasonably understand the situation (to take a classic example, guessing the weight of a pig). However, most people here likely have little information about all the different ways animals are suffering, the scale, research about sentience, knowledge about scope insensitivity, and arguments in favour of things like speciesism. Which makes sense! Not everybody is looking at it deeply.  But this doesn't provide a very good context for relying on the wisdom of the crowd.
6
Stijn
That seems like saying: "Let's not donate to animal charities because there are people who would donate to the most effective human charities but decide to donate to the less effective human charities when they see people who donate to the most effective human charities switch their donations to animal charities." Probably I'm not following the logic... Also: if donating to the top-effective animal charities is 100+ times as cost-effective as donating to the top-effective human charities, that backfire effect (people donating to the less effective human charities instead of the top effective human charities) should be very strong: more than 100 people should show this backfire effect (i.e. remain non-EA) per effective altruist who donates to top-effective animal charities. That seems very unlikely to me.
1
Kenneth_Diao
I think this is an interesting dilemma, and I am sympathetic to some extent (even as an animal rights activist). At the heart of your concern are 3 things:

1. Being too radical risks losing popular support
2. Being too radical risks being wrong and causing more harm than good
3. How do we decide what ethical system is right or preferable without resorting to power or arbitrariness?

I think in this case, 2) is of lesser concern. It does seem like adults tend to give far more weight to humans than animals (a majority of a sample would save 1 human over 100 dogs), though interestingly children seem to be much less speciesist (Wilks et al., 2020). But I think we have good reasons to give substantial moral weight to animals. Given that animals have central nervous systems and nociceptors like we do, and given that we evolved from a long lineage of animals, we should assume that we inherited our ability to suffer from our evolutionary ancestors rather than uniquely developing it ourselves. Then there's evidence, such as (if I remember correctly) that animals will trade off material benefits for analgesics. And I believe the scientific consensus has consistently and overwhelmingly been that animals feel pain. Animals are also in the present and the harms are concrete, so animal rights is not beset by some of the concerns that, say, long-termist causes are. So I think the probability that we will be wrong about animal rights is negligible. I sympathize with the idea that being too radical risks losing support. I've definitely had that feeling myself in the past when I saw animal rights activists who preferred harder tactics, and I still have my disagreements with some of their tactics and ideas. But I've come to see the value in taking a bolder stance as well. From my experience (yes, on a college campus, but still), many people are surprisingly willing to engage with discussions about animal rights and about personally going vegan. Some are even thankful or later
0
quila
the average animal in a factory farm is likely to view the idea that you could ever elevate the suffering of one human over that of an unbounded amount of animal children to be abhorrent, too. [note: i only swapped the order of humans/animals. my mind predicts that, at least without this text, this statement, but not the quoted one, would elicit negative reactions or be perceived as uncivil, despite the symmetry, because this kind of rhetoric is only normal/socially acceptable in the original case.] if giving epistemic weight to popular morality (as you wrote you favor)[1], you'd still need to justify excluding from that the moralities of members of non-dominant species, otherwise you end up unjustly giving all that epistemic weight to whatever might-makes-right coalition takes over the planet / excludes others from 'the public' (such as by locking the outgroup in factory slaughter facilities, or extermination camps, or enslaving them), because only their dominant morality is being perceived. otherwise, said weight would be distributed in a way which is inclusive of animals (or nazi-targeted groups, or enslaved people, in the case of those aforementioned moral catastrophes).  this seems to characterize the split as: supporting humans comes from empathy, supporting animal minds comes from 'cold logic and math'. but (1) the EA case for either would involve math/logic, and (2) many feel empathy for animals too. 1. ^ (to be clear, i don't agree, this is just a separate point)
5
Joseph_Chu
Yes, of course. My point isn't that they are right though. Chickens can't become EAs. Only humans can. My point was that from the perspective of convincing humans to become EAs, choosing to emphasize animal welfare is going to make the job more difficult, because currently many non-EA humans are less sympathetic to animal suffering than to human suffering. Giving more epistemic weight to popular morality reflects the fact that we need popular support to get things done, and is a compromise with reality, rather than an ideal, abstract goal. To the extent that I think it should inform our priors, we cannot actually canvass the opinions of chickens or other species to get their moralities. We could infer it, but this would be us imagining what they would think, and would be speculative. I agree that ideally, if we could, we should also get those other preferences taken into consideration. I'm just using the idea of human democracy as a starting point for establishing basic priors in a way that is tractable. Yes, many feel empathy for animals, myself included. I should point out that I am not advocating for ignoring animal suffering. If it were up to me, I'd probably allocate the funds by splitting them evenly between global health and animal welfare, as a kind of diversified portfolio strategy of cause areas. I consider that the more principled way of handling the grave uncertainty that, to me, suffering estimates without clear confidence intervals entail. Note that even this would be a significant increase in relative allocation to animal welfare compared to the current situation.
4
quila
That's not the position I was responding to. Here is what you wrote: That seems like you're proposing actually giving epistemic weight to the beliefs of the public, not just { pretending to have the views of normal humans, possibly only during outreach }. My response is to that. From your current comment: Epistemic (and related terms you used, like priors) are about how you form beliefs about what is true. They are not about how you should act, so there cannot be an 'epistemic compromise with the human public' in the sense you wrote - that would instead be called, 'pretending to have beliefs closer to theirs, to persuade them to join our cause'. To say you meant the latter thing by 'epistemic weight' seems like a definitional retreat to me: changing the definition of some term to make it seem like one meant something different all along. (Some humans perform definitional retreats without knowing it, typically when their real position is not actually pinned down internally and they're coming up with arguments on the spot that are a compromise between some internal sentiment and what others appear to want them to believe. But in the intentional case, this would be dishonest.) There's not actually any impractical 'ideal-ness' to it. We already can factor in animal preferences, because we already know them, because they reactively express their preference to not be in factory farms. (Restating your position as this also seems dishonest to me; you've displayed awareness of animals' preferences from the start, so you can't believe that it's intractable to consider them.)
3
Joseph_Chu
I do think we should establish our priors based on what other people think and teach us. This is how all humans normally learn anything that is outside their direct experience. A way to do this is to democratically canvass everyone to get their knowledge. That establishes our initial priors about things, given that people can be wrong, but many people are less likely to all be wrong about the same thing. False beliefs tend to be uncorrelated, while true beliefs align with some underlying reality and correlate more strongly. We can then modify our priors based on further evidence from things like direct experience or scientific experiments and analysis or whatever other sources you find informative. I should clarify, I am not saying we should pretend to have beliefs closer to theirs. I am saying that having such divergent views will make it harder to recruit them as EAs. It would therefore be better for EA as a movement if our views didn't diverge as much. I'm not saying to lie about what we believe to recruit them. That would obviously fail as soon as they figured out what we actually believe, and is also dishonest and lacks integrity. And I think there can be epistemic compromise. You give the benefit of the doubt to other views by admitting your uncertainty and allowing the possibility that you're wrong, or they're wrong, and we could all be wrong and the truth is some secret third thing. It's basic epistemic humility to agree that we all have working but probably wrong models of the world. And I apologize for the confusion. I am, as you suggested, still trying to figure out my real position, and coming up with arguments on the spot that mix my internal sentiments with external pressures in ways that may seem incoherent. I shouldn't have made it sound like I was suggesting compromising by deception. Calling things less than ideal and a compromise with reality was a mistake on my part. I think the most probable reason I worded it that way was that I felt that it
4
quila
Thank you for acknowledging that. Considering or trying on different arguments is good, but I'd suggest doing it explicitly. For example, instead of "I meant X, not Y" (unless that's true), "How about new-argument X?" is a totally valid thing to say, even if having (or appearing to have) pinned-down beliefs might be higher status or something.   Some object-level responses: This sounds like it's saying: "to make it easier to recruit others, our beliefs should genuinely be closer to theirs." I agree that would not entail lying about one's beliefs to the public, but I think that would require EAs lying to themselves[1] to make their beliefs genuinely closer to what's popular. For one's beliefs about what is true to be influenced by anything other than evidence it might be or not be true, is an influence which will tend to diverge from what is true, by definition. I don't think EAs should (somehow subtly) lie to themselves. If I imagine the EA which does this, it's actually really scary, in ways I find hard to articulate. Sure, there can be epistemic compromise in that other sense, where you know there's some probability of your reasoning being incorrect, or where you have no reason to expect yourself to be correct over someone who is as good at reasoning and also trying to form correct beliefs. But it's not something done because 'we need popular support to get things done'.  1. ^ this reminded me of this: If we can’t lie to others, we will lie to ourselves by Paul Christiano.
5
Joseph_Chu
Yeah, I should probably retract the "we need popular support to get things done" line of reasoning. I think lying to myself is probably, on reflection, something I do to avoid actually lying to others, as described in that link in the footnote. I kind of decide that a belief is "plausible" and then give it some conditional weight, a kind of "humour the idea and give it the benefit of the doubt". It's kind of a technicality thing that I do because I'm personally very against outright lying, so I've developed a kind of alternative way of fudging to avoid hurt feelings and such. This is likely related to the "spin" concept that I adopted from political debates. The idea of "spin" to me is to tell the truth from an angle that encourages a perception that is favourable to the argument I am trying to make. It's something of a habit, and most probably epistemically highly questionable and something I should stop doing. I think I also use these things to try to take an intentionally more optimistic outlook and be more positive in order to ensure best performance at tasks at hand. If you think you can succeed, you will try harder and often succeed where if you'd been pessimistic you'd have failed due to lack of resolve. This is an adaptive response, but it admittedly sacrifices some accuracy about the actual situation. Though, what if I consider the fact that many people have independently reached a certain belief to itself be evidence that that belief might be true?
4
quila
that is a form of evidence. if people's beliefs all had some truly-independent probability of being correct, then in a large society it would become extreme evidence for any belief that >50% of people have, but it's not actually true that people's beliefs are independent. human minds are similar, and human cultural environments are similar. often people's conclusions aren't actually independent, and often they're not actually conclusions but are unquestioned beliefs internalized from their environment (parents, peers, etc). often people make the same logical mistakes, because they are similar entities (humans). you still have to reason about that premise, "peoples conclusions about <subject> are independent", as you would any other belief. and there are known ways large groups of humans can internalize the same beliefs, with detectable signs like 'becoming angry when the idea is questioned'. (maybe usually humans will be right, because most beliefs are about low level mundane things like 'it will be day tomorrow'. but the cases where we'd like to have such a prior are exactly those non-mundane special cases where human consensus can easily be wrong.)
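The independence point can be made concrete with a quick Condorcet-style likelihood-ratio calculation (a hypothetical sketch; the function, the 55% reliability figure, and the counts are illustrative, not from the thread):

```python
from math import comb

def majority_evidence(n, p, k):
    """Likelihood ratio for a belief given that k of n people endorse it,
    assuming each person is *independently* correct with probability p."""
    like_true = comb(n, k) * p**k * (1 - p)**(n - k)    # P(k endorse | belief true)
    like_false = comb(n, k) * (1 - p)**k * p**(n - k)   # P(k endorse | belief false)
    return like_true / like_false

# 60 of 100 people endorse a belief; each is independently 55% reliable.
print(majority_evidence(100, 0.55, 60))  # ≈ 55.3: a modest majority is strong evidence
```

The whole effect hinges on the independence assumption: if everyone absorbed the belief from the same cultural source, the effective number of independent endorsements is far smaller and the ratio collapses toward 1, which is exactly the caveat above.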
4
CB🔸
This answer feels like a very honest reflection on oneself, I like it.
1
Joseph_Chu
Oh, you edited your comment while I was writing my initial response to it. We can infer their preferences not to suffer, but we can't know what their "morality" is. I suspect chickens and most animals in general are very speciesist and probably selfish egoists who are partial to next-of-kin, but I don't pretend to know this. It's getting late in my time zone, and I'm getting sleepy, so I may not reply right away to future comments.
2
quila
Agreed, I mean that just for this subject of factory farming, it's tractable to know their preferences.

A couple of survey results which may be interesting in light of this debate:

  • When we surveyed the community on what portion of the community's resources they believed should be allocated to these two cause areas, the average allocation to GHD was higher. This was true among both low/high engagement EAs, though the gap was smaller for highly engaged EAs.
  • However, if we compare this to actual allocations (in 2019, since these were the most up-to-date figures we had at the time), we see that average preferred allocations are higher than actual allocations for AW and lower for GHD.

This is in line with the debate week results showing a strong preference for an additional $100mn going to AW, but the continued preference for a larger total percentage going to GHD seems worth noting.

Some other factors not mentioned here but I sometimes think about:

-PETA used to do welfare campaigns and proudly own up to their work on welfare campaigns when they talk about their history. But they stopped doing welfare campaigns around 10 years ago and even published public statements against some of the initiatives. I keep wondering whether that has anything to do with EA entering the space, refusing to fund PETA, and PETA withdrawing from welfare work to differentiate itself from welfare campaigning organisations in response. That would reduce the cost-effectiveness of welfare campaigns significantly.

-One part I often see missing from human-animal comparisons is that animal welfare work prevents very extreme types of suffering that would be classified as torture in human contexts. If I were to choose between extending a human life for 50 years versus preventing a person from suffering for one full year in a wire coffin, I would choose the latter. Similarly, choosing between preventing 20,000 years of non-stop chicken torture vs. saving a human life is a lot different from saving the lives of 20,000 chickens versus saving the life of a human being. I think $5000 is currently able to fund... (read more)

I basically endorse this post, as well as the use of the tools created by Rethink Priorities that collectively point to quite strong but not overwhelming confidence in the marginal value of farmed animal welfare.

I'm a bit of a Benthamite "The question is not, 'Can they reason?' nor, 'Can they talk?' but rather, 'Can they suffer?'"

For any plausible (to me) guess about which non-human animals are capable of suffering, there are far far more non-human animals living in terrible conditions than humans in similarly bad conditions, and there just seems to be so many underfunded and underexplored ways we could help reduce that suffering. I've also seen some cost-effectiveness estimations that indicate you can help thousands of animals a lot for the same cost as helping one person a lot. ("a lot" being very vague!)

The only reason why I'm not at 100% agree is because helping humans become healthier might cause larger positive flow-on effects, and this might add up to more impact in the long run. That's super tentative and could go either way - e.g. it seems possible that helping animals now could lead to our species being more ethical towards sentient beings in the long run too.

  1. The basic case for chickens is very strong, even under views that are sceptical of small animals having a high chance/degree of sentience, because it's so easy to affect their lives cheaply compared to humans, and their lives seem v easy to improve by a lot
  2. $100m in total is not a huge amount (equiv to $5-10m/yr, against a background of ~$200m). I think concern about scaling spending is a bit of a red herring and this could probably be usefully absorbed just by current interventions

I don't think most animals are moral patients, and so see work on global health as much more valuable. This isn't as deeply considered a view as I'd like (though I think there's an unfortunate pattern where people who think animals are more likely to matter a lot are more likely to go into attempting to weigh the worth of animals) and people shouldn't put as much weight on this as my other EA-related views.

More in this direction: Weighing Animal Worth, Why I'm Not Vegan.

Can you expand on why you don't think most animals are moral patients?

Roughly, pleasure and suffering matter to the extent that there's an entity experiencing them. I think animals very likely don't have that kind of experience. I also think some humans don't, but I think the consequences of trying to draw distinctions among humans in this way would be pretty terrible and we shouldn't go in that direction. More: The Argument From Marginal Cases.

I would also be curious to hear more about why/if you are >~95% confident that pigs are not entities that experience suffering, while most humans are.[1]

Is it about the ability to have second-order beliefs, the ability to have complex language and certain kinds of social structures, or something else entirely?

  1. ^

    I think pigs are much more similar to humans than broiler chickens, so are a better species to examine the difference

I think animals very likely don't have that kind of experience

Why?

I think the cost-effectiveness of additional spending on animal welfare interventions is much higher than that on global health and development:

  • Buying organic instead of barn eggs, which is supposed to be a proxy for an animal welfare intervention with very low cost-effectiveness, is 2.11 times as cost-effective as GiveWell's top charities.
  • School Plates, which is a program aiming to increase the consumption of plant-based foods at schools and universities in the United Kingdom, is 60.2 times as cost-effective as GiveWell's top charities.
  • Corporate campaigns for chicken welfare are 1.51 k times as cost-effective as GiveWell's top charities.
  • Shrimp Welfare Project's Humane Slaughter Initiative is 43.5 k times as cost-effective as GiveWell's top charities.

I believe animals are much more neglected than humans. I calculated that the annual philanthropic spending on farmed animals is 0.0514 % of that on animals plus humans, whereas the annual disability of farmed animals is 97.2 % of that of animals plus humans.
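Taking those two figures at face value, the implied funding disproportion can be computed directly (a simple sketch using the percentages above; the variable names are mine):

```python
# Farmed animals' shares of philanthropic spending and of annual disability,
# per the estimates quoted above.
spending_share = 0.000514    # 0.0514 %
disability_share = 0.972     # 97.2 %

# Funding shortfall per unit of disability, relative to a proportional allocation:
disproportion = disability_share / spending_share
print(round(disproportion))  # ≈ 1891
```

I.e. on these numbers, farmed animals receive roughly 1/1,900th of the funding that a disability-proportional allocation would give them.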

You'd have to value animals at ~millionths of humans for scale and neglectedness not to be dispositive. Only countervailing considerations are things around cooperativeness, positive feedback loops, and civilizational stability, all of which are speculative and even sign uncertain

5
titotal
Can I ask how you arrived at the "millionths" number?
5
Mjreard
Not thinking very hard. I think it's more likely to be an overestimate of the necessary disparity than an underestimate.  There are about 500m humans in tractably dire straits, so if there were 500t animals in an equivalently bad situation, you might be very naïvely indifferent between intervening on one vs the other at a million to one. 500t is probably an oom too high if we're not counting insects and several ooms too low if we are.  I think the delta for helping animals (life of intense suffering -> non-existence) is probably higher (they are in a worse situation), tractability is lower, but neglectedness is way higher such that careful interventions might create compounding benefits in the future in a way I don't think is very likely in global health given how established the field is. 
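The back-of-envelope behind that "million to one" figure (the population counts are the rough guesses above, not data):

```python
humans_in_dire_straits = 500e6      # ~500 million humans in tractably dire straits
animals_in_dire_straits = 500e12    # ~500 trillion animals (an oom high sans insects, ooms low with)

# Moral weight on one animal at which the two problems are naively the same scale:
indifference_weight = humans_in_dire_straits / animals_in_dire_straits
print(indifference_weight)  # 1e-06, i.e. one millionth of a human
```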
  1. Causing unnecessary suffering is morally bad. Causing intense unnecessary suffering is morally worse.
  2. Non-humans have the capacity to physically and psychologically suffer. The intensity of suffering they can experience is non-negligible, and plausibly, not that far off from that of humans. Non-humans have a dispreference towards being in such states of agony.
  3. Non-human individuals are in constant and often intense states of agony in farmed settings. They also live short lives, sometimes less than 1/10th of their natural lifespan, which leads to loss of welfare they would have experienced if they were allowed to live till old age.
  4. The scale of farmed animal suffering is enormous beyond comprehension; if we only consider land animals, it is around 100 billion; if crustaceans and fish are included, the number is close to 1000 billion; if insects are accounted for, then the number is in several 1000s of billions. Nearly all of these animals have lives not worth living.
  5. The total dollar spent per unit of suffering experienced is arguably more than a thousand times lower for non-humans compared to humans. This seems unreasonable given the vast number of individuals who suffer in farmed sett
... (read more)

Animal suffering is larger-scale and more neglected. As explained in my post on 'Seeking Ripple Effects', I'm especially moved by the possibility of transformative innovations (e.g. economical lab-grown meat) improving human values at a key juncture in history, even though I think it's very unlikely.

OTOH, I'm a big fan of global health & development on more general and robust 'ripple effect' grounds, which is why I'm close to the center on this one.

I'm philosophically a longtermist, but suspect better-evidenced short-termist interventions are comparable to, if not much greater than, 'direct longtermism' in expectation.

In the long run I think a thriving human descendant-line with better cooperation norms is going to lead to better total phenomenal states than reduced factory farming will.

9
JackM
At a risk of getting off topic from the core question, which interventions do you think are most effective in ensuring we thrive in the future with better cooperative norms? I don't think it's clear that this would be EA global health interventions. I would think boosting innovation and improving institutions are more effective. Also boosting economic growth would probably be better than so-called randomista interventions from a long-term perspective.
6
Arepo
I reviewed the piece you linked and fwiw strongly disagreed that the case it made was as clear cut as the authors conclude (in particular, IIRC they observe a limited historical upside from RCT-backed interventions, but didn't seem to account for the far smaller amount of money that had been put into them; they also gave a number of priors that I didn't necessarily strongly disagree with, but that seemed like they could be an order of magnitude off in either direction, and the end result was quite sensitive to these).

That's not to say I think global health interventions are clearly better - just that I think the case is open (but also that, given the much smaller global investment in RCTs, there's probably more exploratory value in those).

I could imagine any of the following turning out to be the best safeguard of the long term (and others):

  • Health and development interventions
  • Economic growth work
  • Differential focus on interplanetary settlement
  • Preventing ecological collapse
  • AI safety work
  • e/acc (their principles taken seriously, not the memes)
  • AI capabilities work (because of e/acc)
  • Work on any subset of global catastrophes (including seemingly minor ones like Kessler syndrome, which in itself has the potential to destabilise civilisation)

My best guess is the last one, but I'm wary of any blanket dismissal of any subset of the above.
3
JackM
What is the argument for Health and development interventions being best from a long-term perspective?

I think animal welfare work is underrated from a long-term perspective. There is a risk that we lock in values that don't give adequate consideration to non-human sentience, which could enable mass suffering to persist for a very long time. E.g. we spread to the stars while factory farming is still widespread and so end up spreading factory farming too. Or we create digital sentience while we still don't really care about non-human sentience and so end up creating vast amounts of digital suffering. I think working to end factory farming is one way to widen the moral circle and prevent these moral catastrophes from occurring.
5
Arepo
Fwiw I don't disagree with that, and should have put it on my list. I would nonetheless guess it's lower EV than global health.

That's a pretty large question, since I have to defend it against all alternatives (and per my previous comment I would guess some subset of GCR risk reduction work is better overall). But some views that make me think it could at least be competitive:

  • I am highly sceptical of both the historical track record and, relatedly, the incentives/(lack of) feedback loops in longtermist-focused work in improving the far future
  • I find the classic 'beware surprising convergence' class of argument for why we should try to optimise directly for longtermism unconvincing theoretically, since it ignores the greater chance of finding the best longtermist-affecting neartermist intervention due to the tighter neartermist feedback loops
  • I think per my discussion here that prioritising events according to their probability of wiping out the last human is a potentially major miscalculation of long term expectation
  • the main mechanism you describe having longtermist value is somewhat present in GHD (expanding the moral circle)
  • It being much less controversial (and relatedly, less based on somewhat subjective moral weight judgements) means it's an easier norm to spread - so while it might not expand the moral circle as much in the long term, it probably expands it faster in the short term (and we can always switch to something more ambitious when the low-hanging moral-circle fruit are picked)
  • related to the lack of controversy, it is much more amenable to empirical study than either longtermist or animal welfare work (the latter having active antagonists who try to hide information and prevent key interventions)
  • I find the economic arguments for animal welfare moral circle expansion naturally coming from in vitro meat compelling. I don't think historical examples of sort-of-related things not happening are a strong counterargument. I don't
5
CB🔸
My understanding is that Founder's Pledge (I think it was them) tried to look at impactful donation opportunities to boost economic growth and didn't find anything that had a good evidence base and that was neglected. So I'm a bit skeptical on that. Even then, it seems unlikely that more economic growth will lead to better treatment of animals. Right now, countries getting richer is strongly correlated with more factory farming. Innovation and improvements in AI are currently used by companies to increase density in farms. We can make a point that more research will automatically lead to alternative proteins replacing everything but it's very speculative.

99% yes for me.
This is like 50% of the yearly global budget for farmed animals. A lot can be done with this money, and it's not such an outrageous amount that it couldn't be absorbed efficiently. Speciesism aside, the bang for these bucks could be incredible.
Moreover, if among the spillover effects of this was lower consumption of animal products, this would be an additional win for public health (at least in countries where too much animal products are eaten). 

Animal welfare has much higher EV even under conservative assumptions. IMO the only plausible argument against is that the evidence base for animal welfare interventions is much worse, so if you are very skeptical of unproven interventions, you might vote the other way. But you'd have to be very skeptical.

5
Ben Millwood🔸
I think of GiveWell as being pretty skeptical of the average global health intervention. Curious if you agree, and if you have a sense of how that level of skepticism would play out on animal welfare interventions.
7
MichaelDickens
I get the sense that GiveWell would not recommend any animal welfare intervention (nor would they recommend any x-risk or policy intervention). But I don't think that's because they think any intervention that doesn't meet their standards isn't worth funding: they fund a lot of more speculative interventions through Open Philanthropy. I think GiveWell wants to be viewed as a reliable source for high-quality charities, so they don't want to recommend more speculative charities even if the all-things-considered EV is good. (I'm just speculating here.)

There is the meat-eater problem, where saving human lives likely costs more animal lives by increasing the human population. Helping animals also seems to spare much more suffering per dollar, and factory farming is spreading rapidly through Asia and Africa, making this a hingey time.

Most serious EA analysis I've seen seems to conclude helping animals is much more effective (e.g. Rethink Priorities' work), so that's the view I currently weakly hold. Also, helping humans harms animals via the meat-eater problem, reducing its value on net, but there is no large effect the other way. Very open to changing my mind.

3
alphaplus
I like your opinion. I previously thought that spending on GH had no negative effect on AW, but I have updated my thinking. I also think spending on GH can have a positive indirect effect on AW: talented individuals who would otherwise have died could be saved, might then contribute to technological advancement, and this marginal productivity could have a positive effect on AW in the long run. However, the scale of this effect is uncertain; I have neither evidence nor instinct about this.

There's not much to add beyond what everyone else has said. I think we would need to be exceedingly confident in particular views about sentience and moral patienthood and capacity for suffering for non-humans to think GHD was better. I very much wish I had written down more of my reasoning from years ago when I was mainly donating to GiveWell, I think I just hadn't thought it over much!

2
CB🔸
Same here! I've rarely seen resources making an inter-cause-area comparison. I also donated a lot to the AMF, but without thinking much about it.

On a purely ideological basis, I would have placed myself as a "strong agree". However, on a more practical level, I am concerned that the most popular animal welfare interventions (specifically corporate campaigns) may risk actually having a negative impact on animal welfare. For example, suppose corporation X signs a pledge to switch to higher welfare standards. Its comms/PR around this switch might be so effective that an individual who could otherwise have been convinced to reduce their meat consumption on animal welfare grounds (or even go vegan, the best possible outcome) instead feels satisfied that continuing to buy meat from corporation X is ethical, and therefore consumes meat at the same or even a greater rate. Maybe this is baseless speculation, but intuitively, this feels like a real risk which hasn't been explored enough.

Even though the expected value of corporate campaign work is high, I feel instinctively very uncomfortable donating money to an intervention that has what I worry is a real chance of actually making the issue worse. This might just reflect my personal low appetite for risk.

By contrast, I can't think of an equivalent problem for popular GHD interventions - the worst outcome in this context appears to be that money is donated to an intervention that, in reality, isn't as effective as assumed, and the money could therefore have been better spent elsewhere.

As a result of all of this, I have bumped my response down to only a "slightly agree" rather than a "strong agree".

I see no legitimate justification for attitudes that would consider humans important enough that global health interventions would beat out animal welfare, particularly given the sheer number and scale of invertebrate suffering. If invertebrates are sentient, it seems animal welfare could definitely absorb $100m and remain effective on the margin, and probably also if they are not (which seems unlikely). The reason I am not fully in favour is mostly that animal welfare interacts far more strongly with population ethics than global health does, and given the significant uncertainties involved in population ethics, I can't be sure these don't at least significantly reduce the benefits of AW over GH work.

Large scale animal funding is in a worse state compared to global health. https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps 

I don't believe in complete impartiality. I think we have a stronger moral obligation to those who are closer to us--be it family, friends, or co-nationals. The vast majority of my donations have gone to global health simply because it is much much more cost-effective to help the poorest in the world. 
 

I also think that a blind push to expand the moral circle is misguided. See: https://gwern.net/narrowing-circle.

3
CB🔸
I'm not sure I understand: on the one hand, we have a stronger obligation to those close to us, but on the other, it is good to help strangers thousands of kilometers away? I'm also not sure why you draw the line at animals.

I personally think that it is good to help strangers thousands of kilometers away - and it is good that you do so (congrats, by the way!). I also understand that helping our family or friends is important - which is why I help them too. The argument often put forward is not that you shouldn't help people in your country, but that it's much more tractable to help people in poor countries: you can help more people for the same amount of resources. The same goes for animals.

I also read sections of your link and skimmed through the rest, but I don't see any justification there for the idea that helping animals is misguided. It says that moral values can regress/progress, and this depends on the physical and cultural context (which is true). The conclusion is that we shouldn't just expect moral values to change automatically - but that just means we should devote our efforts to actions that don't rely on this assumption. For instance, supporting alternative proteins that are cheaper and tastier can reduce the cost of acting morally.
7
Ben Millwood🔸
The argument of the link is that moral progress has sometimes meant correctly regarding some previous moral concern as unnecessary or based on false belief. I think the relevance here is to resist the idea that moral concern for animals must be correct by a "more moral concern is always better" heuristic. (I think it's a useful argument to have in mind, but I think we have much better reasons to be morally concerned about animals.)
3
CB🔸
Okay, I see. In that case, I tend to agree with your (Ben's) position on that topic.
1
Tim Hua
  I don't see how this is contradictory? For example, you might prefer saving 10 American lives to saving 11 non-American lives, but prefer saving 100 non-American lives to 5 American lives. That and the anti-expanding moral circle argument suggests that it's OK (and in fact, in my opinion, good) to assign different weights to different entities.
1
CB🔸
Oh, ok, I see. But in that case, if scale is a very important metric, shouldn't helping animals also be a good idea? It's possible to help thousands of them for a fraction of the cost required to save one non-American life.

I mostly agree with Open Phil Should Allocate Most Neartermist Funding to Animal Welfare by Ariel Simnegar 🔸, as some others have already referenced. My animal moral weights are probably close to RP's, and so higher than Open Phil's.

Open Phil spent around $100M on animal welfare in each of 2021 and 2022.

An extra $100M for animal welfare would be best spread across multiple years, given organizational constraints to scaling. I'd mostly have in mind outreach/campaigns/lobbying targeting corporations, certifiers, institutions and governments, and ballot initiatives for animal welfare policy change.

There might be more direct ways to purchase animal welfare that would scale and still beat global health, but we could probably do much better with higher leverage policy interventions.

Some cost-effectiveness analyses here, here and here.

I don't really know... I suspect some kind of first-order utility calculus, which tallies up the number of agents helped per dollar weighted according to species, makes animal welfare look better by a large degree. But in terms of getting the world closer to the "good trajectory", for some reason the idea of eliminating serious preventable diseases in humans feels like a more obvious next step along that path?

I think the money goes a lot further when it comes to helping non-human animals than when it comes to helping humans.

I am generally pretty bought into the idea that non-human animals also experience pleasure/suffering, and I care about helping them.

I think it is probably good for the long-term trajectory of society to have better norms around the casual cruelty and torture inflicted on non-human animals.

On the other hand, I do think there are really good arguments for human to human compassion and the elimination of extreme poverty. I am very in favor of th... (read more)

In general, I agree with the position that investing an additional $100m into animal welfare opportunities would be more impactful than global health opportunities even under views that use moral weights on the lower end of the scale for nonhumans, and potentially way more impactful if we use moral weights that grant nonhumans greater capacity for sentience (which I think we probably should).

In short—I think the scale of animal suffering is much larger (even when only considering animal agriculture, and not wild animal welfare); animal ... (read more)

Laura Duffy's analysis of this comes close to my view. On the margin, the question between a global health charity and an animal charity is something like GiveWell top charities (e.g. AMF) vs. an ACE top charity (e.g. The Humane League), which is something like: "Would you rather save 1 DALY or spare 40 years of hens from cages to cage-free?"

I'm pretty split between the two and my donation habits reflect this; however, I don't think we know how to scale effective animal interventions past the current funding gaps in the low $10ms. For Global health, we do.

Edit: Learned th... (read more)

3
CB🔸
According to the Welfare Footprint Project, going from cages to cage-free removes a large part of the pain laying hens have to go through (Transition to cage-free systems – Welfare Footprint Project):

"Overall, an average of at least 275 hours of disabling pain, 2,313 hours of hurtful pain and 4,645 hours of annoying pain are prevented [over 60 to 80 weeks] for each hen kept in an aviary instead of CC during her laying life"

Over 40 years of life of several hens, this would be over 7,000 hours of disabling pain and 62,000 hours of hurtful pain removed. It feels like, comparatively, adding one year of life to someone is much less impressive, even assuming humans have the ability to suffer more?
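As a rough check on that scaling (an illustrative sketch, conservatively assuming each laying life lasts the full 80 weeks of the quoted range):

```python
# Pain-hours prevented per hen over one laying life (Welfare Footprint
# Project figures as quoted above), scaled to 40 hen-years of cage-free
# production. 80 weeks per laying life is the conservative end of the range.
disabling_hours_per_hen = 275
hurtful_hours_per_hen = 2313
weeks_per_laying_life = 80

hens_per_40_years = 40 * 52 / weeks_per_laying_life  # 26 successive hens

print(disabling_hours_per_hen * hens_per_40_years)  # 7150.0 hours of disabling pain
print(hurtful_hours_per_hen * hens_per_40_years)    # 60138.0 hours of hurtful pain
```

With a slightly shorter assumed laying life, the hurtful-pain total rises toward the ~62,000 hours quoted above.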
1
JDBauman
Thanks for this. I wonder roughly how many hours of "disabling pain" or "hurtful pain" we estimate are averted by saving 1 DALY. That would help me get a better sense of the tradeoff. Anyone have a better sense? @NickLaing ?
5
NickLaing
I'm afraid I can't help here. DALYs in global health use discount measures which lie between zero and one. Chronic back pain, for example, has a disability weight of 0.1 (for simplicity, this assumes the life of the average person with chronic back pain is 10% worse than that of someone with no health issues). I could be missing something, but I don't think we calculate DALY equivalents for excruciating pain from things like torture, which couldn't be calculated just through discounting, as they cause net-negative living time. That's why they have to be newly estimated for animals in this situation.
3
Ariel Simnegar 🔸
I agree with CB's reply. It also may be worth mentioning the footnote from the debate question that the $100m can be spent over any amount of time we wish. So if we add (say) $10m per year over the next 10 years, it doesn't seem like this marginal $100m would be substantially less cost-effective than what would otherwise be spent over the next 10 years.

Humans kill about 1 trillion animals every year (https://sentientmedia.org/how-many-animals-are-killed-for-food-every-day/). Many of them lead harsh, painful lives in factory farms and/or die a brutal death. And this doesn't even touch on wild animals suffering from non-human causes.

To contrast, there are only 8 billion humans on Earth. 

8 billion is less than 1 trillion.

In this post last year, I describe why I think animal welfare is 100-1000x better than global health on the margin. In this post, I describe why I still think that, and give some responses to objections I didn't discuss in the previous post.

Two important considerations to strongly favor animal welfare 

  1. Saving a human life is likely net negative due to increased meat consumption and animal suffering. According to a survey, most people believe the welfare of a farmed chicken is negative and equal in size to the positive welfare of a human. Also most people believe the welfare of birds count almost as much as the welfare of humans (they give animal welfare relative to human welfare an 8 on a scale from 0 to 10). But there are more farmed chickens than humans on earth (3 chickens per human),
... (read more)
  • AW seems clearly more neglected in terms of funding inside the effective giving space and on net.
  • There is plenty of room for funding in the AW space, I would be surprised if $100M couldn't  be spent down over the next 10 years in the AW space at > 50% of the cost-effectiveness of the current marginal dollar (on average).
  • Most of my uncertainty comes from some credence that human lives are vastly more important than non-human animal lives, or medium-term growth being accelerated by GH interventions which could make GH work much more leveraged.

I'll try to write a longer comment later, but right now I'm uncertain but lean towards global health because of some combination of the following:
1. I suspect negative lives are either rare or nonexistent, which makes it harder to avoid logic-of-the-larder-type arguments

2. I'm more uncertain about this, but I lean towards non-hedonic forms of consequentialism (RP parliament tool confirms that this generally lowers returns to animals)

3. Mostly based on the above, I think many moral weights for animals are too high

I'm also not sure if the 100 million would g... (read more)

5
Ariel Simnegar 🔸
To your first point, it seems that animal welfare interventions which fix population size, like humane slaughter, would be orders of magnitude better than global health interventions, even if the animals live net good lives. For another example, the Fish Welfare Initiative's interventions to improve fish lives may increase the number of farmed fish due to increasing capacity for stocking density, so that charity could also seem exceptionally good by the logic of the larder.
5
CB🔸
Interesting, I'd be curious to know why you think factory farmed animals have positive lives. If true, this would have huge implications.
4
MichaelStJules
I think animals could still matter a lot (or the interpersonal comparisons are undefined) on non-hedonic welfarist views: 1. On objective list theories, see Theories of Welfare and Welfare Range Estimates by Bob Fischer. 2. On preference- and desire-based theories, see my posts Which animals realize which types of subjective welfare? and Solution to the two envelopes problem for moral weights. Some more background is in Types of subjective welfare.
  • I am directionally sympathetic to the arguments in this post.
  • Good Ventures dropping out of some AW focused areas makes me think there might currently be promising gaps to fill (although it sounds like immediate gaps in some shrimp + wild animal orgs might be covered?)
  • I'm not marking myself as higher mainly because I don't have good taste on how good the marginal AW focused funding proposals are right now, vs the marginal GH focused ones.

I weigh moral worth by degree of sentience, using neuron count as a rough proxy, which naturally weights helping a given number of humans more than helping the same number of any other currently known species.

But the evidence I've seen suggests you could help far more of almost any kind of animals (e.g., chickens) avoid suffering for the same amount of money.

Thanks for your justification! Hamish McDoodles also believed that neuron count weighting would make the best human welfare charities better than the best animal welfare charities. However, after doing a BOTEC of cage-free campaign cost-effectiveness using neuron counts as a proxy, he eventually ended up changing his mind:

ok, animal charities still come out an order of magnitude ahead of human charities given the cage-free campaigns analysis and neuron counts

So unless you have further disagreements with his analysis, using neuron count weighting would probably mean you should support allocating the 100M to animal welfare rather than global health.

Thank you for justifying your vote for global health!

One counterargument to your position is that, with the same amount of money, one can help significantly more non-human animals than humans. Check out this post. An estimated 1.1 billion chickens are helped by broiler and cage-free campaigns in a given year. Each dollar can help an estimated 64 chickens, for a total of 41 chicken-years of life.

This contrasts with needing $5,000 to save a human life through top-ranked GiveWell charities.

1
Joseph_Chu
So, the $5,000 to save a human life actually saves more than one human life. The world fertility rate is currently 2.27 per woman, but expected to decline to 1.8 by 2050 and 1.6 by 2100. Let's assume this trend continues at a rate of -0.2 per 50 years until it eventually reaches zero at 2500. Since it takes two people to have children, we halve these numbers to get an estimate of how many human descendants to expect from a given saved human life each generation.

If each generation is ~25 years, then the numbers will follow a series like 1.135 + 0.9 + 0.85 + 0.8 ..., which works out to 9.685 human lives per $5,000, or $516.26 per human life. Human life expectancy is increasing, but for simplicity let's assume 70 years per human life. 70 / $516.26 = 0.13559 human life-years per dollar.

So, if we weigh chickens equally with humans, this favours the chickens still. However, we can add the neuron count proxy to weigh these. Humans have approximately 86 billion neurons, while chickens have 220 million. That's a ratio of about 390. 0.13559 x 390 = 52.88 human-neuron-weighted life-years per dollar. This is slightly more than 41 chicken life-years per dollar. Which, given my many, many simplifying assumptions, would mean that global health is still (slightly) more cost-effective.
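Under the comment's own assumptions (and taking human neurons as ~86 billion, which is what the quoted ratio of ~390 to a chicken's ~220 million implies), the series and the final weighting can be checked with a short script:

```python
# Check of the descendant-series BOTEC above: generations every ~25 years,
# per-parent fertility = half the total fertility rate, declining by 0.05
# per generation after 2100 until it reaches zero (around the year 2500).
half_fertility = [1.135, 0.9, 0.85] + [0.8 - 0.05 * k for k in range(16)]

lives_per_5000 = sum(half_fertility)        # expected lives per $5,000 donated
cost_per_life = 5000 / lives_per_5000       # dollars per life
life_years_per_dollar = 70 / cost_per_life  # assuming a 70-year lifespan

# Neuron-count weighting: ~86e9 human neurons / ~220e6 chicken neurons ~= 390
neuron_ratio = 390
weighted_life_years = life_years_per_dollar * neuron_ratio

print(round(lives_per_5000, 3))       # 9.685 lives per $5,000
print(round(cost_per_life, 2))        # $516.26 per life
print(round(weighted_life_years, 2))  # 52.88 neuron-weighted life-years per dollar
```

This reproduces the quoted figures, though note the expected-descendants sum is itself a simplification (a full model would treat each generation as a branching process rather than a fixed series).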
3
JackM
You haven't factored in the impact of saving a life on fertility. Check out this literature review, which concludes the following (bold emphasis mine):

Also, you're assuming neuron count should be used as a proxy for moral weight, but I'm highly skeptical that is fair (see this).
7
Joseph_Chu
To respond to the comments so far in general, I'd say that my priors are that almost all lives, even highly unpleasant ones, are worth living, and that I tend to weigh moments of happiness much more than equivalent moments of suffering, as this avoids what I see as philosophically problematic implications, such as suicide for chronically depressed people or nuking the rainforest being a net-positive intervention.

Given these biases, I tend to weigh much more heavily interventions like bednets, which save lives that would otherwise not be lived, over things that only improve lives, like most animal welfare interventions. Furthermore, at least some of the lives that are saved will have offspring, so the net impact of saving a life is actually much higher than just one life and includes all potential descendants.

I do think animal welfare is important and that, all other things being equal, happier chickens are better than chickens whose lives are just barely worth living, but I consider the magnitude of this impact to be less than saving countless lives.
9
quila
do you mean that you chose this position because it avoids those conclusions? if so:

1. then the process you used was to select some (of many possible) moral axioms which lead to the conclusion you like.
   * i don't think that would mean the axiom is your true value.
   * but if choosing axioms, you could instead just follow the conclusions you like, using an axiom such as "my morality is just complex [because it's godshatter]".
2. separately, the axiom you chose introduced a new 'problematic' conclusion: that someone in a mechanized torture chamber, who will be there for two more years (during which their emotional state will mostly only change between depression and physical-harm-induced agony - maybe there will also be occasional happiness, like if another animal tries to comfort them), and then die without experiencing anything else - should be kept alive (or be created) in that situation instead of ceasing to exist (or not being created), when these are the only choices.
   * that's definitely something the universe allows one to prefer, as all moral preferences are. i'm just pointing it out because i think maybe it will feel immoral to you too, and you said you chose axioms to avoid problematic or immoral-feeling things.
   * in case it doesn't feel wrong/'philosophically problematic' now, would it have before you started using this axiom, and so before your moral intuitions crystallized around it?

as i am a moral anti-realist, i cannot argue against a statement of what one values. but on priors about humans, i am not sure if you would actually want the world to be arranged in a way which follows this value, if you fully understood what it entails. have you spent time imagining, or experiencing, what it is like to live a life of extreme suffering? what it is like for it to be so bad that you desperately prefer nonexistence to it? now, such lives could still be considered 'worth it' overall if they eventually get better or otherwise are conside
3
Joseph_Chu
As someone who has experienced severe depression and suicidal ideation, I do have at least some understanding of what it entails. It's my own experience that biases me in the way I described. Admittedly, my life has gotten better since then, so it's not the same thing as a life of just extreme suffering though.
5
JackM
What do you think about people who do go through with suicide? These people clearly thought their suffering outweighed any happiness they experienced.
1
Joseph_Chu
I feel for them. I understand they made a decision in terrible pain, and I can sympathize. To me it's a tragedy. But on an intellectual level I think they made a very unfortunate mistake, made in reasonable ignorance of complex truths that most people can't be expected to know. And I admit I'm not certain I'm right about this either.
2
quila
can you explain how? i believe extreme suffering had the opposite effect on me, making me become a suffering-focused altruist. i don't actually understand how it could make someone ~not disvalue suffering. (related: 'small and vulnerable'.)

(i mean, i have guesses about how that could happen: like, maybe ~not disvaluing it was the only way to mentally cope with the vast scale of it. living in a world one believes to be evil is hard; it's easier to somehow not believe it's evil - i've heard this is a reason many new animal-suffering-boycotters find it hard to keep an animal-caring worldview. or, maybe experiencing that level of suffering caused a buddhist-enlightenment-like thing where you realized suffering isn't real, or something. though, happiness wouldn't be real either in that case. i'm actually adjacent to this view, but it sure feels real for the animals, and i would still like to make the world be good for those who believe in it.)

from your other comment: it still feels mysterious / that comment seems more like 'what you prefer and uncertainty' than 'why / what caused you to have those preferences'
3
Joseph_Chu
I guess going through extensive suffering made me cherish the moments of relative happiness all the more, and my struggle to justify my continued existence led me to place value in existence itself, a kind of "life-affirming" view as a way to keep on going. There were times during my suicidal ideation that I thought the world might be better off without me: for instance, that if I died, my organs could be transplanted and save more lives than I could save by living, that I was a burden, and that the resources expended keeping me alive were better used on someone who actually wanted to live. To counter these ideas, I developed a nexus of other ideas about the meaning of life being about more than just happiness or its lack, that truth was also intrinsically important, and that existence itself had some apparent value over non-existence.
6
quila
i see, thanks for explaining! i'm modelling this as: basic drive to not die -> selects values that are compatible with basic drive's fulfillment. i've been wondering if humans generally do something like this. (in particular to continue having values/cares after ontological crises like: losing belief in a god, or losing a close other who one was dedicated to protecting.)  in case anyone has similar thoughts: to have the level of altruism to even consider the question is extremely rare. there are probably far better things you can do, than just dying and donating; like earning to give, or direct research, or maybe some third thing you'll come up with. (most generally, the two traits i think are needed for research are intelligence and creativity. this is a creative, unintuitive moral question to ask. and my perception is that altruism and intelligence correlate, but i could be wrong about that, or biased from mostly seeing EAs.)
1
Joseph_Chu
Sorry for the delayed response. This does seem like a good explanation of what happened. It does imply that I had motivated reasoning though, which probably casts some doubt on those values/beliefs being epistemically well grounded. These words are very kind. Thank you.
2
Joseph_Chu
I should also add, a part of why I consider it important that the conclusions reached by a moral theory align with my moral intuitions is that in psychology there are studies showing that for complex problems, intuition outperforms logical reasoning at getting the correct answer, so ensuring that the theory's results are intuitive is, in a sense, a check on validity.

If that's not satisfactory, I can also offer two first-principles-based variants of Utilitarianism and hedonism that draw conclusions more similar to mine, namely Positive Utilitarianism and Creativism. Admittedly, these are just some ideas I had one day, and not something anyone else to my knowledge has advocated, but I offer them because they suggest to me that in the space of possible moralities, not all of them are so suffering focused. I'm admittedly uncertain about how much to endorse such ideas, so I don't try to spread them.

Speaking of uncertainty, another possible justification for my position may well be uncertainty about the correct moral theory, and putting some credence on things like Deontology and Virtue Ethics, the former of which in Kantian form tends to care primarily about humans capable of reason, and the latter of which contains the virtue of loyalty, which may imply a kind of speciesism in favour of humans first, or a hierarchy of moral circles.

There's the concept of a moral parliament that's been discussed before. To simplify the decision procedure, I'd consider applying the principle of maximum entropy, aka the principle of indifference, that places an equal, uniform weight on each moral theory. If we have three votes, one for Utilitarianism, one for Deontology, and one for Virtue Ethics, two out of the three (a majority) seem to advocate a degree of human-centrism.

I've also considered the thought experiment of whether I would be loyal to humanity, or betray humanity to a supposedly benevolent alien civilization. Even if assume the aliens were perfect Utilitarians, I would be hes
3
Ben Millwood🔸
Huh? Even if you weigh moments of happiness much more, that doesn't always support maximising the number of lives. To use a somewhat farcical model that I hope is nevertheless illustrative, wouldn't you prefer to add two moments of happiness to someone's life than to create a new life that only experienced one moment of happiness? If so, I don't see why you'd conclude that bednets are better than welfare reforms under these assumptions.
3
Joseph_Chu
I guess my unstated assumption is that if the lives of the chickens are already worth living, then increasing their welfare further will quickly run into diminishing returns due to the law of diminishing marginal utility. Conversely, adding more lives linearly increases happiness, again assuming that each life has at least a baseline level of happiness that makes the life worth living.
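To make that assumption concrete, here is a toy sketch (the utility function and all numbers are purely illustrative assumptions, not a claim about actual chicken welfare): welfare within a life is logarithmically diminishing, while lives aggregate linearly.

```python
import math

def population_utility(n_lives: int, welfare_per_life: float) -> float:
    """Toy model: log1p gives diminishing marginal utility of welfare
    within a life; multiplying by n_lives aggregates linearly across lives."""
    return n_lives * math.log1p(welfare_per_life)

# Doubling per-life welfare from an already-decent baseline...
more_welfare = population_utility(100, 20.0) - population_utility(100, 10.0)
# ...versus doubling the number of lives at the baseline welfare level.
more_lives = population_utility(200, 10.0) - population_utility(100, 10.0)

# Under these assumptions, adding lives dominates improving lives.
assert more_lives > more_welfare
```

Of course, if existing lives are below the baseline (net negative), the same toy model flips: raising welfare past zero does a lot before diminishing returns kick in.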
6
MichaelStJules
What do you think of RP's work (mostly) against using neuron counts? From the summary: (Also this more specific hypothesis.)
1
Joseph_Chu
I use neuron counts as a very rough proxy for the information processing complexity of a given organism. I do make some assumptions, like that more sophisticated information processing enables more complex emotional states, things like memory, which compounds suffering across time, and so on. It makes sense to me that sentience is probably on some kind of continuum, rather than an arbitrary threshold. I place things like photo-diodes on the bottom of this continuum and highly sophisticated minds like humans near the top, but I'll admit I don't have accurate numbers for a "sentience rating". I hold my views on neuron counts being an acceptable proxy mostly because of what I learned from studying Cognitive Science in undergrad and then doing a Master's Thesis on Neural Networks. This doesn't make me an expert, but it means I formed my own opinions and disagree with the RP post somewhat. I have not had the time to formulate substantive objections in a rebuttal however. Most of my posts on these forums are relatively low-effort.

I'm pretty confident (~80-90%?) this is true, for reasons well summarized here.

I'm interested in thoughts on the OOM difference between animal welfare vs GHD (i.e. would $100m to animal welfare be 2x better than GHD, or 2000x?)

I'd be surprised if there isn't something in the order of at least a 100x to 1000x difference in cost-effectiveness in favour of animal interventions (as suggested here).

Animals are much more numerous and neglected, and have terrible living conditions, so there's simply much more to do. According to FarmKind, $100 donated to the Impact Fund can protect 124 chickens 🐥 from suffering, as well as 61 pigs 🐷, a cow 🐮, 22 fish 🐟, and more than 25,000 shrimps, *all at the same time*. Plus, it offsets ~6.7 tonnes of CO2 🌎. These kinds of ... (read more)

5
David Mathers🔸
"Right now, if humans and farmed animals are considered together, total global welfare may be declining at increasing speed, and could already be well below zero."  Given that there are way more wild animals than farmed animals, this is probably determined by whether wild animal lives are net negative, and how much humans are reducing their population overall, right? 
4
Ben Millwood🔸
This is surely true by number but I'm not sure it would be true on all reasonable weightings? See e.g. https://en.wikipedia.org/wiki/Biomass_(ecology)#/media/File:Terrestrial_biomass.jpg
1
CB🔸
I'm not sure this metric is relevant: biomass weight is massively dominated by the largest mammals just because they are big. Going by this metric, one super obese 400kg individual, or one small cow, counts the same as 100 human babies (not to mention elephants). I think the number of individuals is much more relevant here. And there just happen to be a lot of smaller individuals.
2
Ben Millwood🔸
Yeah, I didn't intend to suggest that biomass is actually the metric, but more like, if you believe that the "intensity of experience" ratio is at least as large as the mass ratio (not because of the mass, but because the larger creatures tend to also have more complex brains and behaviour and so on), then actually farmed animals may have at least comparable if not more "total experience" than wild animals.
1
CB🔸
Oh, as a proxy for that. I don't think I agree, since I'm not convinced that neuron count is the relevant metric, but I better understand the use of this proxy.
3
CB🔸
Great point - it was flagged in the linked post, but I forgot to make that explicit. Regarding wild animals, it is so hard to estimate whether their lives are overall net negative (or positive), and to what extent, that I forgot to state this huge caveat here. We still don't have good enough data, and there are large uncertainties (e.g. what is the impact of climate change if it makes Siberia more habitable?). But this could indeed change the overall sign of the impact of humanity (and there are some futures where we take better care of wild animals - which would be great). But yeah, more solid data is needed on that topic.

I'm right on the fence here, because although animal welfare is severely neglected and there's a lot of important tractable work on it that could use more money, I'm somewhat unconvinced about the value of animals compared to humans, as I'm not a hedonist and think that human experience might well be richer and more intense than that of animals by many orders of magnitude.

5
Ariel Simnegar 🔸
What do you think about Bob Fischer's argument that even if one is not a hedonist, pleasure/pain should still be a big enough part of welfare (say at least 10%) such that one shouldn't discount Rethink's moral weights by more than an order of magnitude or so?
6
NickLaing
I agree with that sentiment, and I'm actually about to do an entire post on this. I think the hedonist element is only one important animal-favoring juncture - see what you think of the post tomorrow!
2
Ariel Simnegar 🔸
Nice; looking forward to it!
2
MichaelStJules
My sequence might also be helpful. I didn't come up with too many directly useful estimates, but I looked into implications of desire-based and preference-based theories for moral weights and prioritization, and I would probably still prioritize nonhuman animals on such views. I guess most importantly:

1. For endorsed/reflective/cognitive/belief-like desires or preferences, like life satisfaction and responses to hypotheticals like QALY tradeoff questions, I'm pretty skeptical of interpersonal utility comparisons in general, even between humans. I'm somewhat skeptical of comparisons for hedonic states between different species. I'm sympathetic to comparisons for "felt desires" across species, based on how attention is affected (motivational salience) and "how much attention" different beings have.[1] (More here, partly in footnotes)
2. Perhaps surprisingly and controversially, I suspect many animals have simple versions of endorsed/reflective/cognitive/belief-like desires or preferences. It's not obvious they matter (much) less for being simpler, but this could go either way. (More here and here)
3. Humans plausibly have many more preferences and desires, and about many more things than other animals, but this doesn't clearly dramatically favour humans.
   1. If we measure the intensity of preferences and desires by their effects on attention, then the number of them doesn't really seem to matter. Often our preferences and desires are dominated by a few broad terminal ones, like spending time with and the welfare of loved ones, being happy and free from suffering, career aspirations.
   2. I'm not aware of particularly plausible/attractive ways to ground interpersonal comparisons otherwise.
   3. Normalization approaches not grounding interpersonal comparisons don't usually even favour humans at all, but some specific ones might.
4. Uncertainty about moral weights favours nonhumans, because we understand and value things by reference to our own experiences,

I support lab-grown meat research/production; other interventions seem useless. I support "global health" more broadly and strongly, since it has fewer ways to burn money in ways I find useless.

4
CB🔸
Can you expand on why other animal interventions seem useless? For instance, developing plant-based alternatives, getting chickens out of cramped cages, stunning animals before slaughter...  I'm not sure to see how these interventions do not improve the lives of other beings?
3
North And
>developing plant-based alternatives

This too can be useful, but less so. My model here is that there will be a transition to lab-grown meat, and moving this transition a few years/months/days earlier is the thing that matters most.

And also in general, I have a really cautious stance on population ethics with respect to animals. And I think most utilitarian approaches handle it by not handling it, just refusing to think about it. And that's really weird. Like, if I donate to the welfare of chickens? I bet the beneficiaries are the next generation of chickens after the ones currently existing. I want to donate in such a way as to prevent their existence, not supply them with band-aids. I think causing the creation of 20% fewer tortured chickens instead is an insane goal for my donation.
1
CB🔸
Very interesting. From what I've seen, lab-grown meat (or rather cellular meat) will face significant challenges before it can replace meat at a large scale (regulatory, technical, opposition from the industry). I think it's still worth investing in, but even if it does work, it will take a long time before becoming large scale (unless a super AI solves that for us). Some other alternative proteins might be more promising - such as single-cell protein. While certainly worth donating to, I think other avenues are necessary, such as improving the conditions of animals in the decades before alternatives replace everything (hopefully). Moreover, alternative proteins can't solve everything by themselves. Maybe fish or something else will be super hard to replace. In that case, other avenues that help people care more about the topic are important - this includes corporate campaigns that shift the Overton window, legal campaigns, research into wild animal suffering (population ethics is a tricky one here)... If cellular meat takes 30 years to take hold, reducing by half the suffering of millions of beings in the meantime is still pretty incredible.

The vast majority of sentient beings are non-human animals, and the problem of animal suffering is far more neglected compared to global health. I think it's also worth noting that we are probably quite biased against taking animal suffering as seriously as we should (we live in an extremely speciesist culture, we belong to the human species, animal suffering tends to be hidden/out of sight, etc). 

I think animals are generally more efficient/effective as a way of converting money into short-term (the next 50 years) well-being.

My impression is that the mean global health intervention does not significantly improve the long-term future. However, I could definitely be convinced otherwise, and that would get me to change my answer.

All that said, if one is focused on improving the long-term future, it seems suspicious to focus on global health, as opposed to other interventions that are clearly more focused on that. 

I think "EA orthodoxy" pretty strongly supports this conclusion, so the main question is whether the orthodoxy is trustworthy on this question. One possible concern is that GiveWell's estimates tend to follow a highly skeptical methodology, and I worry that most comparisons with GiveWell aren't comparing apples to apples on that front. I could imagine there being orders of magnitude in this, but from skimming the other comments on the thread I would guess not enough orders of magnitude to bridge all of the gap (i.e. even a GiveWell-skeptical assessment of ... (read more)

In the interests of furthering the debate, I'll quickly offer several additional arguments that I think can favour global health over animal welfare.

Simulation Argument

The Simulation Argument says that it is very likely we are living in an ancestor simulation rather than base reality. Given that it is likely human ancestors that the simulators are interested in fully simulating, other non-human animals are likely to not be simulated to the same degree of granularity and may not be sentient.

Pinpricks vs. Torture

This is a trolley problem scenario. It's also ... (read more)

5
Ariel Simnegar 🔸
Thanks for the comment! I've always heard "pinpricks vs torture" or the Omelas story interpreted as an example of the overwhelming badness of extreme suffering, rather than against scope sensitivity. I've heard it cited in favor of animal welfare! As one could see from the Dominion documentary, billions of animals live lives of extreme suffering. Omelas could be interpreted to argue that this suffering is even more important than is otherwise assumed.

I think it's hard to say what the simulation argument implies for this debate one way or the other, since there are many more (super speculative) considerations:

* If consciousness is an illusion or a byproduct of certain kinds of computations which would arise in any substrate, then we should expect animals to be conscious even in the simulation.
* I've heard some argue that the simulators would be interested in the life trajectories of particular individuals, which could imply that only a few select humans would be conscious, and nobody else. (In history, we tell the stories of world-changing individuals, neglecting those of every other individual. In video games, often only the player and maybe a select few NPCs are given rich behavior.)
* The simulators might be interested in seeing what the pre-AGI world may have looked like, and will terminate the simulation once we get AGI. In that case, we'd want to go all-in on suffering reduction, which would probably mean prioritizing animals.

I agree with you that many claim the moral value of animal experiences is incommensurate with that of human experiences, and that categorical responsibilities would generally also favor humans.

I think of this question mostly in terms of the trajectory I think this nudges us towards. It feels like there's something of a hierarchy of needs for humanity as a whole, and getting out of the zone where we have extreme poverty feels like the right first step, in a way that makes me feel more optimistic about wise decision processes being able to rise to the top thereafter.

I'm not certain what current spending looks like; that might make me change my mind here. (I think it's definitely right to start ramping up spending on animal welfare at some point before poverty is entirely eliminated.)

5
JackM
Generally I think that those in richer countries are going to shape the future not those in poorer countries, so I'm not sure I agree with you about "wise decision processes" rising to the top if we end extreme poverty. For example, if we create AI that causes an existential catastrophe, that is going to be the fault of people in richer countries. Another example - I am concerned about risks of lock in which could enable mass suffering to persist for a very long time. E.g. we spread to the stars while factory farming is still widespread and so end up spreading factory farming too. Or we create digital sentience while we still don't really care about non-human sentience and so end up creating vast amounts of digital suffering. I can't see how ending poverty in lower income countries is going to reduce these risks which, if they happen, will be the fault of those in richer countries. Furthermore, ending factory farming seems important to widen the moral circle and reduce these risks.
3
Owen Cotton-Barratt
I don't disagree with you that rich countries are likely to have disproportionate influence; but I think that the presence or absence of extreme poverty in the world they're living in will have more influence on their implicit decision algorithms than you're suggesting. I think eliminating global poverty would have a significantly bigger effect reducing the risk of AI catastrophe than would eliminating factory farming. I do think I hadn't properly considered the impact of potentially-short AI timelines on this question, and that pushes in favour of animals (since there's more room for value shifts to happen quickly than economic fundamentals to shift quickly).
2
JackM
I'm skeptical of this link between eradicating poverty and reducing AI risk. Generally richer countries' governments are not very concerned about extreme poverty. To the extent that they are, it is the remit of certain departments like USAID that have little if any link to AI development. If we have an AI catastrophe it is probably going to be the fault of a leading AI lab like OpenAI and/or the relevant regulators or legislators not doing their job well enough. I just don't see why these actors would do any better just because there is no extreme poverty halfway across the world - as I say, global poverty is way down their priority list if it is on it at all.
4
Owen Cotton-Barratt
This isn't about the ways they explicitly care and work on global poverty. This is a holistic sense that the existence of extreme poverty in the world is a driver of a feeling of fraughtness, nationalism, and poor decision-making in rich countries (cf. attitudes towards immigration today; and how past eras with more extreme poverty tended to have more war). If we could choose a world without extreme poverty to develop AGI, compared to one with extreme poverty, I wouldn't be confident, but I definitely would think it was a meaningful edge (enough to bet on). I think the corresponding effects for factory farming are quite a bit weaker (though for sure there are still effects there).
2
JackM
OK thanks for your perspective, although it doesn't seem convincing to me. I could be more convinced by an argument that inequality / poverty in rich countries results in poor decision-making in those same rich countries.

Animal welfare getting so little[1] EA funding, at present, relative to global health, seems to be an artefact of Open Phil’s ‘worldview diversification,’ which imo is a lacklustre framework for decision-making, both in theory and (especially) in practice: see, e.g., Sempere (2022).

Cost-effectiveness analyses I’ve seen indicate that animal welfare interventions, like cage-free campaigns, are really excellent uses of money—orders of magnitude more effective than leading global health interventions.

Though not central to my argument, there’s also the mea... (read more)

Slightly leaning towards devoting more funds to GHD, even though I think there's enough likelihood that animals can suffer as intensely as humans.
My main reason for favouring GHD slightly is that improvements in human wellbeing, education and (political) empowerment seem paramount to the goal of increasing animal welfare long-term. 

3
CB🔸
How do you reconcile this with countries developing more and more factory farming and increasing animal consumption when they get richer? (Such as China?)
2
OscarD🔸
Maybe the idea is some sort of meat-eating Kuznets curve? Though evidence for that is limited.
1
CB🔸
Wow, as expressed in your link, the evidence for a Kuznets curve in this domain is indeed pretty poor.
1
MinusGix
Increased plenty leading to increased morality, better societies leading to more focus on effective altruism (in the general sense, rather than this specific movement), more wealth to throw around, work at replacing meat with fake alternatives (which should speed up as technology increases too), advanced technology which makes the previous far quicker to iterate on. I certainly expect in the short term to see increasing factory farming, but I don't see the current economies as being a sign that it will continue indefinitely. China, for example, is rich but is also a very quickly growing economy from not-so-rich - they will scale up their meat production quickly to meet their population's needs and desires. This, to me, is like asking why they would build a bunch of okay concrete apartments rather than fancier apartment designs from 201x with design issues despite the nicer status - because they need a lot of living space/food, the existing methods of breeding a massive number of animals work easily at scale, the technology for it isn't super widespread or optimized, and they aren't so massively rich that they can easily rebuild that entire industry. They prefer eating meat, and so the market meets that demand; if they could have been served by just-as-good alternatives that were cheaper, then they would eat those. I don't see notable reasons to believe that farming a chicken will be the most efficient way of producing meat for the next fifty years, given the rate of technology. (And that's on the tail end: even without AI beyond the current level, there are a lot of solutions to extract in fifty years.) (Current alternatives have various issues: not tasting as good, expense, political polarization. The first two are entirely solvable with enough effort, or by a generation which grew up with both. The third is a problem, but is something that would be faced in any spending on animal welfare as well, and I don't consider it a fundamental political disagreement which I think indicates
1
CB🔸
I understand the point, but I'm skeptical that investing in economic growth is the best way to lead to this desired result. If the goal is to obtain alternative proteins faster, then it seems vastly more effective to support funding and R&D for alternative proteins directly (you seem to agree with that in part).  Beyond that, it's not guaranteed that new technologies will automatically displace all types of animal farming - see this post for a good overview of that. 
1
MinusGix
Hm, the post is good, though I see this as a relatively weak statement. The amount of factory farming would be massively lower than now, presuming that my assumptions about people switching off of meat are true. I also expect that if we get a big boost of technology (even better AI-driven protein/chemical synthesis or discovery), then I don't expect the argument that we'll still farm them for dyes and such to hold. As the years go by, it becomes ever more feasible to synthesize those useful dyes or materials directly. The outcome described there would still involve a lot less (several orders of magnitude?) factory farming. I do think this point depends notably on how soon you think the technology will occur and solve a lot of the general problem (getting chemicals/proteins en masse). I find it plausible that it will come before we solve various meat alternatives (in the better-or-equivalent price/taste/cost/nutrition sense), but also find it mildly plausible that it takes a decade or two after. Point three of the article, that AI will make factory farming more efficient, is true, but I don't have a reason to believe the final conclusion. Big data analytics does not provide notable evidence that factory farming will outcompete alternative methods in the long run - it is an argument that those methods aren't constant and so have a longer shelf-life than the naive extrapolation would suggest. Growing an animal simply requires a lot of work and energy in a specific form that I don't have any reason to believe alternative meats require. As for the bioengineering example, a similar argument applies. To me, this is like saying that a person down in a mine is always going to be more efficient and scalable than a digging machine. Point four is one that I think fills out a difference in vision. They compare it to chocolate, which hasn't been replaced. My model of the world is that over the next thirty or so years we have significant advancement in fields like chemical synthesis
1
CB🔸
It's possible that animal products may be replaced in the future. But I think it's risky to assume that it will automatically happen just by boosting our technology. Growing a full animal is already inefficient in terms of calories produced, land use, climate emissions, etc. A lot of the reason we keep doing it is due to taste (not easy to simulate everything), cultural factors (some countries already banned cultivated meat), and habits.    Other reasons include intensive lobbying from the meat industry, which has managed to get a lot of subsidies, is lobbying against alternatives (e.g. passing laws preventing them from being called meat, calling them ultra-transformed), etc. Another element is simply that meat is associated with status in many countries. Then again, it's possible that technology gets good enough to replace the vast majority of animal use. But it's much less likely without interventions that boost research in alternative proteins, secure government support, fight against the lobbying of the meat industry, inform people about the benefits of doing so, etc. And we certainly shouldn't take it as something that will just happen with more technology - if it was just about efficiency we'd have already switched.
1
MinusGix
I believe it is entirely feasible to get the taste right. However, I don't believe that is a major problem. Even in the worlds where it is very expensive to get the texture exactly right, we do what many cultures have done over time and between themselves: we modify it and get used to it. Foods and other less literal tastes varying so much between cultures and even age groups makes me optimistic that even failed replications of taste/texture could replace meat, simply through a change in generation where children see it as merely another food option. Though, admittedly, we are evolved to eat meat. This likely makes us more particular, yet we also prepare other foods like vegetables in exotic manners.

----------------------------------------

I don't see why you think that if it was about efficiency we would already have switched. I'm somewhat confused: current meat production seems efficient based on people's eating habits, desire for meat, expectations about what is healthy (various people don't trust vegetarian answers, for good and bad reasons), and most importantly our tech level. Is your argument that the meat industry is getting enough subsidies that they aren't truly more efficient than current alternatives? And/or that the government isn't requiring them to price in the externalities of their effects on the land or climate? If they are actually less cost-effective (in terms of food produced) without the effective subsidies, that would be interesting information to learn, but I'd be somewhat surprised. It would actually make me more optimistic about the state of alternatives to meat, though I also understand that it would be a mark against my theory. (Just to be clear, I think transitioning will still take time. If we had gotten an instant win of better/cheaper/healthier alternatives back in 2015 without a slow buildup, that would have helped massively and things would have scaled up. I would expect a massive amount more Beyond Burgers and such in stores by now
1
CB🔸
Interesting. Your take that the meat industry would still be dominant in 2-3 decades only 5-15% of the time makes me curious. This requires that tastier and cheaper meat substitutes are around the corner, or at least available medium-term in sufficient quantities. This is interesting, but it is not the sentiment I got from people who made future projections of alternative proteins by 2050. (I don't have the exact references in my head, sorry.) For the efficiency argument, it was more about "making food that uses less land and is cheaper" - but not with the same taste, so it's not the same comparison; you don't have to take it into account. But regarding the third section, I think we are in agreement: it is worthwhile to support alternative proteins in every case, since having them decades earlier would do a tremendous amount of good.
2
MinusGix
Just to be clear, my intended claim there was a 5-15% chance of the meat industry retaining dominance for 2-3 decades after the introduction of an alternative that is very close to or beats meat entirely. Though I do think it is plausible the alternatives come soon: AI technology is advancing rapidly, and I don't believe many people's models are properly factoring in AI technology at our current level being applied to more areas, much less what we'll have in the future after significant advancements. Of course, EA tends to be a lot better at that than other charities.   As an example: theorem proving is bottlenecked by the annoying but solvable triplet of data, money to train larger models, and companies focusing on it. Scaling the current methods would hit noticeable limits because planning/search is hard, but would allow a lot of automation towards proving software correct. AlphaProof itself is then a step above the methods that came before it. This could provide a good amount of value in ensuring important software is correct, but it is generally ignored or assumed to need massive breakthroughs.   I find it plausible that more systems in the vein of AlphaFold (protein prediction, most centrally relevant to meat) can be extended to other areas of chemistry with a significant amount of time and effort spent on data collection and design. There are big data-collection problems here: we have a lot of data about food, but it is mostly locked away inside companies and not researched as carefully, at as low a level, as proteins.   I know the theorem-proving area better than the AlphaFold area, but that gets across my general view: many mental models assume we are still in 2018 but with single isolated notable advancements like AlphaFold/AlphaProof/ChatGPT, rather than a field with much left to explore via permutations of those core ideas.
1
CB🔸
Interesting. It feels like this still requires a lot of effort to make it usable in the context of alternative proteins (plus marketing, developing incentives, fighting the opposition, etc.), but if it works, that could indeed be good news. 

Both are shockingly underfunded. But I think future generations will be even more shocked by how we treated (i.e. actively caused great suffering to) farm animals than by how we failed to help humans in dire need. 

A few reasons immediately come to mind for me:

  1. There are many more animals in factory farms than humans (scale)
  2. The average suffering of these animals is likely worse than the average suffering of humans (because animals are almost uniformly kept in horrendous conditions, while humans are not) (scale)
    1. My intuition is that the "moral multiplier" of human ability to suffer is not much higher than 1, if at all, for many animals. Animals have central nervous systems and nociceptors just like we do. Mammal suffering in particular might be close to par with humans,
... (read more)

I'm fairly convinced by the scale arguments for animal welfare, but have a slight hesitation due to worldview diversification considerations, optics, and a possible lack of room for more funding. If I had to irreversibly allocate the $100m in the next ten minutes, however, I would choose animal welfare.

Humans are just more important. If you disagree, how many chickens would you trade your mother's life for?

This feels right to say, but open to arguments against it.

The only context for me where it would make more sense to spend it on AW, would be if somehow the ripple effect from doing so would benefit humans more than investing it directly into global health.

Maybe by improving nutrition, or improving global morals by not allowing other living beings to suffer, or just having a clear conscience.

I'm not saying animal suffering is right or acceptable, but it comes second, and will always come second to me, at least while human suffering is still so, so high.

Humans are just more important. If you disagree, how many chickens would you trade your mother's life for?

This is a provocative question that cuts to the heart of the issue. Let me offer a different hypothetical to illustrate the complexity of making such moral trade-offs.

Imagine a situation where you had to choose between saving the life of a complete stranger or saving the life of your mother. I expect you would choose your mother, and I would likely do the same. The emotional bond we feel outweighs our concern for a stranger.

Now consider an advanced, benevolent alien species observing this dilemma. From their impartial perspective, your mother and the stranger deserve equal moral consideration as sentient beings capable of suffering. The aliens wouldn't prioritize one over the other based on personal attachment or individual characteristics.

Expanding this principle further: a chicken's capacity to suffer deserves moral consideration as well, even if their inner lives differ from ours. The immense scale of animal suffering in factory farms — tens of billions of sentient beings in cruel conditions — is a major ethical catastrophe from an impartial view.

I'm not saying an animal suf

... (read more)
5
Andreas Chrysopoulos
Thanks for the detailed reply! You’re right, it was a provocative question trying to cut to the chase. But here’s the thing: probably every single person has people close to them that they consider family. So yeah, a random stranger would be less important to ME than my own mother, but that person also has someone that cares for them like I would for my mother. How do you explain to them that their life was sacrificed for x amount of chickens? I guess for me, there is no amount of farmed animals that’s worth a human life. For as long as investing in animal welfare means that someone dies a preventable death, every single penny should be spent on humans. And this is not a simple inconvenience that we’re considering. It’s not a paper cut for the sake of x animal lives. It’s human lives for animal lives. Would you give your life for animals? Things to keep in mind * Realistically, spending money on animal welfare does benefit humans. Our general desire to not cause pain and suffering when we don’t have to is important, and satisfying it would probably benefit us and society in many ways. I would change my answer more towards the right if someone could show me some research or arguments for that. But convincing me that any amount of farmed animals is worth more than a human life.. I’m not sure what that would take.

I think animal welfare is much more cost-effective, my slight skepticism comes from the idea of positive feedback loops and the knock-on effects in other cause areas.

I think animal welfare is still very underfunded, and the problem is very bad. My main worry is tractability, and whether we actually have levers to pull to make a significant difference.

Animal welfare is just so much more neglected, relative to the scale.

However, I don't go all the way to a strong agree since I think the evidence base is weaker and am less certain of finding good interventions; along with a stronger sense of moral responsibility towards humans; along with a bigger "sentience discount" than other moral comparisons between humans and non-human animals.

Low confidence, but my intuition is that animal welfare is more neglected and would have a better ROI in terms of suffering reduced.

I really love the visuals of the voting tool, here's how we could make it even better for future iterations.

The axes currently aren't labeled and, if I'm being really honest, I ended up being too lazy to vote as I would have had to count up the notches manually. I'm pretty sure I'm not the only one (see Beware Trivial Inconveniences).

I also suspect that it makes the results less meaningful. Even though people have wildly different views on what 7/10 or strongly agree means, there's still some degree of social consensus that has implicitly formed around thes... (read more)

4
Toby Tremlett🔹
That's helpful- thanks! Should be an easy one to fix next time. 
3
WilliamKiely
I like the idea of operationalizing the Agree/Disagree as  probability that the statement is true. So "Agree" is 100%, neutral is 50%, disagree is 0%. In this case, 20% vs 40% means something concrete.
5
Ben Millwood🔸
I wonder if we'd rather capture something like "how strongly this is true" (e.g. would $100m be much better spent on animals...) which captures both confidence and importance.
2
WilliamKiely
That sounds great too. Perhaps both axis labels should be possible, and it should just be specified for each question asked.

Reading the discussions here, I cannot shake the intuition that utilitarianism with very big numbers is once again producing weird conclusions. AW advocates are basically describing Earth as hell, with a tiny sanctuary reserved for humans who are better off than average. I need more convincing. While I cannot disagree with the math or data, I think better theories of animal suffering are needed. At what point is a brain sufficiently developed, for example, to experience suffering in a way that is morally relevant, that we should care about? Are there qua... (read more)

5
Ariel Simnegar 🔸
Hey Uri, thanks for your transparent comment! The cost-effectiveness estimates of cage-free campaigns being orders of magnitude more cost-effective than GiveWell Top Charities have several bases:

  1. The Welfare Footprint Project's incredibly exhaustive deep dive into every aspect of an egg-laying hen's life: "Overall, an average of at least 275 hours of disabling pain, 2,313 hours of hurtful pain and 4,645 hours of annoying pain are prevented for each hen kept in an aviary instead of CC during her laying life, and 1,410 hours of hurtful pain and 4,065 hours of annoying pain prevented for each hen kept in an aviary instead of a FC during her laying life."
  2. Welfare range comparisons between humans and chickens. Rethink Priorities' Welfare Range Project focused on finding proxies for consciousness and welfare, and enumerating which proxies various animals share with humans. Their methodology found that chickens feel pain approximately 1/3 as intensely as humans do. (Of course, different methodologies may give quite different answers.)
  3. Doing the math with the suffering prevented by cage-free campaigns and Rethink's welfare ranges gives a cost-effectiveness multiplier on the order of 1000x. But even if you assign chickens a welfare range 0.001x that of humans, you still get a cost-effectiveness multiplier on the order of 10x.
  4. Similarly, if you ignore Rethink's research and instead derive a welfare range from neuron counts (to penalize chickens for their small brains), you still get cage-free campaigns outperforming GiveWell Top Charities by an order of magnitude.

All of this is why I am quite confident that cage-free campaigns are indeed far more cost-effective than GiveWell-recommended charities.
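To make the mechanics of that math concrete, here is a hedged back-of-envelope sketch. The pain-hour figures and the 1/3 welfare range come from the comment above; the hen-years-per-dollar figure, the pain-intensity weights, and the $100-per-DALY benchmark are illustrative assumptions of mine. It shows the structure of the calculation, not a replication of the 1000x estimate.

```python
# Illustrative back-of-envelope for the cage-free multiplier.
# Pain hours per hen: Welfare Footprint figures quoted above (aviary vs
# conventional cage). Everything marked ASSUMPTION is mine, for illustration.
HOURS_PER_YEAR = 24 * 365

pain_hours = {"disabling": 275, "hurtful": 2313, "annoying": 4645}
# ASSUMPTION: intensity weights relative to fully disabling time.
weights = {"disabling": 1.0, "hurtful": 0.1, "annoying": 0.01}

hen_years_per_dollar = 10  # ASSUMPTION: low end of estimates in this thread
years_per_hen = 1.5        # laying lifespan of one hen
welfare_range = 1 / 3      # chicken vs human, per Rethink Priorities
dollars_per_daly = 100     # ASSUMPTION: ~$5,000/life at ~50 DALYs, GiveWell-ish

# Disabling-equivalent suffering-years averted per hen's laying life:
dis_eq_years = sum(pain_hours[k] * weights[k] for k in pain_hours) / HOURS_PER_YEAR
hens_per_dollar = hen_years_per_dollar / years_per_hen
human_daly_eq_per_dollar = hens_per_dollar * dis_eq_years * welfare_range

multiplier = human_daly_eq_per_dollar * dollars_per_daly
print(f"~{multiplier:.0f}x a $100/DALY benchmark under these assumptions")
```

With these deliberately conservative inputs the multiplier lands in the low tens; the higher hen-years-per-dollar figures cited in this thread, or heavier weights on hurtful and annoying pain, push it much further up.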

Positive knock-on effects from funding animal welfare are likely far greater than from funding global health on the present margin. 

2
MichaelStJules
What knock-on effects do you have in mind?

Animal welfare does more to push the frontiers of moral circle expansion

Reading RP's work in the last months and the posts for debate week has made me more inclined towards AW funding. 

Unfair to ask people to consider the ethics of their food while their loved ones are dying of malaria and TB.

I am having a hard time following this. We aren't, to my knowledge, asking people whose loved ones are at significant risk of dying of malaria and TB for money. AFAIK, we're not asking them to prioritize animal welfare over their loved ones in non-financial ways either. Could you explain what specifically we're asking of this class of people?

6
OllieBase
On top of Jason's point, this argument presupposes that animals are food and therefore not worthy of much if any moral concern, but there are many reasons to think animals are worthy of moral concern.
2
nathanhb
Are we not discussing the situation with them? What about a Rawlsian veil of ignorance? A social contract? If these people were in the same room with you, a mother holding her dying child in her arms, and you were holding a community meeting about whether to save her child or save a cage with some chickens in it... wouldn't she be expected to have a right to at least argue in favor of her child's life? The very fact that humans are able to be part of the discussion is itself an important argument in favor of prioritizing the needs of humans.
4
Jason
Behind the veil, I could be a chicken. If you've already decided only humans are moral patients (and so I already know I am human), the rest of the thought exercise does not seem to add much. I took Henry's argument to point to a special moral duty to one's loved ones. I have, for instance, special duties to my son. That makes certain actions appropriate or inappropriate for me; I am not going to spend money needed to save my son's life on advancing animal welfare. Telling me I should do so would be pressuring me to break the special moral duty to my son. But I can't expect other people to attach any special weight to the fact that he is my son. That's why I reacted as I did.
4
nathanhb
But the only thing the chicken will say is 'bawk cluck cluck bawk'. It seems relevant that this is neither an argument for its own welfare nor the welfare of anyone else. Claude Sonnet, GPT-4o, Gemini, Llama... all of these can at least make arguments in favor of a particular social contract, and plausibly could uphold their end of the bargain if allowed to make notes for themselves that they would see before every conversation.  I take you, as a moral patient, to place value in your son. The extra value you place on your son's life is a value I would count when summing up utilities for you. Also, I would consider it a predictive factor in estimating your behavior. I personally don't think there is such a thing as 'moral rules' by which it makes sense to judge you for valuing or not valuing your child above other humans with whom you are in an implied social contract. Which is to say, I am a moral anti-realist. 

Would you say children don't matter in themselves (only indirectly through others, like their parents or society more generally), when they're too young to "uphold their end of the bargain if allowed to make notes for themselves that they would see before every conversation"?

I considered chickens under different contractualist views here:

  1. Should our actions be justifiable to chickens, real or hypothetical trustees for them (Scanlon, 1998, p.183), or idealized rational versions of them? If yes, then chickens could be covered by contractualism, and what’s at stake for them seems reasonably large, given points 1 and 2 and their severe suffering on factory farms. See also the last two sections, on contractualist protections for animals and future people, in Ashford and Mulgan, 2018.
  2. Could the capacity to mount reasonable complaints be enough to be covered under contractualism? Can chickens actually mount reasonable complaints? If yes to both, then chickens could be covered by contractualism. Chickens can and do complain about their situations and mistreatment in their own ways (vocalizations i.e. gakel-calls, feelings of unpleasantness and aversion, attempts to avoid, etc.), and what mak
... (read more)
5
JackM
What if the mother wasn't there (say she is no longer alive) and it was just the dying baby? The only thing the baby would say is "wah wah wah" which is neither an argument for its own welfare nor the welfare of anyone else. (I'm trying to demonstrate that the ability to speak up for yourself shouldn't be a criterion in determining the strength of your moral rights...).
8
MichaelStJules
I would also add that animals do speak up for themselves. Some of our own arguments for our own welfare are very simple, or bottom out in simple claims like "this hurts!". Animal distress calls can effectively express "this hurts!". So, other animals plausibly do make (very simple) arguments for their own welfare or better treatment, we just need to try to understand what they're communicating.
2
JackM
Agreed!
3
nathanhb
Yes, the more complex take on the issue is to extrapolate. You can extrapolate the limited awareness of the chicken will never expand. You can extrapolate the child could grow into an adult who cared about their life in a rich meaningful way. Furthermore, you can extrapolate that this adult would be part of the category of individuals with whom you hold an implied social contract, and thus have a duty to respect and protect. Also, see my other comments elsewhere on this page for more disagreements with your view.
9
MichaelStJules
I'm upvoting but disagree-voting. Thanks for engaging with the comments here! Would you also extend this to fetuses, embryos, zygotes and even uncombined sperm cells and eggs? Is your position very pro-life and pro-natalist?
3
nathanhb
Okay, this is rough and incomplete, but better to answer sooner than keep trying to find better words. Not just contractualism. I think the cluster of (contractualism, justice, fairness, governance-design) is important, especially for arguing against majority-vs-minority situations, but it's only part of the picture.  It is important to also consider the entity in question: its preferences, its appreciation of life, and its potential for suffering. So in part I do agree with some of the pro-pleasure/anti-suffering ideas, but with important differences that I'll try to explain. Alongside this, also the values I mentioned in my other comment.  I would argue that there should be some weighting on something which does somewhat correlate with brain complexity, in the context of self- and world-modeling. For an entity to experience what I would call suffering, I think it can be argued that there must be a sufficiently complex computation (potentially, but not necessarily, running on biological neurons) associated with a process which can plausibly be tied to this self-model. There must be something which is running this suffering calculation.  This is not distributed evenly throughout the brain; it's a calculation performed by certain specific areas within the brain. I would not expect someone with a lesion in their visual cortex to be any less capable of suffering. I would expect someone with lesions in their prefrontal cortex, basal ganglia, or the prefrontal-cortex-associated area of the cerebellum to have deficits in suffering capacity. But even then, not all of the prefrontal cortex is involved, only specific parts.  I don't think suffering happens in sensory neurons receptive to aversive stimuli. I don't think an agent choosing to avoid aversive stimuli or act towards self-preservation is sufficient for suffering. I think I need a different word than suffering to describe a human's experience. I want to say that an insect doesn't suffer, a dog does, but a human does
3
nathanhb
Ok, I just read this post and the discussion on it (again, great insights from MichaelStJules). https://forum.effectivealtruism.org/posts/AvubGwD2xkCD4tGtd/only-mammals-and-birds-are-sentient-according-to Ipsundrum is the concept I haven't had a word for, of the self-modeling feedback loops in the brain. So, now I can say that my viewpoint is somewhat of being a Gradualist over quantity/quality of ipsundrum across species. Also, I have an intuition around qualitative distinctions that emerge from different quantities/qualities/interpretations of experiences. Thus, that a stubbed toe and a lifetime of torture seem like qualitatively different things, even if their component pieces are the same.
3
MichaelStJules
I'm sympathetic to gradualism. I'm also sympathetic to the view that no number of toe stubs aggregate to outweigh a lifetime of torture (maybe unless the toe stubs together feel like intense torture).
3
MichaelStJules
Also this thread (and maybe especially my response) may be useful.
5
JackM
This moral theory just seems too ad-hoc and convoluted to me and ultimately leads to conclusions I find abhorrent i.e. animals can't speak up for themselves in a way that is clearly intelligible for humans so we are at liberty to inflict arbitrary amounts of suffering to them. I personally find a utilitarian ethic much more intuitive and palatable, but I'm not going to get into the weeds trying to convince you to change your underlying ethic. 
1
Nathan Young
Can I push you on this a bit?
7
Henry Howard🔸
Sure

I will comment based on my personal experience as a small-scale poultry farmer. Due to space limitations, I chose to use the battery cage system for egg production. Ideally, I would have preferred a cage-free system, but the cost of building such infrastructure was beyond my reach. While it is unfortunate, this highlights the challenges many farmers face. When advocating for improved animal welfare, it is essential to understand these constraints and provide education, as well as financial support, to help farmers transition to more humane and sustainable ... (read more)

Animal welfare is (even) more neglected than global health. My sense is that $100M being spent in a coordinated manner would have an outsized effect on the field. It would help catalyze future organizations and future funding to a greater extent than it would if spent on global health.

How does marginal spending on animal welfare and global health influence the long-term future?

I'd guess that most of the expected impact in both cases comes from the futures in which Earth-originating intelligent life (E-OIL) avoids near-term existential catastrophe and goes on to create a vast amount of value in the universe by creating a much larger economy and colonizing other galaxies and solar systems, and transforming the matter there into stuff that matters a lot more morally than lifeless matter ("big futures").

For animal welfare spending, then, pe... (read more)

I currently agree pretty strongly, because the basic case for the quantity of animal suffering in factory farms is very strong. My uncertainty is over the tractability, and I hope to learn more about that, and adjust my vote, during the week. 

1
Steven Rouk
I added a couple notes about tractability in my comment here, if anything in there is new information to you: https://forum.effectivealtruism.org/posts/nrC5v6ZSaMEgSyxTn/discussion-thread-animal-welfare-vs-global-health-debate?commentId=oKbr42XMkwu8bgCMt. I think there are a lot of impactful, established initiatives that could utilize extra funding, as well as some newer initiatives that are very promising that could be scaled up. Plus lots of work to be done in neglected regions.

In terms of the EA charities most commonly cited in these areas only, I think global health charities are much better evidenced.

I think the most effective animal welfare interventions are probably more effective, I'm just much less sure what they are.

Animal welfare is more important and more neglected, although tractability is less clear.

I want to note that there is more consensus in favour of the proposition than I expected. I would have guessed the median was much nearer 50% than it is. 

  • Cost-effectiveness estimates generally suggest that, for most reasonable assumptions about the moral weight and degree of suffering of animals, animal welfare interventions are the most cost-effective
  • Animal welfare is more neglected than global health, but not (again for reasonable assumptions about how much animal wellbeing matters) proportionally less important

epistemic status: extremely quickly written thoughts, haven't thought these through deeply, these are mostly vibes. i spent 10 minutes writing this out. i do not cite sources.

  • seems like non-human animals are suffering much more than humans, both in quantity of beings suffering & extent of suffering per being
    • it might be that non-human animals are less morally valuable than humans — i think i buy into this to some extent, but, like, you'd have to buy into this to a ridiculously extreme extent to think that humans are suffering more than non-human animals
... (read more)

By my count, animal welfare is 100x more neglected than global health (Edit: global development, not global health — my mistake). I'm unsure how much bigger it is in scale (given that making trades between humans and animals is hard) — but I'd guess it's very very much larger in scale.

seems like the marginal value is much higher

(Placement confidence: fairly low) 

I'm envisioning putting the $100MM in a trust and (as an initial strategy) spending ~$15MM extra a year until expended.

I think others have stated the general case for animal welfare as particularly neglected (although I do not agree with many of those posts asserting an astronomical difference for various reasons). So I'll focus this comment on why I didn't initially place further along on the animal-welfare side:

(1) I tentatively think AW work can be particularly effective because it is often so leveraged; ... (read more)

While I deeply value human welfare, I believe the combination of vast scale, neglectedness, and tractability makes a compelling case for prioritizing animal welfare more than we currently do — especially from an impartial, evidence-based perspective. Many on the opposing side mention that they assign more moral worth to humans than non-humans, but I don't think that view is incompatible with allocating more resources towards animal welfare.

If it can be spent over any period of time, this removes my biggest concern (that we wouldn't find $100m worth of good opportunities for animals). 

I am voting based on my prior that animal work is more impactful as it is more neglected and there are so many more animals.

No one is purely impartial. Virtually everyone allocates more resources to themselves than a stranger. Almost every parent will allocate more resources to their children than a stranger. Many choose to allocate more resources to a sibling or close friend in need even if a stranger is in “more” need.

Impartiality is a spectrum, and it is driven by personal beliefs and values. I’m more partial towards humans on this spectrum than many other voters on this poll. From a positive utilitarian perspective, a human life that is saved has more potential to make thei... (read more)

6
Ariel Simnegar 🔸
Hey Josh, just drilling in on the claim that "any currently existing moral value comparison that results in the decision to donate significantly more money towards animal welfare than people must be under valuing the welfare of humans". Do you agree that that basically implies that humans are worth infinitely more than animals? Because if e.g. we can spend a dollar to prevent one person's paper cut, or prevent one trillion dogs from being skinned and boiled alive, this would imply we should prevent the paper cut. If you're instead saying that you think any value system should assign at least the majority of the global philanthropic budget should go to humans rather than animals, I still think you should be in favor of allocating this marginal $100m to animals, given that this is the current split of spending on humans vs farmed animals:
2
Josh Piecyk 🔹
No, I don't agree that my claim implies that humans are infinitely more morally valuable than animals; rather, they are significantly more valuable. I believe that we are currently allocating too many resources to animal welfare. I don't think it's productive to think of hypothetical scenarios that are extremely far detached from reality, like your paper cut scenario. Instead, I'm imagining a child in sub-Saharan Africa going blind due to malnutrition or dying from malaria, and I'm having a hard time imagining prioritizing the welfare of any number of chickens over that child. I acknowledge that the non-infinite number exists; maybe it's 100,000 or 1,000,000 cage-free chickens in exchange for a human life. However, it seems clear to me that the magnitude of current human suffering deserves every marginal bit of resources it can receive, at the cost of helping animals. 
3
Ariel Simnegar 🔸
Historically, that number has been 30,000 to 400,000 cage-free chickens in exchange for a human life. (Using $5000 to save a life through the Against Malaria Foundation, compared to moving 9 to 120 years of hens' lives per dollar to a cage-free environment, and a lifespan of 1.5 years per hen.) So we are currently within the margin of error for the ballpark you quoted. Perhaps, given that you're partial to humans over animals due to our shared species, that's not enough for you to allocate the marginal $100m to animals. But maybe that shifts your degree of certainty that we should allocate it all to humans?
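The arithmetic behind that 30,000 to 400,000 range can be checked with a quick sketch, using only the figures quoted in this comment:

```python
# Chickens moved to cage-free per human life saved, from the quoted figures:
cost_per_life = 5_000            # $ to save a life via Against Malaria Foundation
hen_years_per_dollar = (9, 120)  # hen-years moved to cage-free per dollar (range)
years_per_hen = 1.5              # laying lifespan of one hen

hens_per_life = [cost_per_life * hy / years_per_hen for hy in hen_years_per_dollar]
print(hens_per_life)  # [30000.0, 400000.0]
```

The width of the range comes almost entirely from the uncertainty in hen-years moved per dollar.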
3
Josh Piecyk 🔹
Thank you for showing me that calculation. Upon further thought, I think my belief is more along the lines of 1,000,000 to 3,000,000 chickens being equivalent to a single human life.  Based on suffering reduction alone, my opinion is that human suffering carries at least 1,000,000 times the weight of a chicken's. When also considering the potential indirect positive impacts a human can have, as well as the difference in experience size between humans and animals, the decision to prioritize human welfare over animal welfare becomes even clearer to me. I hope our society reaches a point at which human suffering has been reduced enough that we can focus on animal welfare, but I think we're likely decades away. I'd like to caveat this by saying I'm rather new to effective altruism, and I expect my views to evolve the more I learn. I'm curious to gain a deeper understanding of the underlying philosophical premises inherent in some of the views expressed here. 

I support both clauses. I see a moral argument, or at least a reasonable justification, for favoring humans over animals, holding measured 'ability to feel pain' constant.

However, I'm convinced by the evidence that funding to support programs like corporate campaigns for cage-free eggs are likely to be effective, and have vastly higher welfare gains per dollar, by most reasonable measures of relative chicken/human welfare.

The animal welfare space has very little funding and $100 million is likely to make a substantial positive difference, both directly/immediately, and in shifting cultural and political attitudes.

I don't think animal interventions are worse, but I do think the statement is wild speculation. I don't think EAs can effectively compare interventions between very different cause areas. 

I suspect most EAs don't actually think through their own cause prioritisation but instead defer to others, so I don't view the consensus as compelling evidence to change my mind.

Evidence that ripple-effects of interventions are negligible would change my mind though. I find the EV calculations for the short-term supremacy of animal welfare interventio... (read more)

I am very convinced by the arguments presented in Ariel Simnegar's "Open Phil Should Allocate Most Neartermist Funding to Animal Welfare". I still have uncertainty about moral weights, so I am not at 100% agree.

In a nutshell - there is more suffering to address in non-human animals, and it is a more neglected area.

Seems likely correct. I'm not fully certain because I wouldn't be that surprised to be wrong. It is much easier to help animals than people on the margin.

I update a bit more because I haven't read good arguments against and have seen some possible arguments debunked.

Many global health interventions plausibly have negative effects on animal welfare (e.g., increasing factory farming). The inverse doesn't seem as true.

Due to their neglectedness (and the lack of animal participation in markets) animal interventions are also probably more efficient at converting $$ -> utils

I recently learned that animal welfare accounts for only 3% of EA's funding, which seems far out of proportion to other causes, taking into account the number of individuals affected, the degree of certainty that the pain exists (it's not hypothetical), and the intensity of the pain experienced. It therefore seems to me to be too neglected.

  1. Animal welfare is extremely neglected compared to human philanthropy. (However, effective interventions receive only a small fraction of altruistic funding intended to help humans.)
  2. I'm highly uncertain about counterfactuals and higher-order effects, such as changes in long-term human population and eating patterns due to accelerated global economic development.

It's much easier to fundraise for GH&D (less "weird" / more legible)

8
david_reinstein
I agree, but I'm not sure that's relevant to what the question is asking? I think it presumes you have the money to spend ... or have the ability to shift the funds.

As several posts here have already highlighted, the total suffering endured by animals is far greater than that experienced by humans—unless we consider animal moral weights to be hundreds or thousands of times less important than our own.

Moreover, as shown by Rethink Priorities' research, the best animal welfare organizations could be far more cost-effective than the best short-term alternatives for humans.

Yet, funding for animal causes represents only a small fraction of Open Philanthropy's budget, which seems inconsistent.

If ... (read more)

Animal welfare space needs considerably more rigorous research (which should be done, but will cost much less than $100m) before knowing if it includes better investments than the top ones in global health.

Industrialised animal farming is the single biggest cause of suffering, the most neglected / under-reported and under-funded and therefore deserves all the funding it can possibly source. Moreover, reducing animal agriculture would also reduce risks (zoonoses / pandemics); environmental harms and improve human health outcomes. It would be a win-win for multiple cause areas.

The neglectedness and intensity of animal suffering would, in a triaging scenario, see me prioritizing it over many global health interventions. I am open to the idea that many animals do actually live lives worth living. That said, I suspect I would rather forego living than spend an existence as an industrially farmed broiler chicken or fish. These are the animals I would spend money on at the margin, with research into the welfare of more liminal animals like crustaceans and insects (probably not large increases for wild for now, since tractability may ... (read more)

Human welfare seems much less neglected than the welfare of factory farm animals. Even just an egg may represent many hours of suffering to produce. If insects are not so much less sentient than humans, their welfare could be a huge deal too.

So I favor animal welfare. But it's even better when it's backed by strategic thinking and a clear theory of impact. The total number of future sentient beings could be many orders of magnitude greater than the number of existing ones. We are unable to "feel how big" those numbers are, but it matters a lot, and it's no... (read more)

The main reasons for going as far to the animal welfare side as I did:
-I suspect there are more unexplored opportunities to have an outsized impact on the animal welfare side due to neglectedness.
-The scale of the problem is far larger (~100 billion lives a year in meat production, and that's not even the entire problem).
-The meat eater problem plays a part as well. If you save someone and they go on eating meat, that could have a negative impact as well. However, this line of argumentation might be a can of worms.

Global health still has some weight due t... (read more)

I tend to agree with Ariel Simnegar's "Open Phil Should Allocate Most Neartermist Funding to Animal Welfare", however I still have some uncertainty in moral weights.

  1. It's almost impossible to predict the long term effect.

    Humans are the only actors that can produce moral force. But the problem is, how much will AW change indirectly by spending on GH? I have neither evidence nor instinct to guide me on this.

  2. This question lacks the context of how we use money, so our answers would vary a lot without a consensus.

    ”It would be better for EA practitioners to spend ..." is different from "It would be better for existing major organizations to spend ..." in terms of cost effectiveness.

    My feeling is slanted towards disagree, but I'm sure it's biased. I simply don't know much about AW, so I choose neutral.

It seems plausible animals have moral patienthood and so the scale of the problem is larger for animals whilst also having higher tractability. At the same time, you have cascading effects of economic development into better decision making. As a longtermist, this makes me very uncertain on where to focus resources. I will therefore put myself centrally to signal my high uncertainty.

Animal welfare seems likely more tractable, substantially more important, and vastly more neglected. 

I based my vote on the fact that I have close to 0 doubt that antispeciesism is true (the view that you can't discriminate against someone on the basis of their species). 
If you consider antispeciesism true, you have to take into consideration that humanity is a really small part of all animals living. Moreover, we have pretty good reasons to think that animals are living in worse conditions than humans (pretty obvious for farm animals that live in industrial farms, more challenging intuitively for wild animals, but many studies make us think that sufferin... (read more)

I read somewhere that around 2% of EA donations are allocated towards animal welfare. I don't know what an ideal world's split would be, but it would have AW funding at a lot higher than 2%. 

Not high confidence. I'm guessing 100m is tiny for global health and large for animal welfare. Still, I value human well-being over animal well-being, other things equal.

Their suffering is worse when considering amount of suffering × number of individuals. Animal welfare work could include expanding the moral circle, which could end up benefiting global health, but I don't expect the reverse to be true.

No clue, tough question

Several factors make me confident in the importance of this choice: the sheer scale and intensity of the suffering involved, the lower cost of helping nonhuman individuals in farms compared to humans, and the comparatively small size of the animal welfare / advocacy movement, giving $100m a potentially greater long-term impact.

The default trajectory for animal welfare looks grim, extremely grim, and does not seem about to reach a tipping point anytime soon. I do believe that a pig that shrieks is in pain, and that inflicting this pain is immoral.

I am more uncertain when it comes to tractability. I also favor pluralism and tend to view things with an inner preferential voting system to adjudicate my moral uncertainties.

Uncertain of how big of a %-increase a $100m addition is for the animal advocacy movement. But definitely a lot higher than for global health. While animal advocacy is much more neglected I'm wondering how much more funding it can effectively absorb, or how fast the cost-effectiveness would decline. Given the scale of the problem probably not that fast? For global health, I believe this funding wouldn't have substantial decreasing marginal returns. (Some quick thoughts without having read others' comments)

Animal welfare is much more neglected than global health (though maybe a bit less tractable).

JamesÖz's post explaining that the default trajectory for animal welfare is far worse than the default trajectory for global health.

Animal welfare is much more neglected than global human health. Even if there were strong arguments on scale and solvability in the opposite direction, I don't believe they can tip the scale.

One point that I think does not get discussed enough (though I will be happy to be corrected on this) is that animal welfare work in many cases is going up against resistance from some of the biggest and most politically powerful industries in the world. I did see a reference to animal welfare being more politicized in this post, but I think it needs to be emphasized more that it's not just "politicized" but in fact is in opposition to the political power of huge corporate entities like Tyson Foods, JBS, Smithfield, etc. who from what I can tell have just ... (read more)

1
CB🔸
I agree that there is opposition from companies, but I think that this aspect is already included in the cost-effectiveness estimates given to animal welfare work? Regarding reducing suffering by gene editing, it sounds promising on paper, but is it actually available, or close to being? I haven't followed the topic closely, so I'm wondering. 
2
Adam Shriver
That's an interesting thought. But how exactly do you price that opposition in, since it presumably depends on the psychology of the people who run those companies? The degree to which companies are willing to fight various projects would make a huge difference, and they ultimately have more financial resources at their disposal than the EA movement. On a more optimistic note, maybe cases like the passage of Proposition 12 in California show that even these companies lack the power to stop certain well-targeted approaches. (fingers crossed that the new Farm Bill in the U.S. does not roll back this progress). Is gene editing to reduce suffering available? There has been a good amount of research that shows knockouts of different aspects of pain and other negatively valenced states in laboratory animals. How close it is to being actually available would depend on what would be needed to get regulatory approval and what additional tests would be required in order to feel confident that the modifications are actually improving welfare (since we run into the epistemic questions about knowing what non-verbal animals are actually feeling). 
1
CB🔸
When I say the opposition by companies is included in the cost-effectiveness analysis, I mean that most of the cost of, say, cage-free campaigns is dedicated to convincing companies (through different tactics, from negotiation to pressure campaigns). When someone says "cage-free campaigns are cost effective", they mean "paying people to influence large companies brings about enough positive effects to be effective, despite opposition".
1
Adam Shriver
I see what you're saying. It might be worth noting that the companies that are hopefully being convinced by these campaigns (such as companies like McDonalds or Walmart that sell food directly to consumers) are not the companies I have in mind as being both politically powerful and vehemently opposed to change (meat "producers" like JBS, Tyson, Smithfield).

There are more sentient beings affected by factory farming and the problem is more neglected.

@Toby Tremlett🔹 is there a way to see the final debate week banner? I wanted to include a screenshot in the slides for my local group's next meetup, but can't find a way to access the banner now that debate week is over.

3
Toby Tremlett🔹
Good question! This has been on my mind - yes, in the future we will find a way to show debate week banners after the event (had a dream about it last night- but the design in my dream was very ugly). For now, I'll ask a dev if I can get a screenshot to put in a quick take and link here. 
2
Toby Tremlett🔹
Here you go

I'm very unsure, but slightly lean towards animal welfare due to the heuristic that the further outside typical moral circles the more neglected are the opportunities

Each individual's qualia being equal, healthier and happier humans actively improve the future, whereas healthier and happier animals do not.  

 

Mostly the meat-eater problem, also cost-effectiveness analyses. Also higher neglectedness on priors.

The amount of suffering is orders of magnitude greater in the animal cause area.

I think most of my reservations are mostly deontological, plus a few fringe possibilities

3
Nathan Young
Would you like to expand on this a bit?
3
Cameron Holmes
I'm completely sold on the arguments in general EV terms (the vast suffering, tractability, importance, neglect - even within EA), up to the limits of how confident I can be about anything this complex. That's basically the fringe possibilities - weird second, third-order impacts from the messiness of life that mean I couldn't be >98% on something like this. The deontological point was that maybe there is a good reason I should only care or vastly weight humans over animals through some moral obligation. I don't currently believe that but I'm hedging for it, because I could be convinced. I realise now I'm basically saying I 90% agree that rolling a D20 for 3+ is a good idea, when it would be fair to also interpret it that I 100% agree it's a good idea ex ante. (Also my first comment was terrible, sorry I just wanted to get on the board on priors before reading the debate)

My soft sense is that great opportunities in the animal space face greater funding constraints than in the global health space. 

I am quite receptive to caveats about how easy it is to scale current orgs and interventions, but that seems more of a practical issue (that can partially be solved through more money?).

Other than that, I just think it's a crazy scale of very neglected suffering and the sooner we figure out how to make significant changes to the system the better.

The $100m is much more likely to make irreversible progress on solving animal welfare issues than it is on global health, because the latter is way less neglected.

99% of sentient beings are non-human animals + the worst suffering in the world is animal suffering.

Roughly 60% of all mammals are raised for food globally, and over 85% of the world eats or seeks a diet that includes animal meat, with an even larger percentage consuming animal-based foods. Assuming this investment produces healthier animals that do not require antibiotics or other mass livestock farming practices that negatively impact humans, the investment is a net positive long-term. Healthier food could lead to a healthier society, which improves economies in various ways. 

The question is too vague to agree with to the nth degree. However, global health is heavil... (read more)

Animals win on scale & neglectedness while humans win on my (and maybe God's) speciesism bias (but if God exists I think He would appreciate us trying to help out animals i.e. His creations).

The scale of animal suffering and exploitation is vast, yet it remains one of the most neglected moral crises of our time. Beyond its ethical implications, animal farming is a key driver of some of the most pressing global public health challenges, including antibiotic resistance, zoonotic pandemics, and the rise of chronic diseases. Heart disease, the leading cause of death globally, is closely linked to the consumption of animal products—meat, eggs, and dairy—laden with saturated fat, endotoxins, and inflammatory compounds. By shifting away from animal f... (read more)

I assume that the primary goal is to reduce extreme suffering or negative experiences. Based on the evidence I've reviewed, efforts to alleviate suffering in factory farming appear to be far more cost-effective in achieving this goal.

I don't see compelling evidence that improvements in global health significantly enhance worldwide peace and security, which could potentially reduce existential risks from advanced AI. This connection would have been, in my view, the strongest argument for prioritizing global health interventions.

While I believe global health... (read more)

Many individuals and organizations are already concerned with global health and actively working to improve it. However, animal welfare requires a significant initial effort to elevate this pressing ethical issue in the public's priorities.

Last time I checked, improving the lives of animals was much cheaper than improving human lives; and I don't think that arguments saying that humans have more moral weight are enough to compensate.

Will there be any follow-up survey on this? I'd be interested in knowing what people learned and changed their minds about, and how people's views changed on animal welfare vs global health overall.

I know this is a debate, but one thing I want to touch on is that animal welfare and human welfare are not necessarily in conflict. I think initiatives like preventing the rise of factory farming in the developing world could be really great for both animals and humans. Animals wouldn't have to exist in horrible conditions, and humans could (as far as I know; don't have sources with me right now) have greater food, water, and resource security, reduced ecological/climate devastation, and reduced risk of disease, to name a few things. I think it's important to think about ways in which we can jointly improve animal welfare and global health, because we all ultimately want to create a better world.

  1. I feel like animal welfare is based on incorrect philosophical arguments. I do not think that animals (sentient) suffer in the same sense that humans (sapient) suffer. I do not believe that any amount of the qualitatively different animal suffering adds up to any amount of human suffering. They are non-commensurate. For more detail, see here: https://www.lesswrong.com/posts/Htu55gzoiYHS6TREB/sentience-matters?commentId=wusCgxN9qK8HzLAiw  In accordance with this view, I argue that not a single dollar should be spent on animal suffering while there is s
... (read more)
4
MichaelStJules
Does urgency (point 2) apply to global health specifically, given the debate topic of animal welfare vs global health? Maybe we can consider biorisk, including biorisk from TAI (EDIT: and other ways we might all die, and other GCRs), to fit inside global health, but I don't think that's what's usually intended.
1
nathanhb
Global health is about the lives of humans and human suffering. It seems to me that AI safety is the #1 global health issue at large in our current world. But considering that you mean 'health interventions for poor people', how do you separate that from AI risk? If you have good reason to believe that if you fail to act then the person will be killed in less than a decade, and so will all animals, all life on Earth... Seems odd to me to be putting 'treat curable diseases of human population x' into a different bucket than 'keep human population x from being murdered'. Aren't these both health interventions? Don't they both deliver QALYs?
4
MichaelStJules
I agree you can consider them "health interventions", but I think what people have in mind by global health in general and in this debate are mostly GiveWell recommendations, and maybe other cause areas in Open Phil's Global Health and Wellbeing focus areas, which are separate from global catastrophic risks (GCRs). Maybe the line is somewhat artificial. One reason to separate GCRs from global health is that GCRs and GCR interventions seem very one-shot,[1] poorer evidenced and much more speculative than many global health interventions, like GiveWell recommendations. If you want to be more sure you're making a difference,[2] GiveWell recommendations seem better for that.

1. ^ Betting around whether a global catastrophe occurs at all, with highly correlated individual outcomes, not individual deaths separately, e.g. one case of malaria prevented.
2. ^ Although perhaps a very different difference from what GiveWell estimates, since they don't account for the possibility that we all get killed by AI, or that the lives we save today go on for hundreds of years due to technological advances.
3
nathanhb
Well, if AI goes well, things on my short list for what to focus on next with the incredible power unlocked by this unprecedentedly large acceleration in technological development are: alleviating all material poverty, curing all diseases, extending human life, and (as a lower priority) ending cruel factory farming practices. This critical juncture isn't just about preventing a harm, it's a fork in the road that goes either to catastrophe or huge wins on every current challenge. Of course, new challenges then arise, such as questions of offense-defense balance in technological advancements, rights of digital beings, government surveillance, etc. Edit: for additional details on the changes I expect in the world if AI goes well, please see: https://darioamodei.com/machines-of-loving-grace
3
emre kaplan🔸
I think this level of incommensurability both contradicts folk ethics (most people I speak with agree that preventing animal torture is more important than preventing a mild human headache) and is a pretty confident view that assigns a very low weight to animals' interests. Do you think our reasoning in moral philosophy and understanding of animal biology is reliable enough to be that confident?
2
nathanhb
I think you misunderstand my point. I would also say that minor temporary inconvenience is incommensurate with severe crippling harm or death. So I would say that no amount of animal deaths or torture could be commensurate with the life of a human. And I would say that no amount of animal headaches could be commensurate with a human headache.

Here's a discussion I had with Claude Sonnet 3.5 on the subject:

Nathan: On the question of animal welfare and folk ethics. I want to explore what you estimate a typical viewpoint for most humans would be on the following ethical dilemma: Suppose a room contains a human child, and a container of shrimp. You must choose to kill either the human child or all the shrimp. Which should you choose?

Claude Sonnet 3.5: When considering this ethical dilemma from the perspective of what a typical human viewpoint might be, I would estimate that most people would choose to kill the shrimp rather than the human child. Here's a breakdown of the reasoning that might lead to this common stance:

1. Human life valuation: In most human societies, there's a strong cultural and emotional emphasis placed on the value of human life, especially that of children. This often stems from our biological instincts as a species, as well as societal norms and moral frameworks.
2. Cognitive capabilities: Humans generally attribute higher moral status to beings with more advanced cognitive capabilities. A human child has greater potential for complex thought, self-awareness, and future planning compared to shrimp.
3. Emotional capacity: Most people would assume that a human child has a much greater capacity for suffering and emotional experiences than shrimp do. This often factors into moral considerations.
4. Familial and social impact: The death of a human child would likely have a significant emotional impact on family members and the broader community, which many would weigh heavily in their decision.
5. Potential and future value: A human ch
6
emre kaplan🔸
"I do not believe that any amount of the qualitatively different animal suffering adds up to any amount of human suffering." I was responding to my interpretation of the sentence above. I agree that it's a common position to assign infinitely higher importance to saving a human life compared to preventing any amount of animal suffering. My understanding of the quote above was that you made an even stronger claim, since the expression is "any amount of human suffering", which should include very low amounts of human suffering. But I still think folk ethics on this issue is overconfident and doesn't take moral uncertainty properly into account. I also think those kinds of incommensurability claims face other, more general theoretical problems. "Saving" a life is just another expression for extending it, since no intervention makes people immortal. That position would claim a 0.0000000001% increase in the chance of prolonging a human life by one day is more important than preventing 1,000,000,000 animals from being born into torture.

Another meta point about the visuals: I don't like the +[number] feature, which makes it so you can't tell at a glance that the voting is becoming very tilted towards the right side.

under neartermism, which is not my view but which may be the spirit of the question, animal welfare seems obviously better because non-human animals are extremely neglected by human civilization, either left to die in the wild or cut up in mechanized torture facilities.

under longtermism, it's basically a question of which could positively affect the values of the first agent superintelligence. probably neither would have a strong effect, but conditional on an effect being had, i'd guess it would route through the increased moral progress caused by animal welfare advocacy, somehow leading to a less human-centric forever-value.

(100% under neartermism, ~80% under longtermism in recognition of uncertainty)

I haven't seen any convincing and coherent framework that can analytically equate animal lives to human lives, but I am open to having my mind changed. My current position is informed mostly by my (flawed) intuition

5
emre kaplan🔸
Unitarian views are actually pretty common in the field. It's hard to have all three of these:
1. There is no moral hierarchy between humans, no matter what their mental capacities are.
2. Species-membership itself is merely genetics and it's morally irrelevant. What morally matters is other morally relevant capacities like sentience, consciousness, mental capacities etc.
3. There is some kind of moral hierarchy between humans and animals.

I believe that I, like many visitors on the Forum, would usually be very careful to vote on either end of an extreme. The reason I opted to move all in into animal welfare is that, while I acknowledge and put some credence on views around ripple effects and moral uncertainty (in the sense of placing some weight on societal consensus views), these views primarily have an influence on my view of how global philanthropic spending should be allocated. 

However, when it comes to an additional $100m, the (difference of) neglectedness completely wipes out the... (read more)

Somewhat neutral, though I concur that animal welfare is more neglected and that a straightforward short-term calculation comes out on the animal welfare side. However:

With AI, ensuring longevity for many people may be a better use, though I'm uncertain about the exact costs. Animals are more interchangeable than humans, and will die before long regardless, which means that ensuring more humans live longer lives is more valuable. The more people that live longer, the more that are able to participate in a possible longevity escape velocity.  (Theoret... (read more)

3
Ben Millwood🔸
I don't think whether animals or humans are interchangeable (within each group) is the right question. If a human dies and another human is born, the latter does not replace the former in terms of their unique characteristics as an individual, their relationships, etc. But they both have lives worth living, and I don't think it's obviously the case that one long life is better than two half-long lives -- sure, there are some advantages to longer lives in terms of accumulation of knowledge, memories, relationships etc, but I think these effects are relatively minor among reasons why a life is good or rich or well-lived (e.g. I think children have rich and valuable lives -- although they miss some things about the adult experience, it's not so much to make them dramatically different).
2
MinusGix
If a human dies and we have a lot of humans very very similar to them, I think it is plausible that we've lost less. Still a negative, but not as much of one. (Which is one answer that I favor to the repugnant conclusion: you can't just add new people indefinitely.) I also think this makes more sense for societies that can freely copy minds. For animals my logic was much the same, but that there's less variation/uniqueness that is lost because (for example) chicken minds have fewer axes on which they vary notably.

Here's another argument: I think the "one long life vs two half-long lives" is a good example, but that it matters how long they live. Better to have a parrot that lives for a year rather than two parrots that live for six months. The parrot has more opportunity to learn and build on what it has learned and gets more value out of living for longer. A chicken wouldn't have as much value because it has stronger limits on what it can learn, be curious about, enjoy, and so on. But a parrot that lives 50 years vs two that live 25? I would lean towards two.

I disagree about how much children miss from adult lives, though it depends on how young we're calling children. Children are certainly very valuable, but I do think they miss out on a lot of adult experiences. The problems they solve are less intricate, the understanding of complex joys is significantly weaker (a child playing with toys vs. reading a 150k word book), and so on. But I don't know where I'd do the tradeoff precisely. I think part of the value of children, beyond being a good in and of themselves, is that they will grow up to be adults who have richer, more vibrant and varied experiences.

However, I don't think that matters much here. I don't believe that the longevity we manage to acquire is merely one long life vs. two half-lives. It is more of a "one ten century long human life vs. (tens of? (hundreds of?)) thousands of various animals living a couple years more". I think the human has

Interesting to note that, as it stands, there isn't a single comment on the debate week banner in favor of Global Health. There are votes for global health (13 in total at time of writing), but no comments backing up the votes. I'm sure this will change, but I still find it interesting.

One possible reason is that the arguments for global health > animal welfare are often speciesist and people don't really want to admit that they are speciesist - but I'm admittedly not certain of this.

2
Jason
I think we want people to vote, and vote honestly with their beliefs. I don't think the second paragraph helps with those goals. It puts people who want to vote GH -- note that I did not -- in a position where they have to defend their votes or feel people are making inferences about their votes. A likely outcome is that they just won't vote.
3
JackM
Personally I would gain more value from knowing why people would prefer $100m to go to global health over animal welfare (or vice versa) than knowing if people would prefer this. This is partly because it already seems clear that the forum (which isn't even a representative sample of EAs) has a leaning towards animal welfare over global health. So if my comment incentivises people to comment more but vote less then that is fine by me. Of course my comment may not incentivise people to comment more in which case I apologise.
3
Jason
Yeah, my guess is that stigmatizing one possible response would additionally risk skewing the responses you do get. People usually have multiple reasons for decisions and are somewhat likely in a non-anonymous discussion to substitute a reason they perceive as socially acceptable for one they perceive as stigmatized by a decent fraction of their community.
2
JackM
I'm not sure how I have stigmatised any particular response.

Being the executive director of ACE, I'm obviously quite biased. Then again, I joined ACE because I was convinced of the need for more funding for animal health and wellbeing.

At ACE, once our current busy period has ended, we'll dive into the perspectives and arguments presented in this debate week as a team and likely post here and on our blog our reflection.

After just skimming this week's content, the arguments that I personally find most convincing come down to (1) scale and extent of suffering, (2) how little money effective animal advocacy is currentl... (read more)

With the current state of things, I do not believe that 100m will produce as much positive outcomes in specific areas of animal welfare, (say, sensitization, cash incentives for cage-free farming) compared to specific areas of global health, (say, maternal health, finding solutions to AMR, malaria prevention, NTD's).

Animal welfare has been neglected by governments and funders, and yet as the human population grows, the welfare of animals is further compromised.

Rethink Priorities moral weights

The marginal effect of increased spending (say $1 billion) on animal welfare is likely to be far greater than the marginal impact of an extra $1 billion on global health. Granted that public health challenges in low and middle-income countries can at times be substantially lessened with relatively small inputs (e.g., niacin enrichment of corn meal), overall, the impact of relatively small amounts of strategically invested money can have a significant impact on the animal space. For example, I believe the support ($1-2 million) Open Philanthropy has provide... (read more)

The scale of animal suffering is much greater than that of the global health crisis, and it receives comparatively less attention.

6
Julia_Wise🔸
Welfare standards on farms like larger cage sizes, stunning before killing, etc don't have obvious benefits for humans. Arguably there are downstream benefits by making meat more expensive and thereby causing less of whatever indirect effects meat consumption creates.
3
Ben Millwood🔸
I haven't thought too much whether all animal rights interventions also improve global health, but I think even if I believed that were true, it wouldn't tell me whether they improved global health a comparable amount to working on global health directly, so it doesn't feel like the right question for deciding what the highest priority project is, IMO. (In fact I agree with your conclusion for other reasons, just wanted to flag why this argument didn't feel convincing to me.)
2
Robi Rahman
Reducing animal agriculture for the benefits to humans by reducing habitat destruction is a really roundabout and ineffective way to help humans. If you want to help humans, you should do whatever most helps humans. If you want to protect someone from climate change, you should do whatever most effectively mitigates the effects of climate change. If you want to help animals for the sake of helping animals, you should do that. But you shouldn't decide that helping animals is better than helping humans on the grounds that helping animals also indirectly helps humans.

Yet another thing to mention, although not directly related to human welfare vs non-human animal welfare... What about the moral value of AI, of digital entities? They are already far more complex and human-like than shrimp. When do they reach non-zero value?
 

 

https://www.lesswrong.com/posts/ZcJDL4nCruPjLMgxm/ae-studio-sxsw-we-need-more-ai-consciousness-research-and 

https://www.lesswrong.com/posts/pieSxdmjqrKwqa2tR/avoiding-the-bog-of-moral-hazard-for-ai 

9
MichaelStJules
I don't think current LLMs (or other current AIs) have much if any moral value, but shrimp have a decent chance of having moral value. LLMs are designed and trained to mimic human outputs (and finetuned with RL). You could train a human who is incapable of feeling pain to act like they're in pain (say, in circumstances under which a typical person would be in pain). This doesn't make them feel pain. They're missing crucial internal functional roles. The same goes for training LLMs to act like they care about anything at all. Their words don't actually indicate they care about the things they talk about or anything at all. LLMs might care about things anyway, e.g. avoiding states from which they would typically get negatively reinforced, if they had internalized the reinforcement signal, but I don't think LLMs have done this. In my view, shrimp are probably more like us than current AIs in the ways that matter intrinsically. I do think conscious AI is possible, and could come soon, though. Some other relevant discussion assesses LLM consciousness according to various theories of consciousness; the paper is Butlin et al., 2023. Wilterson and Graziano (2021) argue that a specific artificial agent they designed is kind of conscious — both conscious and not conscious — according to Attention Schema Theory.
3
nathanhb
Thank you, Michael, for your insightful comment and very interesting source material! If you are willing, I'd love to hear your take on this comment thread on the same subject: https://www.lesswrong.com/posts/RaS97GGeBXZDnFi2L/llms-are-likely-not-conscious?commentId=KHJgAQs4wRSb289NN
-1
nathanhb
On a meta note, it is interesting to me that my comments here involving me quoting Anthropic's Claude Sonnet got heavily karma-downvoted. It seems that people here are willing to accept the possibility of non-human entities having moral value and having relevant contributions to this discussion in the form of their implied revealed opinions. Yet the opinions of an AI are considered unworthy of inclusion into the discussion. Not just disagreed with, but not worth being allowed to be present at all. I wonder how that decision will be viewed by future philosophers....
5
JackM
I have downvoted the LLM answers. I don't like your approach of simply posting long conversations with LLMs on a forum for various reasons. Firstly, your prompts are such that the LLM provides very broad answers that don't go very deep into specific points and often don't engage with the specific arguments people have put forward. Secondly, your prompts are worded in a leading, biased way. Here is an LLM opining on this very question (I know this is hypocritical but I thought it would be an amusing and potentially effective way to illustrate the point). Note the conclusion saying "leverage the LLM as a tool, not as a crutch".
1
nathanhb
I have upvoted your use of an LLM because this comment is more thoughtful, balanced, and relevant than your average comment. And much more so than the average commenter's comment in this particular comment thread. I normally don't post LLM outputs directly, but this comment thread is so full of unconsidered and unelaborated-upon opinions, I figured this would be a rare place in which the LLM mediocrity would be a convenient way to raise the average quality of the content. My hope was to stimulate thought and debate; to initiate a conversation, not to provide a conclusion to a debate.
4
JackM
In another comment thread I asked a specific question to understand your underlying moral theory better, which enabled you to helpfully elaborate on it. I was then able to conclude I did not align with your moral theory due to the conclusions it led to, and so could discount the conclusions you draw from that theory. My question also led to a very good, probing question from MichaelStJules which you didn't answer. I found this back and forth very helpful as the specific questions uncovered underlying reasons behind our disagreement. Personally, I hope going forward you respect the LLM's advice and refrain from posting LLM outputs directly, instead opting to use LLM responses to develop your own considered response. I think that makes for a better discussion. Indeed, this comment is an example of this as I made use of the LLM response I recently posted.
-4
nathanhb
I am delighted by Michael's comments and intend to reply to them all once I've had the chance to carefully examine and consider his linked materials. Overall, I feel quite disappointed in this comment thread for being in what I would call an "activist" mindset, where the correctness of one's view is taken for granted, and the focus is on practical details of bringing about change in the world in accordance with this view. I think the question of prioritization of human welfare versus animal welfare should be approached from a "philosopher" mindset. We must determine the meaning and moral weight of suffering in humans and non-humans before we can know how to weigh the causes relative to each other. Michael StJules is one of the few animal welfare advocates I've encountered who is willing to engage on this philosophical level.

Here are some quotes from elsewhere in this comment section that I think exemplify what I mean by activist mindset rather than philosopher mindset (single-line separators indicate the comments were in a thread responding to each other):

---

emre kaplan: Disclaimer: I'm funded by EA for animal welfare work. Some thoughts: a. So much of the debate feels like a debate on identities and values. I'd really love to see people nitpicking into technical details of cost-effectiveness estimates instead. ... (Truncated)

---

Ariel Simnegar: So it is more important to convince someone to give to e.g. the EA animal welfare fund if they were previously giving to AMF than to convince a non-donor to give that same amount of money to AMF. I've run into a similar dilemma before, where I'm trying to convince non-EAs to direct some of their charity to AMF rather than their favorite local charity. I believe animal welfare charities are orders of magnitude more cost-effective than AMF, so it's probably

As an aside, I don't think someone writing an "activist" comment disqualifies them from being truthseeking.

I used to find it absurd to think one could justify spending on animals when they could be spending on humans. Over the years, I changed my mind through discussing consciousness and moral weights with others, reading many relevant writings, and watching relevant documentaries. I wrote a post explaining why I changed my mind, and engaged extensively with hundreds of comments.

So far, nobody has posed an argument for prioritizing global health over animal welfare which I've found convincing. If the case for animal welfare is indeed correct, then marginal global health funding could be doing orders of magnitude more good if instead allocated to animal welfare. I don't think it means I have bad epistemics, or that my writings aren't worth engaging with, if my actions are following the logical conclusions of my changed beliefs.

If global health is indeed better at the margin than animal welfare, then I would love to know, because that would mean I've been causing enormous harm by allocating my time and donations to preventing us from reducing more suffering. I strive to remain as open-m... (read more)

7
nathanhb
I agree that "activist" comments don't imply that someone isn't truthseeking. I think that whether an activist mindset or a philosophical mindset should be brought to bear on a given problem is highly context dependent. I was trying to make the point that I was disappointed that the responses to this question of cause prioritization (human welfare vs animal welfare) seemed to be predominantly activist mindset oriented. To me, it seems this question is a context that, at the very least, requires a balance of philosophy and activism, if not predominantly philosophy. This interpretation is, I think, supported by this question being asked in the context of a "debate week", where the implied goal is for us to explain our viewpoints and attempt to resolve our differences in worldviews. An example of a question where I would be disappointed to see predominantly philosophical debate instead of activist planning would be: "Given the assumption that there is a 1:1e6 moral value tradeoff for cows to shrimp, and how best should we allocate a budget of 1 million dollars between this set of existing charities: (list of charities)." To respond to a question like that with philosophical debate of the premise would seem off-topic to me. The question specifies a premise, and if you want to fight the hypothesis you ought to initiate an entirely separate conversation. In your specific case, Ariel, I'd like to thank you for your above comment explaining your philosophical journey and giving links to sources you found influential. This is exactly the sort of comment I would like to see in a conversation like this. I will take the time to read what you have linked, and think carefully about it, then get back to you on where your info has changed my mind and where I might still disagree.
4
CB🔸
Very good answer. I have also followed the same path, from donating to the AMF to supporting animal welfare work, since it helps more beings.
3
nathanhb
Are there specific sources or arguments which you recall as being the key influences in you changing your mind?
4
CB🔸
Good question. I think the first step was learning more about the terrible ways animals are treated. I read "Eating Animals" when I was 18, which informed me of this. I really liked his approach of "food and tradition are important for me, but this is an important enough topic that I should dig into it". This didn't trigger many donations or any "activism", but it made me go vegetarian. At some point I was eating shrimp in a meal and for some reason I visualised the shrimp in my mind, going about its life in the sea. And I was like "I don't want to kill them. If they were in front of me I wouldn't kill them". Fast forward a few years, I was donating to both the AMF and some animal charities - basically doing the default stuff regarding EA donating. But I spent a lot of time comparing different cause areas between them. And I could see that the number to save a human life was super high ($5,000 per life), and the number to save an animal and spare them a life of torture was dirt cheap (less than a dollar). So naturally, since my goal is to help the largest number of beings, I redirected my efforts and money toward animals. I also changed the topics I worked on (my main topic was mostly environmental stuff). I started with supporting standard cage-free commitments, but complemented that by gradually helping more neglected and numerous animals (e.g. donating to the Shrimp Welfare Project), because I didn't find a good enough reason to say that smaller animals do not matter as much, beyond our basic "this feels weird" bias. Sure, there's a possibility they're not sentient, but I simply don't see why evolution wouldn't have implemented a mechanism as useful as pain in other beings. We have millions of years of common evolutionary history, and behavioural evidence clearly indicates pain and panic when animals are attacked. I still updated downward towards Rethink Priorities' moral weights because they did much more research than me on that. The basic argument is pretty simple: animals
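The comparison in the comment above can be put as back-of-the-envelope arithmetic. A minimal sketch: the $5,000-per-life and under-$1-per-animal figures come from the comment itself, while the 1/100 moral-weight discount is a purely hypothetical placeholder for illustration, not an actual Rethink Priorities estimate:

```python
# Back-of-the-envelope comparison using the figures from the comment above.
# The moral-weight discount below is a hypothetical placeholder, NOT an
# actual Rethink Priorities number.

cost_per_human_life = 5000.0  # dollars to save one human life
cost_per_animal = 1.0         # dollars to spare one animal a life of suffering

# Animals helped for the price of saving one human life
animals_per_human_budget = cost_per_human_life / cost_per_animal
print(animals_per_human_budget)  # 5000.0

# Even discounting each animal to 1/100th the weight of a human
# (placeholder value), the animal intervention still comes out ahead.
moral_weight = 0.01
weighted_human_equivalents = animals_per_human_budget * moral_weight
print(weighted_human_equivalents)  # 50.0
```

Under these assumed inputs, the animal-side spending would need its per-animal moral weight pushed below 1/5,000th of a human's before the two options break even.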
4
JackM
There are plenty of in-depth discussions on the topic of moral weights. But it seems your preferred moral theory is contractualism, which I understand renders the question of moral weights somewhat moot. There was this post on contractualism arguing it leads to global health beating animal welfare. The problem for you is that many are attracted to EA precisely because of impartiality and so have already decided they don't like contractualism and its conclusions. Check out this comment, which points out that contractualism can favor spending a billion dollars saving one life for certain over spending the same amount of money to almost certainly save far more lives. A conclusion like this just seems antithetical to EA. If you want to argue what we should do under a contractualist moral theory you can do it here; you just might not get as much engagement as on philosophy-focused forums, since a lot of people here have already decided they are consequentialist (often after deep reflection). I'm personally happy to discuss underlying moral theories. This is why I'm looking forward to your answer to MichaelStJules' question, which points out your contractualist theory may lead to special moral concern for, as he puts it, "fetuses, embryos, zygotes and even uncombined sperm cells and eggs". This would then have a whole host of strongly pro-life and pro-natalist implications.
3
MichaelStJules
FWIW, this is a consequence of non-aggregation. You can have a fully aggregative or even additive contractualist view, and it would not have this implication. It could be basically utilitarian with respect to moral agents (and excluding conscious beings who aren't also moral agents). But contractualism is usually not aggregative, AFAIK.
-9
nathanhb

the scale/degree of suffering is much higher

healthy humans take care of their animals

4
Raph
Countries whose development level has increased have also increased their exploitation of non-human animals. See, for example, the explosion of factory farming in China over the last decades. To me, your statement is simply false, unless we were only talking about pets, but that would be silly since they are in such a minority. Moreover, I'd argue that the reverse is correct: making progress on animal exploitation would hugely benefit human beings for several independent reasons:
* Climate change: Animal exploitation is one of the main drivers of climate change.
* Health: Animal agriculture constitutes a huge risk of a catastrophic pandemic.
* Famine: Humanity is raising and killing 90+ billion land animals each year, yet there are somehow still people starving even though there are only 7 billion humans. For one example, it is estimated that "replacing all animal-based items in the U.S. diet with nutritionally equivalent plant-based alternatives would free enough land to feed an additional 350 million people".

Since animal welfare is closely related to human health (for example, through diet and pathogenic diseases), it is an important issue to tackle.

A human life is more valuable to me than that of an animal.

Spending money on animal welfare (specifically veganism) will automatically benefit global human health, and will spare millions of animals from horrible lives.

7
OscarD🔸
This would be nice if it is the case, but it makes me think of https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence. I think we can care about animal welfare without regard to the (probably small) flow through effects to human health.

Through the ancient art of "multiplication" we can see how important it is to stop the current situation where trillions are being tortured to death.

I have a strong prior for people being much, much more important than animals.

3
veggiechips
I, too, have a strong intuitive sense that human lives are, on average, much more valuable than animal lives, yet I strongly agree with the proposition. In fact, I think most people would agree with that prior, including those who strongly agree. Let me pose a few questions to examine this view more deeply:

1. Is there a specific trait or set of traits that humans possess which animals lack that grounds our belief that humans are more important? Is it intelligence, self-awareness, ability to suffer, something else? And do all humans have those traits to a greater degree than all animals?
2. Even if we believe each individual human life is more valuable than an individual animal life, could there be a number of animals whose collective suffering would outweigh a human's? Is there a ratio where the sheer scale of animal suffering would compel us to prioritize it? 10 animals, 100, 10,000?
3. If we faced a situation where we could spend $100 to give 10 people a slight positive boost to their well-being or to eliminate extreme suffering for 1,000 animals, would our prior that humans are more important still lead us to help the 10 humans over the 1,000 suffering animals?

My overall point is this: even with a strong prior that humans are more valuable, if we zoom out and look at the metrics of scale, neglectedness, and tractability, there are still compelling reasons to allocate more resources to animal welfare.

I consider AI safety to be the primary overwhelming cause area in Global Health and Animal Welfare. But I think that even considering saving the lives of all human and animals doesn't begin to recognize the scope of the issue. This is about extinction. So you must also consider the moral weight of all the future lives lost. I think also that considering life from a hedonistic standpoint of enjoyment/suffering, as if it could sum to a total and thus judge the life worthwhile or not by the total is fundamentally incorrect. I think it's super weird that so ma... (read more)

3
MichaelStJules
I'm pretty sympathetic to your view here[1] and preference- and desire-based theories generally. But I'm also skeptical that these dramatically favour humans over nonhuman animals, to the point that global health beats animal welfare.

1. I suspect the cognitive versions of preferences and desires are not actually interpersonally comparable in general, with utilitarian preferences vs deontologist preferences as a special case. They may also exist in simple forms in other animals, and I give that non-negligible probability. There may be no fact that points to humans mattering more (or other animals mattering more than humans). We may just need to normalize or use Pareto, say. See my posts Types of subjective welfare, Which animals realize which types of subjective welfare? and Solution to the two envelopes problem for moral weights.
2. I think many other animals have access to things like love and achievement, e.g. animals who raise their own offspring. Here's a nice illustration from Peter Godfrey-Smith's recent 80,000 Hours podcast episode.

  1. ^

    For keeping people alive, not bringing them into existence, given my person-affecting intuitions.
3
nathanhb
I agree that there are difficult unresolved philosophical questions in regards to hypothetical not-yet-extant people who are varyingly likely to exist depending on the actions of currently extant people (which may be a group that includes blastocysts, for instance). In regards to non-human animals, and digital entities, I think we need to lean more heavily into computational functionalism (as the video you shared discussed). This point too, is up for debate, but I personally feel much more confident about supporting computational functionalism than biological chauvinism. In the case of complex-brained animals (e.g. parrots), I do think that there is something importantly distinct about them as compared to simple-brained animals (e.g. invertebrates). Some invertebrates do tend to their young, even potentially sacrificing their own lives on behalf of their brood. See: https://entomologytoday.org/2018/05/11/research-confirms-insect-moms-are-the-best/ I think that in order to differentiate the underlying qualia associated with this behavior in insects versus the qualia experienced by the parrots defending their young, we must turn to neuroscience. In a bird or mammal neuroscience is able to offer evidence of the computations of specific sets of neurons carrying out computations such as self-modeling and other-modeling, and things like fondness or dislike of specific other modelled agents. In insects (and shrimp, jellyfish, etc), neuroscience can show us that the insect brains consistently lack sets of neurons which could plausibly be carrying out such complex self/other social modeling. Insect brains have various sets of neurons for sensory processing, for motor control, and other such basic functions. Recently, we have made a comprehensive map of every neuron and nearly all their associated synapses in the preserved brain of an individual fruit fly. We can analyze this entire connectome and label the specific functions of every neuron. I recently attended a talk b
2
JackM
I would personally say no unless the moments of glory help others sufficiently to offset the negative experiences of the life in question. In other words, I am a hedonist and I suspect a lot of others in this thread are too.
-7
nathanhb
-7
nathanhb

Animal Welfare is so neglected... it is just mind-blowing. 

Animals ought not be fungible to humans!

6
Robi Rahman
Humans are just lexically worth more than animals? You would torture a million puppies for a century to protect me from stubbing my toe?
1
Raph
bruh

In analyzing the $100 Million Dilemma—whether to prioritize saving human lives or endangered species—a more profound conceptual framework can be developed by integrating several underexplored dimensions that transcend the typical ethical and ecological perspectives.

1. Ecological Economics of Sustainability vs. Externalities of Anthropocentrism 

A key tension in this debate stems from the difference between immediate, human-centered interventions and systemic, ecosystem-centered conservation. The decision is framed as a zero-sum choice, when in reality,... (read more)

[comment deleted]4
2
1
[comment deleted]1
0
0