I'm a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.
I would hope that in a community committed to impartiality, one need not have to make the case for why it’s worth caring about the welfare of beings that happen not to be members of our species
I think EA's cause prioritization would look very different if it genuinely were a "community committed to impartiality" regarding species. Under impartiality, both of these interventions are on the order of 1000x as cost-effective as GiveWell top charities.[1] (One could avoid this conclusion by believing pleasure/pain only account for on the order of 0.1% of welfare, but this is a deeply unusual view and is empirically dubious.[2]) Open Phil (OP) has recognized this since 2016.[3]
However, to this day, OP has allocated only 17% of its annual neartermist funding to animal welfare.[4] If OP really believes animal welfare is ~1000x as cost-effective as GiveWell top charities, it's difficult to understand how this allocation of funding could possibly be morally justified. Yes, many caveats could be made.
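To put the tension in rough numbers (a back-of-the-envelope illustration using only the figures above, not anything OP has published): taking the ~1000x estimate at face value, the share of total modeled impact produced by the 83% of neartermist funding that goes elsewhere is roughly

$$\frac{0.83 \times 1}{0.17 \times 1000 + 0.83 \times 1} \approx 0.5\%,$$

i.e. the 17% allocated to animal welfare would account for ~99.5% of the good done.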
Why doesn't OP allocate a majority of neartermist funding to animal welfare? I don't know. My guess is that key decisionmakers aren't "committed to impartiality" regarding species. Holden Karnofsky has said as much: "My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern."[6]
So, what to do? For one, it would be extremely helpful for OP to clarify their views on the questions relevant to animal welfare (how much of welfare hedonism accounts for, whether one should be impartial regarding species), the cruxes that would change their minds about cause prioritization, and the counterpoints that explain why they haven't changed their minds. (I'll be publishing a post within the next few months with the above arguments.)
I wish you were right that EA is a "community committed to impartiality" regarding species. However, empirically, it seems that's not the case.
Vasco Grilo (2023). "Prioritising animal welfare over global health and development?" https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and
Severe pain, such as that of cluster headaches, is associated with greatly increased suicidality. Lee et al. (2019). "Increased suicidality in patients with cluster headache". https://pubmed.ncbi.nlm.nih.gov/31018651/
"If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x)." Holden Karnofsky (2017). "Worldview Diversification". https://www.openphilanthropy.org/research/worldview-diversification/
Ariel Simnegar (2023). "Open Phil Grants Analysis". https://github.com/ariel-simnegar/open-phil-grants-analysis/blob/main/open_phil_grants_analysis.ipynb
Open Philanthropy. "Rethink Priorities — Moral Patienthood and Moral Weight Research". https://www.openphilanthropy.org/grants/rethink-priorities-moral-patienthood-and-moral-weight-research/
Holden Karnofsky (2017). "Radical Empathy". https://www.openphilanthropy.org/research/radical-empathy/
Thanks for the response! I think the care required to be a "morally safe" meat eater would have to be very scrupulous indeed. Effectively, such a person would have to be vegan when eating food bought by others, unless they're confident that the buyer shares their philosophy of scrupulously verifying humane raising and slaughter.
I scrupulously kept kosher during my childhood and adolescence, which seems to require a similar level of effort. I almost never ate out, except at the handful of restaurants in my town which were certified kosher. At baseball games, I had drinks but not food. I didn't eat meals prepared at my non-religious or non-Jewish friends' houses unless the food was obviously raw (like a carrot) or came in kosher packaging (like kosher snacks).
Let me tell you, that was a lot of work! Even though veganism is much more restrictive, I actually find it far easier to keep, since it's relatively easy to verify and to communicate to others.
Nice post! I'd encourage avoiding insect-based protein even if it becomes more available.
But entomophagy is not necessarily more humane than factory farming of livestock all things considered, and along some dimensions it's actually worse, because it involves killing vastly more animals per unit of protein.
https://reducing-suffering.org/why-i-dont-support-eating-insects/
What I've rarely or never seen are anecdotes from "reluctant vegans" - people who, despite hating vegan food, not particularly feeling passionate about veganism, not having vegan friends, and missing on the easy sharing of meat-based meals with friends and family, nevertheless have made a principled choice to be vegan over the long-term purely on the grounds that it's a morally safe choice. If I did see such anecdotes, I think that understanding why and how they made the switch might be helpful in making the switch myself.
This largely applies to me.
When I went vegan, I wasn't well-versed in moral philosophy, but I was familiar with the analogous debate around abortion. Opponents of abortion often argue from marginal cases that there's no consistent dividing line between fetuses and born babies, and I considered how similar arguments from marginal cases would imply that we shouldn't kill animals for our own pleasure.
After being vegan for over a year, I stumbled across Matt Adelstein's article about how factory farming is the greatest atrocity in history. Somehow, I'd just never learned about factory farming prior to this. Veganism went from the "morally safe choice" to the overwhelmingly morally mandatory one. That, I think, is the difference between our approaches to veganism.
To me, calling veganism the "morally safe choice" is like being given two options: (a) burn a hundred pigs to death because it's fun to watch them run around screaming, or (b) don't do that, and then calling (b) the "morally safe choice". On the contrary, (a) is the choice only a psychopath would make, and (b) is the choice any person with a drop of morality or consistency would make, if they truly understood what was at stake.
I think there's a big difference between strong longtermism (the argument you state) and my comment's argument that FEM's intervention is net negative.
My comment argues that while FEM's intentions are good, their intervention may be net negative because it prevents people from experiencing lives they would have been glad to have lived. For my comment's argument to be plausible, all one needs to believe is that the loves and friendships future people may have are a positive good. Yes, my comment appeals to longtermism's endorsement of this view, but its claims and requirements are far more modest than those of strong longtermism.
There is no double standard or singling out here. I think global health work is good, and support funding for it on the margin. I believe the same about animal welfare, and about longtermism. Yes, some interventions are more cost-effective than others, and I think broadly similar arguments (e.g. even if you think animals don't matter, a small chance that they do matter should be enough to prioritize animal welfare over global health due to animal welfare's scale and neglectedness) do indeed go through.
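To make the structure of that parenthetical argument concrete (the numbers here are purely illustrative; suppose animal welfare interventions are ~1000x as cost-effective as global health conditional on animals' experiences mattering morally), even a modest credence that animals matter gives animal welfare a large expected-value edge:

$$\underbrace{0.1}_{\text{credence that animals matter}} \times \underbrace{1000}_{\text{relative cost-effectiveness if they do}} = 100 \gg 1,$$

so on these numbers one's credence would have to fall below roughly 0.1% before global health came out ahead.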
If you provided me another example of a neartermist intervention which prevents people from experiencing lives they would have been glad to have lived, I would make the same argument against it as in my earlier comment. It could be family planning, or it could be something else (e.g. advocacy of a one-child policy, perhaps for environmentalist purposes).
I'm also quite sympathetic to the pure philosophical case for strong longtermism, though I have some caveats in practice. So yes, I don't think your statement of strong longtermism is unreasonable.
This is a good critique of MEC (maximizing expected choiceworthiness). Thanks for spelling it out, as I've never critically engaged with it before. At a high level, these arguments seem very similar to reductios of fanaticism in utilitarianism generally, such as the thought experiment of a 51% chance of double utility versus a 49% chance of zero utility, and Pascal's mugging.
I could play the game with the "humans matter infinitely more than animals" person by saying "well, in my philosophical theory, humans matter the same as in yours, but animals occupy the same lexicographic position as humans". Of course, they could then say, "no, my theory places humanity one lexicographic degree above yours", and so on.
This reminds me of Gödel's first incompleteness theorem, where you can't fix your axiomatization of mathematics just by adding the Gödel statement to the list of axioms, because then a new Gödel statement pops into existence. Even if you include an axiom schema where all of the Gödel statements get added to the list of axioms, a new kind of Gödel statement pops into existence. There's no getting around the incompleteness result, because it comes from the power of the axiomatization of mathematics, not from some weakness which can be filled. Similarly, MEC can be said to be a "powerful" system for reconciling moral uncertainty, because it can incorporate all moral views in some way, but that same power allows views to be constructed which "exploit" MEC in a way that other approaches to reconciling moral uncertainty aren't as susceptible to.
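As a toy sketch of that "exploit" (my own illustration, with made-up credences and values rather than anyone's actual moral weights):

```python
# Toy illustration of maximizing expected choiceworthiness (MEC) and how a view
# can "exploit" it by inflating its own stakes. Credences and values are made up.

def expected_choiceworthiness(credences, values):
    """Sum over theories of (credence in theory) * (value the theory assigns to the option)."""
    return sum(credences[theory] * values[theory] for theory in credences)

credences = {"animals matter": 0.5, "humans only": 0.5}

help_humans = {"animals matter": 1, "humans only": 1}
help_animals = {"animals matter": 100, "humans only": 0}

print(expected_choiceworthiness(credences, help_humans))   # 0.5*1 + 0.5*1 = 1.0
print(expected_choiceworthiness(credences, help_animals))  # 0.5*100 + 0.5*0 = 50.0

# The "humans only" theorist can flip MEC's verdict without changing anyone's
# credences, simply by declaring their stakes to be astronomically larger
# (the analogue of claiming a higher lexicographic position):
help_humans_inflated = {"animals matter": 1, "humans only": 10**6}
print(expected_choiceworthiness(credences, help_humans_inflated))  # 500000.5
```

Any rival view can then respond by inflating its own stakes in turn, which is the regress described above.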
I think the conclusion should instead be that we should take the impact of neartermist interventions on the experiences of future beings very seriously.
It's not necessary to endorse total utilitarianism or strong longtermism for my comment's argument to go through. If you see the loves and friendships future people may have as a positive good, even though those people don't exist yet, and even if you don't weigh them as highly as those of people living in the present, then I think you should carefully consider what my comment has to say.
When people feel like they have to choose between a cherished belief and a philosophical argument, their instinct is often to keep the cherished belief and dismiss the philosophical argument. It's entirely understandable that people do that! It takes strength to listen to one's beliefs being questioned, and it takes courage to really deeply probe at whether or not one's cherished belief is actually true. However:
What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.
Eugene T. Gendlin, Focusing (Bantam Books, 1982).[1]
Quoted by Eliezer Yudkowsky in "Avoiding Your Belief's Real Weak Points".
Your statements about person-affecting views (PAV) make sense. I typically think about PAV the way you wrote:
A person-affecting view could ground value by using a total view-compatible welfare scale and then just restricting its use in a person-affecting way
But there could be other conceptions. Somewhat tangentially, I'm deeply suspicious of views which don't allow comparison to other views; refusing comparison looks to me like a handwave to avoid having to engage critically with alternative perspectives.
If I'm talking to a person who doesn't care about animals, and I try to persuade them using moral uncertainty, and they say "no, but one human is worth infinity animals, so I can just ignore whatever magnitude of animal suffering you throw at me", and they're unwilling to actually quantify their scales and critically discuss what could change their mind, that's evidence that they're engaging in motivated reasoning.
As a result, I hold very low credence in views which don't admit some approach to intertheoretic comparison. I haven't spent much time thinking about which approach to resolving moral uncertainty is best, but MEC has always seemed to me to be a clear default, just as maximizing expected value (EV) is in everyday decisionmaking. Like EV maximization, MEC can fairly be accused of fanaticism, which is a legitimate concern.
On neutrality, I've always considered the intuition of neutrality to be approximately lumpable with PAV, so please let me know if I'm just wrong there. From what I recall, Chapter 8 of What We Owe the Future argues strenuously against both the intuition of neutrality and PAV, and when I was reading it, I didn't detect much of a difference between MacAskill's treatment of the two.
Thanks for these caveats! I largely agree, but they seem to only have a modest impact on the 99% claim.
Regarding intertheoretic comparison, my prior is that a person-affecting view (PAV) should have little to no effect on one's valuation of welfare. I don't really see why PAV and non-PAV views would radically disagree on how important it is to help others. In this case, the disagreement would indeed have to be radical: even if, for some reason, PAV caused someone to 10x their valuation of welfare, they'd still have to be ~90% certain PAV was true for FEM to be positive.
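To spell out the arithmetic behind that last sentence (the ratio is back-solved from the 99% figure and is purely illustrative): let B be FEM's benefit to existing people as valued without the 10x multiplier, and suppose the disvalue of the prevented lives on a non-PAV is 99B, which is what a 99% credence threshold implies under simple expected-value weighting. If PAV multiplies the benefit by 10, the credence p in PAV needed for FEM to come out positive is still

$$p \cdot 10B \;\ge\; (1 - p) \cdot 99B \;\Longrightarrow\; p \;\ge\; \frac{99}{109} \approx 0.91.$$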
For PAVs where value is grounded quite differently, I don't have an informed prior on just how different the PAV's grounding of value may be. If there are well-supported PAVs where welfare is clearly valued far more highly than under non-PAVs, then that would update the 99% claim. However, I don't know of any such PAV, nor of any non-PAV where welfare is valued far more highly than under PAVs (which would have the opposite effect).
Your second consideration makes sense, and might result in a modest dampening effect on the 99% number, if the increase in mothers' standard of living due to FEM's intervention is weighted highly.
Couldn't agree more on the farmed and wild animal effects :) I won't pretend to have any degree of certainty about how it all shakes out.
Agreed. I'm planning on writing up a post about it, but I'm very busy and I'd like the post to be extremely rigorous and address all possible objections, so it probably won't be published for a month or two.