I'm a Senior Researcher for Rethink Priorities, a Professor of Philosophy at Texas State University, a Director of the Animal Welfare Economics Working Group, the Treasurer for the Insect Welfare Research Society, and the President of the Arthropoda Foundation. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.
Thanks, Nick, both for your very kind words about our work and for raising these points. I’ll offer just a few thoughts.
You raise some meta-issues and some first-order issues. However, I think the crux here is about how to understand what we did. Here’s something I wrote for a post that will come out next week:
Why did a project about “moral weight” focus on differences in capacity for welfare? Very roughly, a moral weight is the adjustment that ought to be applied to the estimated impact of an animal-focused intervention to make it comparable to the estimated impact of some human-focused intervention. Given certain (controversial) assumptions, differences in capacity for welfare just are moral weights. But in themselves, they’re something more modest: they’re estimates of how well and badly an animal’s life can go relative to a human’s. And if we assume hedonism—as we did—then they’re something more modest still: they’re estimates of how intense an animal’s valenced states can be relative to a human’s. The headline result of the Moral Weight Project was something like: “While humans and animals differ in lots of interesting ways, many of the animals we farm can probably have pains that aren’t that much less intense than the ones humans can have.”
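To put that definition in symbols (my own gloss here, not notation from the report): if an animal-focused intervention produces an estimated impact $I_A$ in animal welfare units and $w_A$ is that species' moral weight, then the human-comparable impact is $I_{\text{comparable}} = w_A \times I_A$; and given those controversial assumptions plus hedonism, $w_A$ is just the ratio of the animal's welfare range to a human's, i.e., an estimate of how intense the animal's valenced states can be relative to ours.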
I don’t think you’ve said anything that should cause someone to question that headline result. To do that, we’d want some reason to think that a different research team would conclude that chickens feel pain much less intensely than humans, some reason to think that neuron counts are good proxies for the possible intensities of pain states across species, or some principled way of discounting behavioral proxies (which we should want, as we otherwise risk allowing our biases to run wild). In other words, we’d want more on the first-order issues.
To be fair, you’re quite clear about this. You write:
I present four critical junctures where I think the Moral Weights project favored animals. I don’t argue that any of their decisions are necessarily wrong, only that each decision shifts the project outcome in an animal-friendly direction and sometimes by at least an order of magnitude.
But the ultimate question is whether our decisions were wrong, not whether they can be construed as animal-friendly. That’s why the first-order issues are so important. So, for instance, if we should have given more weight to neuron counts, so be it: let’s figure out why that would be the case and what the weight should be. (That being said, we could up the emphasis on neuron counts considerably without much impact on the results. Animal-to-human neuron count ratios aren’t vanishingly low. So, even if they determined a large portion of the overall estimates, we wouldn’t get differences of the kind you’ve suggested. In fact, you could assign 20% of your credence to the hypothesis that animals have welfare ranges of zero: that still wouldn’t cut our estimates by 10x.)
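To make the arithmetic behind that last claim explicit, here’s a toy calculation with a made-up point estimate (the 0.3 below is purely illustrative, not one of our figures):

```python
# Toy illustration with a hypothetical point estimate, not the project's numbers.
estimate = 0.3  # suppose: a chicken's welfare range, with humans normalized to 1

# Put 20% credence on "the welfare range is actually zero" and keep 80% on the
# original estimate. The expectation only shrinks to 0.8x of its original value.
revised = 0.2 * 0.0 + 0.8 * estimate

print(revised)             # ≈ 0.24
print(estimate / revised)  # ≈ 1.25 -- nowhere near a 10x cut
```

The same holds whatever the starting estimate is: zeroing out 20% of your credence only shrinks the expected value by 20%; a 10x cut would require putting roughly 90% of your credence on welfare ranges of zero.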
All that said, you might contest whether the headline result is what I’ve suggested. And indeed, people on the Forum are using our numbers as moral weights, presumably because they accept (implicitly or explicitly) the normative assumptions that make moral weights equivalent to estimates of differences in the possible intensities of valenced states. If you reject those assumptions, then you definitely shouldn’t use our numbers as moral weights. Still, if you think that hedonic goods and bads are one component of welfare, then you should use our numbers as a baseline and adjust them. So, on one level, I think you’re operating in the right way: I appreciate the attempt to generate new estimates based on ours. However, that too requires a bunch of first-order work, which we took up when we tried to figure out the impact of assuming hedonism. You might disagree with the argument there. But if so, let’s figure out where the argument goes wrong.
One final point. I agree—and have always said—that our numbers are provisional estimates that I fully expect to revise over time. We should not take them as the last word. However, the way to make progress is to engage with hard philosophical, methodological, and empirical problems. What’s a moral weight in the first place? Should we be strict welfarists when estimating the cost-effectiveness of different interventions? How should we handle major gaps in the empirical literature? Is it reasonable to interpret the results of cognitive bias studies as evidence of valenced states? How much weight should we place on our priors when estimating the moral importance of members of other species? And so on. I’m all for doing that work.
Thanks for your question, Nathan. We were making programmatic remarks, and there's obviously a lot more that would need to be said to defend those claims in any detail. Moreover, we don't mean to endorse every claim in any of the articles we linked. However, we do think that the worries we mentioned are reasonable ones to have; lots of EAs can probably think of their own examples of people engaging in motivated reasoning or being cautious, for social reasons, about which evidence they share. So, we hope that's enough to motivate the general thought that we should take uncertainty seriously in our modeling and deliberations.
Good question! Re: the Moral Weight Project, perhaps the biggest area of impact has been on animal welfare economics, where having a method for making interspecies comparisons is crucial for benefit-cost analysis. Many individuals and organizations have also told us that our work prompted them to update on the importance of animals generally and of invertebrates specifically. We’ve seen something similar with the CCM tool, with responses ranging from positive feedback and enthusiasm to concrete updates in people’s decisions. There’s more we can say privately than publicly, however, so please feel free to get in touch if you’d like to chat!
Great (and difficult!) question, Jordan. I (Bob) am responding to this one for myself and not for the team; others can chime in as they see fit. The biggest issue I see in EA cause prioritization is overconfidence. It’s easy to think that because there are some prominent arguments for expected value maximization, we don’t need to run the numbers to see what happens if we have a modest level of risk aversion. It’s easy to think that because the future could be long and positive, the EV calculation is going to favor x-risk work. Etc. I’m not anti-EV; I’m not anti-x-risk. However, I think these are clear areas where people have been too quick to assume that they don’t need to run the numbers because it's obvious how they'll come out.
I’m a “chickens and children” EA, having come to the movement through Singer’s arguments about animals and global poverty. I still find EA most compelling, both philosophically and emotionally, when it focuses on areas where it’s clear that we can make a difference. However, the more I grapple with the many uncertainties associated with resource allocation, the more sympathetic I become to diversification, including devoting significant resources to work that doesn’t appeal to me personally at all. So you probably won’t catch me pivoting to AI governance anytime soon, but I’m glad others are doing it.
Hi Josh. There are two issues here: (a) the indirect effects of helping humans (including the potential that humans have to make a positive impact) and (b) the positive portion of humans' and animals' welfare ranges. We definitely address (b), in that we assume that every individual with a welfare range has a positive dimension of that welfare range. And we don't ignore that in cost-effectiveness analysis, as the main benefit of saving human lives is allowing/creating positive welfare. (So, averting DALYs is equivalent to allowing/creating positive welfare, at least in terms of the consequences.)
We don't say anything about (a), but that was beyond the scope of our project. I'm still unsure how to think about the net indirect effects of helping humans, though my tendency is to think that they're positive, despite worries about the meat-eater problem, impacts on wild animals, etc. (Obviously, the direct effects are positive!) Others, however, probably have much more thoughtful takes to give you on that particular issue.