Bob Fischer

Senior Researcher @ Rethink Priorities
3799 karma · Working (15+ years) · Rochester, NY, USA · bobfischer.net

Bio

I'm a Senior Researcher for Rethink Priorities, a Professor of Philosophy at Texas State University, a Director of the Animal Welfare Economics Working Group, the Treasurer for the Insect Welfare Research Society, and the President of the Arthropoda Foundation. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.

Sequences (3)

Rethink Priorities' CRAFT Sequence
The CURVE Sequence
The Moral Weight Project Sequence

Comments (105)

Hi Josh. There are two issues here: (a) the indirect effects of helping humans (including the potential that humans have to make a positive impact) and (b) the positive portion of humans' and animals' welfare ranges. We definitely address (b), in that we assume that every individual with a welfare range has a positive dimension of that welfare range. And we don't ignore that in cost-effectiveness analysis, as the main benefit of saving human lives is allowing/creating positive welfare. (So, averting DALYs is equivalent to allowing/creating positive welfare, at least in terms of the consequences.)

We don't say anything about (a), but that was beyond the scope of our project. I'm still unsure how to think about the net indirect effects of helping humans, though my tendency is to think that they're positive, despite worries about the meat-eater problem, impacts on wild animals, etc. (Obviously, the direct effects are positive!) Others, however, probably have much more thoughtful takes to give you on that particular issue.

Thanks, Nick, both for your very kind words about our work and for raising these points. I’ll offer just a few thoughts.

You raise some meta-issues and some first-order issues. However, I think the crux here is about how to understand what we did. Here’s something I wrote for a post that will come out next week:

Why did a project about “moral weight” focus on differences in capacity for welfare? Very roughly, a moral weight is the adjustment that ought to be applied to the estimated impact of an animal-focused intervention to make it comparable to the estimated impact of some human-focused intervention. Given certain (controversial) assumptions, differences in capacity for welfare just are moral weights. But in themselves, they’re something more modest: they’re estimates of how well and badly an animal’s life can go relative to a human’s. And if we assume hedonism—as we did—then they’re something more modest still: they’re estimates of how intense an animal’s valenced states can be relative to a human’s. The headline result of the Moral Weight Project was something like: “While humans and animals differ in lots of interesting ways, many of the animals we farm can probably have pains that aren’t that much less intense than the ones humans can have.”
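To make the "adjustment factor" idea concrete, here's a minimal sketch, separate from the quoted passage above. It's my illustration only, with a hypothetical moral weight rather than one of the project's actual estimates:

```python
# A minimal sketch (not RP's actual method or numbers) of a moral weight
# functioning as an adjustment factor in a cross-species comparison.

def human_equivalent_impact(animal_impact: float, moral_weight: float) -> float:
    """Convert an animal-intervention impact into human-equivalent units."""
    return animal_impact * moral_weight

# Hypothetical example: an intervention producing 1,000 units of chicken
# welfare, under an assumed moral weight of 0.3, would be treated as
# comparable to 300 units of human welfare.
print(human_equivalent_impact(1_000, 0.3))  # 300.0
```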

I don’t think you’ve said anything that should cause someone to question that headline result. To do that, we’d want some reason to think that a different research team would conclude that chickens feel pain much less intensely than humans, some reason to think that neuron counts are good proxies for the possible intensities of pain states across species, or some principled way of discounting behavioral proxies (which we should want, as we otherwise risk allowing our biases to run wild). In other words, we’d want more on the first-order issues.

To be fair, you’re quite clear about this. You write:

I present four critical junctures where I think the Moral Weights project favored animals. I don’t argue that any of their decisions are necessarily wrong, only that each decision shifts the project outcome in an animal-friendly direction and sometimes by at least an order of magnitude.

But the ultimate question is whether our decisions were wrong, not whether they can be construed as animal-friendly. That’s why the first-order issues are so important. So, for instance, if we should have given more weight to neuron counts, so be it: let’s figure out why that would be the case and what the weight should be. (That being said, we could up the emphasis on neuron counts considerably without much impact on the results. Animal-to-human neuron count ratios aren’t vanishingly low. So, even if they determined a large portion of the overall estimates, we wouldn’t get differences of the kind you’ve suggested. In fact, you could assign 20% of your credence to the hypothesis that animals have welfare ranges of zero: that still wouldn’t cut our estimates by 10x.)
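To make the arithmetic behind that last point concrete, here's a back-of-the-envelope check. The baseline value is a made-up placeholder, not one of the project's estimates:

```python
# Putting 20% credence on "animals have a welfare range of zero" scales the
# expected welfare range by 0.8, which is nowhere near a 10x cut.

baseline_welfare_range = 0.3   # hypothetical point estimate for some species
credence_zero = 0.2            # credence that the true welfare range is zero

expected_range = (1 - credence_zero) * baseline_welfare_range + credence_zero * 0.0
print(expected_range)                            # 0.24
print(baseline_welfare_range / expected_range)   # ~1.25x reduction, not 10x
```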

All that said, you might contest that the headline result is what I’ve suggested. In fact, people on the Forum are using our numbers as moral weights, as they accept (implicitly or explicitly) the normative assumptions that make moral weights equivalent to estimates of differences in the possible intensities of valenced states. If you reject those assumptions, then you definitely shouldn’t use our numbers as moral weights. That being said, if you think that hedonic goods and bads are one component of welfare, then you should use our numbers as a baseline and adjust them. So, on one level, I think you’re operating in the right way: I appreciate the attempt to generate new estimates based on ours. However, that too requires a bunch of first-order work, which we took up when we tried to figure out the impact of assuming hedonism. You might disagree with the argument there. But if so, let’s figure out where the argument goes wrong.

One final point. I agree—and have always said—that our numbers are provisional estimates that I fully expect to revise over time. We should not take them as the last word. However, the way to make progress is to engage with hard philosophical, methodological, and empirical problems. What’s a moral weight in the first place? Should we be strict welfarists when estimating the cost-effectiveness of different interventions? How should we handle major gaps in the empirical literature? Is it reasonable to interpret the results of cognitive biases as evidence of valenced states? How much weight should we place on our priors when estimating the moral importance of members of other species? And so on. I’m all for doing that work.

I'm encouraged by your principles-first focus, Zach, and I'm glad you're at the helm of CEA. Thanks for all you're doing. 

Thanks for your question, Nathan. We were making programmatic remarks and there's obviously a lot to be said to defend those claims in any detail. Moreover, we don't mean to endorse every claim in any of the articles we linked. However, we do think that the worries we mentioned are reasonable ones to have; lots of EAs can probably think of their own examples of people engaging in motivated reasoning or being wary about what evidence they share for social reasons. So, we hope that's enough to motivate the general thought that we should take uncertainty seriously in our modeling and deliberations.

Thanks, Deborah. Derek Shiller offered an answer to your question here.

Good question! Re: the Moral Weight Project, perhaps the biggest area of impact has been on animal welfare economics, where having a method to make interspecies comparisons is crucial for benefit-cost analysis. Many individuals and organizations have also reported to us that our work was an update on the importance of animals generally and of invertebrates specifically. We’ve seen something similar with the CCM tool, with responses ranging from positive feedback and enthusiasm to more concrete updates in organizations’ decisions. There’s more we can say privately than publicly, however, so please feel free to get in touch if you’d like to chat!

  • What are selfish lifestyle reasons to work on the WIT team?
    • It’s fun to talk to smart people! Remote work is great. It’s a privilege to be able to think about big problems that are both philosophically complicated and practically important. 
  • Is it fair to say the work WIT does is unusual outside of academia? What are closely related organizations that tackle similar problems?
    • Yes, what we do is very unusual outside of academia—and inside it too. Re: other groups that do global priorities research, the most prominent ones are GPI, PWI, and the cause prio teams at OP.
  • How does your team define "good enough" for a sequence? What adjustments do you make when you fall behind schedule? Cutting individual posts? Shortening posts? Spending more time?
    • That’s a hard one and we’re still trying to figure it out. There are a lot of variables here, many of which are linked to whether we have the funding to linger on a particular project. In general, however, our job isn’t to produce academic research: it’s to inform decisions. So, if we think we’ve done enough to help people who need to make decisions, then that’s a good sign that we should wrap up the project soon.
  • How much does the direction of a sequence change as you're writing it? It seems like you have a vision in mind when starting out, but you also mention being surprised by some results.
    • The general structure tends not to change much—we plan out posts together and have a general sense of the research we want to do—but the narrative certainly evolves as we learn more about the topic we’re investigating. The conclusions definitely aren’t set from the beginning!
  • Can you tell us more about the structure of research meetings? How frequently do individual authors chat with each other and for what reason? In particular, the CURVE sequence feels very intentionally like a celebration of different "EA methodologies". Most of the posts feel individual before converging on a big cost-effectiveness analysis.
    • We’re in touch all the time, brainstorming new ideas, reviewing drafts, and figuring out solutions to problems. The whole team meets once or twice a week and then we individually hop on 1-1 calls more frequently to discuss specific aspects of our projects. Most of the research still has a lead who’s driving it forward, but everyone’s fingerprints tend to be on everything. 
  • Much of your work feels numerical simulation over discrete choices. Have there been attempts to define "closed-form" analytical equations for your work? What are reasons to allocate resources to this versus not?
    • This ties to your earlier question about when a sequence is “good enough.” We think analytical equations can be valuable: they’re often tidier, they speed up computational work, and they can provide clearer insights into sensitivity analysis. For example, a closed-form treatment is a natural next step for our human extinction post, which we flagged in the conclusion, and we’ve done some work toward this already, though it isn’t yet polished enough to share. As for when a piece of research is good enough to wrap up: we don’t know for sure, but we’ve found that running computational simulations we’re sufficiently confident in gives us approximations that are perfectly suitable for learning about the models we’re interested in (see the toy sketch after this list). We hear you: closed-form solutions are mathematically satisfying. But once we’ve learned the main headlines, it’s hard to justify spending the extra time working through closed-form solutions for everything, especially for some of the more complex models with several moving parts.
  • What are the main constraints the WIT team faces?
    • The standard ones: we’re funding- and capacity-constrained. We could do a lot more with additional resources!
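On the simulation-versus-closed-form point above, here's a toy illustration. It's my own sketch with made-up distributions, not one of WIT's models: a Monte Carlo estimate of a simple model's expected value lands close to the answer we could also derive analytically.

```python
# Toy comparison of Monte Carlo simulation against a closed-form answer.
import random

random.seed(0)
N = 100_000

# Hypothetical model: an intervention's value is cost-effectiveness * scale,
# with both uncertain and independent.
samples = [random.uniform(0, 2) * random.uniform(5, 15) for _ in range(N)]
mc_mean = sum(samples) / N

closed_form_mean = 1.0 * 10.0  # E[U(0,2)] * E[U(5,15)], by independence
print(mc_mean, closed_form_mean)  # the simulation lands close to the exact value
```

Once the simulated estimate is stable enough for the decision at hand, the extra effort of deriving the exact expression often isn't worth it; that's the trade-off described in the answer above.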

Great (and difficult!) question, Jordan. I (Bob) am responding to this one for myself and not for the team; others can chime in as they see fit. The biggest issue I see in EA cause prioritization is overconfidence. It’s easy to think that because there are some prominent arguments for expected value maximization, we don’t need to run the numbers to see what happens if we have a modest level of risk aversion. It’s easy to think that because the future could be long and positive, the EV calculation is going to favor x-risk work. Etc. I’m not anti-EV; I’m not anti-x-risk. However, I think these are clear areas where people have been too quick to assume that they don’t need to run the numbers because it's obvious how they'll come out.

I’m a “chickens and children” EA, having come to the movement through Singer’s arguments about animals and global poverty. I still find EA most compelling, both philosophically and emotionally, when it focuses on areas where it’s clear that we can make a difference. However, the more I grapple with the many uncertainties associated with resource allocation, the more sympathetic I become to diversification, including significant resources for work that doesn’t appeal to me at all personally. So you probably won’t catch me pivoting to AI governance anytime soon, but I’m glad others are doing it.
