
Laura Duffy

Researcher @ Rethink Priorities
765 karma · Joined · Working (0-5 years) · Washington, DC, USA

Bio

I am a Researcher at Rethink Priorities, working mostly on cross-cause prioritization and worldview investigations. I am passionate about farmed animal welfare, global development, and economic growth/progress studies. Previously, I worked in U.S. budget and tax policy as a policy analyst for the Progressive Policy Institute. I earned a B.S. in Statistics from the University of Chicago, where I volunteered as a co-facilitator for UChicago EA's Introductory Fellowship. 

Comments (17)

Seconding this question, and I wanted to ask more broadly:

A big component/assumption of the example given is that we can "re-run" simulations of the world in which different combinations of actors were present to contribute, but this seems hard in practice. Do you know of any examples where Shapley values have been used in the "real world" and how they've tackled this question of how to evaluate counterfactual worlds?
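To make concrete what I mean by "re-running" the world: computing a Shapley value requires a counterfactual value v(S) for every subset S of contributors, not just for the world that actually happened. A toy sketch, with numbers I've made up purely to illustrate the bookkeeping:

```python
from itertools import permutations

# Toy characteristic function: v(S) is the value produced if exactly the
# actors in S show up. Every entry is a counterfactual "re-run" of the
# world; the numbers are invented for illustration only.
v = {
    frozenset(): 0,
    frozenset({"funder"}): 0,
    frozenset({"charity"}): 10,
    frozenset({"funder", "charity"}): 100,
}

def shapley(actors, v):
    """Exact Shapley values: average each actor's marginal contribution
    over every order in which the actors could have 'arrived'."""
    totals = {a: 0.0 for a in actors}
    orderings = list(permutations(actors))
    for order in orderings:
        coalition = frozenset()
        for actor in order:
            totals[actor] += v[coalition | {actor}] - v[coalition]
            coalition = coalition | {actor}
    return {a: t / len(orderings) for a, t in totals.items()}

print(shapley(["funder", "charity"], v))  # {'funder': 45.0, 'charity': 55.0}
```

The part that seems hard in practice is filling in that v dictionary: with n actors there are 2^n counterfactual worlds to estimate, and all but one of them never happened.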

(Also, great post! I've been meaning to learn about Shapley values for a while, and this intuitive example has proven very helpful!)

Hi Michael, here are some additional answers to your questions: 

1. I roughly calibrated the reasonable risk aversion levels based on my own intuition and a Twitter poll I ran a few months ago: https://x.com/Laura_k_Duffy/status/1696180330997141710?s=20. A significant number of people (about a third of those who are risk averse) would only take the bet to save 1000 lives over saving 10 for certain if the chance of saving 1000 was over 5%. I judged this a reasonable cut-off for the moderate risk aversion level.
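To spell out the arithmetic behind that cut-off (using a functional form I'm picking purely for illustration here, not necessarily the one used in the report): a risk-neutral agent would be indifferent at a 1% chance of saving 1000, so demanding a 5% chance implies a fair amount of curvature.

```python
import math

lives_certain = 10    # lives saved for sure
lives_gamble = 1000   # lives saved if the gamble pays off

# Risk-neutral break-even probability: p * 1000 = 10  =>  p = 1%
p_neutral = lives_certain / lives_gamble
print(f"risk-neutral break-even: {p_neutral:.1%}")

# Suppose someone only accepts the gamble at p >= 5%. Under an illustrative
# power utility u(x) = x**a (my assumption for this sketch), indifference at
# p = 0.05 means 0.05 * 1000**a = 10**a, so:
p_required = 0.05
a = math.log(p_required) / math.log(lives_certain / lives_gamble)
print(f"implied curvature a ≈ {a:.2f}")  # ≈ 0.65

# Sanity check: expected utility of the gamble vs. the sure thing at that a.
print(p_required * lives_gamble**a, lives_certain**a)  # roughly equal
```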

4. The reason the hen welfare interventions come out much better than the shrimp stunning intervention is that shrimp harvest and slaughter don't last very long. So, the chronic welfare threats that ammonia concentrations and battery cages impose on shrimp and hens, respectively, outweigh the shorter-duration welfare threats of harvest and slaughter.

The number of animals for black soldier flies is low, I agree. We're using estimates of current populations, which are probably much lower than future population sizes. We're only somewhat confident in the shrimp and hen estimates, and pretty uncertain about the others. So I think one should feel very much at liberty to plug in different population sizes for animals like black soldier flies.

More broadly, I think this result is likely a limitation of models based on total population size, as opposed to models based on the number of animals affected per campaign. Ideally, as we gather more information about these types of interventions, we could assess cost-effectiveness using better estimates of the number of animals affected per campaign.
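As a toy illustration of the difference between the two framings (with placeholder numbers that aren't from the model):

```python
# Placeholder numbers only; none of these come from the actual model.
budget = 1_000_000                     # hypothetical campaign cost, in dollars
welfare_gain_per_animal_year = 0.2     # assumed per-animal improvement, arbitrary units
fraction_of_population_reached = 0.01  # assumed reach of a single campaign

def ce_population_based(total_population):
    """Population-based framing: animals affected scale with the population estimate."""
    animals = total_population * fraction_of_population_reached
    return animals * welfare_gain_per_animal_year / budget

def ce_per_campaign(animals_affected):
    """Per-campaign framing: estimate the animals a campaign affects directly."""
    return animals_affected * welfare_gain_per_animal_year / budget

# A low current-population estimate vs. a much larger future population changes
# the population-based answer proportionally...
print(ce_population_based(1e9), ce_population_based(1e11))
# ...while a direct per-campaign estimate is insensitive to that choice.
print(ce_per_campaign(5e7))
```

The population-based number moves one-for-one with whatever population estimate you plug in, which is why the black soldier fly result is so sensitive to it.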

Thanks for the thorough questions!
 

Hi Sylvester, thanks for sharing that post, I hadn't seen it! 

Hey, thanks for this detailed reply! 
When I said "practical", I meant more "simple things people can do without needing to download and work directly with the code for the welfare ranges." In that sense, I don't entirely agree that your solution is the most workable of them (assuming independence probably would be). But I agree: pairwise sampling is the best method if you have the access and ability to manipulate the code! (I also think the perfect correlation you graphed makes the second suggestion probably worse than just assuming independence, so thanks!)
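For anyone following along who does want to work with the samples directly, here's a minimal sketch of the difference, using placeholder lognormal draws with an artificial shared-uncertainty term rather than our actual Monte Carlo samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Placeholder marginals standing in for two species' welfare range draws.
# A shared "model uncertainty" term makes the two ranges correlated.
shared = rng.normal(0, 0.5, n)
wr_a = np.exp(-2.0 + shared + rng.normal(0, 0.3, n))
wr_b = np.exp(-1.0 + shared + rng.normal(0, 0.3, n))

# Pairwise sampling: keep draw i of species A with draw i of species B,
# preserving whatever correlation the underlying models induce.
ratio_paired = wr_a / wr_b

# Independence assumption: shuffle one margin, destroying the correlation.
ratio_indep = wr_a / rng.permutation(wr_b)

for name, r in [("paired", ratio_paired), ("independent", ratio_indep)]:
    lo, hi = np.percentile(r, [5, 95])
    print(f"{name}: 90% interval for A/B = ({lo:.2f}, {hi:.2f})")
```

Shuffling one margin (the independence assumption) widens the interval for the ratio relative to keeping the draws paired, which is the effect at issue here.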

Hi Kyle, 

This is a very interesting post! One quick and very small technical detail: Rethink Priorities' welfare ranges aren't capped at 1 for non-human animals. (It just happens that, once we adjust for probability of sentience, all of the 50th percentile estimates fall below 1.) Instead, they reflect the difference between the best and worst states a non-human animal can experience relative to the difference between the best and worst states a human can experience (which is normalized to 1). In theory, this relative difference could exceed 1 if the range of intensities a non-human animal can experience is wider than a human's.

In fact, one of our welfare range models (the undiluted experiences model), which feeds into the aggregate estimates, tends to produce sentience-adjusted welfare range estimates greater than 1, under the theory that less cognitively complex organisms may not be able to dampen negative experiences by contextualizing them. As a result, a few animals (octopuses, pigs, and shrimp) have 95th percentile welfare range estimates above 1. Here are some more details about the models and distributions: https://docs.google.com/document/d/1xUvMKRkEOJQcc6V7VJqcLLGAJ2SsdZno0jTIUb61D8k/edit?usp=sharing
And here's the spreadsheet of results from all models: https://docs.google.com/spreadsheets/d/1SpbrcfmBoC50PTxlizF5HzBIq4p-17m3JduYXZCH2Og/edit?usp=sharing
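To give a feel for how one component of a mixture can push the upper tail above 1 even when the overall median stays below it, here's a toy sketch with made-up stand-in distributions (not our actual estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy stand-ins for per-model welfare range draws (not RP's real numbers):
# most models put the range well below 1, but one model (analogous to the
# undiluted experiences model) sometimes puts it above 1.
model_draws = {
    "model_a": rng.beta(2, 8, n),
    "model_b": rng.beta(2, 5, n),
    "undiluted_like": rng.lognormal(0.0, 0.5, n),  # median 1, can exceed 1
}

# Equal-weight mixture across models: pick one model per draw.
choices = rng.integers(0, len(model_draws), n)
stacked = np.stack(list(model_draws.values()))
mixture = stacked[choices, np.arange(n)]

p50, p95 = np.percentile(mixture, [50, 95])
print(f"mixture median = {p50:.2f}, 95th percentile = {p95:.2f}")
# The median stays below 1 even though the 95th percentile exceeds it.
```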

Again, this is a really thought-provoking and sobering post, thanks for writing it :)

Oh I see! Thanks for the clarification!

This is a really interesting project and way of approaching the topic!

One thing to note: welfare ranges don't account for animals' lifespans, so we'd also need to factor in how long a typical farmed animal lives and then weight by welfare range to get a moral-weight-adjusted sense of per-calorie animal impacts.
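As a very rough sketch of what that adjustment could look like, with placeholder numbers rather than anyone's actual estimates:

```python
# Placeholder numbers only; swap in real lifespan, welfare range, and calorie estimates.
animals = {
    #              (days alive, welfare range, calories produced per animal)
    "broiler":     (45,         0.3,           4000),
    "farmed_fish": (400,        0.1,           1500),
}

for name, (days_alive, welfare_range, calories) in animals.items():
    # Welfare-range-weighted days of farmed life embodied in each calorie consumed.
    adjusted_days_per_calorie = days_alive * welfare_range / calories
    print(f"{name}: {adjusted_days_per_calorie:.5f} adjusted days per calorie")
```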

But again, approaching this from a per calorie perspective is really interesting!

Hi Henry! While the 90% confidence intervals for the RP welfare ranges are indeed wide, that's because they come from a mixture of several theories/models of welfare. The uncertainty within any given theory/model of welfare is much lower, and you might have more or less credence in any individual model.

Additionally, if we exclude the neuron count model, the welfare ranges from the mixture of all the other models have narrower distributions.

Here’s a document that explains the different theories/models used: https://docs.google.com/document/d/1xUvMKRkEOJQcc6V7VJqcLLGAJ2SsdZno0jTIUb61D8k/edit

And here’s a spreadsheet with all the confidence intervals from each theory/model individually (after adjusting for probability of sentience): https://docs.google.com/spreadsheets/d/1SpbrcfmBoC50PTxlizF5HzBIq4p-17m3JduYXZCH2Og/edit

Fascinating, I hadn't thought about that with respect to Congress. One thing I wonder about with ag-gag laws is whether they run afoul of the First Amendment. Do you know if there's a strong legal case to be made that they're unconstitutional? 

My gut instinct here is that it's probably somewhat harder to pass congressional legislation that both is constitutional and effectively limits corporate campaigns (because it's private entities choosing what kinds of products to sell). Am I wrong here? (I'm really interested in this topic, so I'd love to be corrected.)
