I work as an engineer, donate 10% of my income, and occasionally enjoy doing independent research. I'm most interested in farmed animal welfare and the nitty-gritty details of global health and development work. In 2022, I was a co-winner of the GiveWell Change Our Mind Contest.
I'd suggest this thread (and the linked LW post) as a good overview of the arguments. You could also take a look at the relevant section of the Intro to EA handbook or this post.
In general, I think you'll probably get a better response on the forum if you spend some time engaging with the intro materials and come back with specific questions or arguments.
Due to FTX's collapse and EA orgs' successes at growing the number of people interested in EA careers, it appears that EA causes are trending back toward being funding constrained rather than talent constrained. However, the metrics being tracked on the dashboard seem more relevant to addressing talent and engagement bottlenecks than they do to addressing funding bottlenecks. Do you have thoughts about how CEA can help attract more funding to EA causes (either via existing core projects or via new initiatives), and about what metrics would make sense to track for those efforts?
Thanks for your comment! I agree that there are plenty of useful options besides using the raw welfare ranges or capping shrimp's welfare range at 0.1x humans'. Here are the results with both shrimp's and chickens' welfare ranges capped at 1x humans':
Summary statistics: weighted hours of disabling-equivalent pain averted per dollar donated

SWP: 5th/25th/50th/75th/95th percentiles = 2.234e-04, 4.837, 63.47, 459.2, 4776; mean = 1155.78
THL: 5th/25th/50th/75th/95th percentiles = 23.01, 1023, 2935, 6537, 15950; mean = 4811.45
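For reference, the capping step can be sketched as follows. This is a hypothetical illustration, not the actual analysis code: the lognormal parameters are made up, not RP's real welfare-range samples.

```python
import numpy as np

# Hypothetical welfare-range samples, expressed relative to humans = 1.
# The distribution parameters below are illustrative placeholders.
rng = np.random.default_rng(0)
chicken_samples = rng.lognormal(mean=-1.5, sigma=1.0, size=10_000)
shrimp_samples = rng.lognormal(mean=-3.0, sigma=1.5, size=10_000)

# Cap both species' welfare ranges at 1x the human welfare range.
chicken_capped = np.minimum(chicken_samples, 1.0)
shrimp_capped = np.minimum(shrimp_samples, 1.0)
```

The cap only binds on samples above 1.0, so it pulls in the right tail of each distribution without touching the rest.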
Breaking out the cost-effectiveness results conditional on each welfare range model (or conditional on including/excluding the undiluted experience model) would be fantastic, but is probably outside the scope of what I have time to do.
I don't really understand your middle paragraph. Can you elaborate on what you mean by "agent-relative reasons"? I do understand the issue whereby which welfare range is taken to be constant can drive the outcome of an EV calculation. But I *think* that only changes the results if your unit for welfare range is based on one of the animals in the comparison: I'd get identical conclusions if I took fruit flies' welfare range to be the constant instead of humans', and the results would only change if I used chickens' or shrimp's as the constant. And I'm not trying to take EVs across multiple moral systems; I'm holding the moral system constant and taking EVs across different estimates of chickens' and shrimp's capacities for realizing welfare, which seems to avoid some further pitfalls.
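To make the unit-choice issue concrete, here is a toy sketch with two made-up welfare-range models (all numbers hypothetical). The chicken-to-shrimp EV ratio changes depending on which species' welfare range is held constant at 1:

```python
# Two hypothetical welfare-range models, expressed relative to humans = 1.
# Values are illustrative only, not RP's estimates.
models = [
    {"human": 1.0, "chicken": 0.3, "shrimp": 0.03},
    {"human": 1.0, "chicken": 0.5, "shrimp": 0.20},
]

def ev_ratio(models, fixed_species):
    """Chicken/shrimp ratio of expected welfare ranges, after renormalizing
    each model so fixed_species has welfare range 1."""
    ev = {
        sp: sum(m[sp] / m[fixed_species] for m in models) / len(models)
        for sp in ("chicken", "shrimp")
    }
    return ev["chicken"] / ev["shrimp"]

print(ev_ratio(models, "human"))    # ~3.48
print(ev_ratio(models, "chicken"))  # 4.0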
Something I think is worth mentioning, but that I didn’t get to in the main body of the post, is priors-based critiques of the RP moral weights. I think a lot of people find some of RP’s results (e.g. that octopuses have a >5% chance of being able to realize more than 1.4x the instantaneous welfare of a human) intuitively unreasonable, and might understandably question the overall methodology for producing these kinds of outcomes. RP has responded to these critiques, but I do share some of these concerns; I would be skeptical of an argument for prioritizing animal welfare over interventions that help humans that depended heavily on specific values in the MWP results. However, I am less concerned about these critiques when applied to this specific analysis, since this case is just comparing two animal welfare interventions. Critiques of the RP moral weights applied to this case would need to argue that the relative weights of chickens and shrimp are wrong, not the absolute weights. I think concern about the undiluted experience model is the most salient of these critiques, which is why it was the primary critique discussed in the post.
Of course, one still needs to make the case for why it’s worth caring about animal welfare at all. But I think that case can be made without appealing to the specifics of the MWP results, and is outside the scope of this post regardless.
Thanks for your comment! I broadly agree with the point you’re making, and have amended the summary to capture it. Let me know if you think the updated wording addresses your concern.
I did ponder trying to quantify the potential impacts of catalyzing industry-wide change as a result of this pilot, and I just want to lay out a little bit of why I think that’s so hard. It’s very tempting to compute a massive EV for this by doing a calculation like (small chance that this brings industry-wide adoption forward n years) * (400 billion shrimp farmed/year) = giant number. But I think that’s probably a bad way to look at it. I think the better way to think of it is that industry-wide adoption would take a successful pilot plus other forms of activism like corporate campaigns, ballot initiatives, or legislative lobbying. So the pilot alone isn’t necessarily bringing forward broader adoption, but rather creating new potentially cost-effective opportunities for donations. The exact EV would therefore depend on questions like the relative cost-effectiveness of those new donation opportunities compared to the existing animal welfare portfolio, and on how funding constrained the animal welfare space is expected to be over the next several years.
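For concreteness, the naive calculation described above looks like this. Only the 400 billion shrimp/year figure comes from the discussion; the other parameter values are purely hypothetical placeholders:

```python
# Naive EV sketch the text argues against. Only the 400 billion
# shrimp/year figure is from the text; the other parameters are
# hypothetical placeholders.
shrimp_farmed_per_year = 400e9
p_brings_adoption_forward = 0.001  # hypothetical "small chance"
years_brought_forward = 5          # hypothetical

naive_ev_shrimp_years = (
    p_brings_adoption_forward * years_brought_forward * shrimp_farmed_per_year
)
print(f"{naive_ev_shrimp_years:.1e} shrimp-years affected")
```

The product is dominated by the first factor, a probability that is essentially guessed, which is part of why I don't trust the "giant number" this produces.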
None of this is to say that I disagree with your point, just that I’m quite uncertain about the indirect cost-effectiveness and would struggle to find a way to model it easily.