My worry is that you claimed, in your exchange with Jacy, that "(iv) That setting out to improve animal welfare (in the short or medium term) seems extremely unlikely to be the best sub-goal to aim for to meet the goal of making the long-term future flourish."
I find this claim plausible, but, to the best of my understanding, nowhere in "Human and animal interventions: the long-term view" do you actually defend it.
Hence my worry that you are asserting more than you have demonstrated, and hence the confusion.
I'm curious if you've considered the conjunction fallacy.
From what I see, there are seven events that could go wrong, for different reasons:
* We will never develop the resolve to colonize space
* We cannot fit everything we need to build a civilization into a spaceship
* We cannot get the spaceship going fast enough
* We cannot have enough civilization-building materials remain intact during the voyage
* We cannot slow the spaceship down when we're close to the target
* We cannot build the civilization even after arriving at the target, for some reason
* Some unknown unknown will go wrong
As you know, even if each of the seven events is individually unlikely (say 10%), the chance that at least one of them goes wrong is still about 52%, assuming independence: 1 - 0.9^7 ≈ 0.52.
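To make the arithmetic explicit, here's a minimal sketch in Python (the 10%-per-event figure and the independence of the seven failure modes are illustrative assumptions, not claims about the actual probabilities):

```python
# Probability that at least one of several independent failure modes occurs.
# Assumes 7 failure modes, each independent, each with a 10% chance of happening.
p_each = 0.10   # assumed per-event failure probability
n_events = 7    # the seven failure modes listed above

p_none = (1 - p_each) ** n_events  # probability all seven go right
p_at_least_one = 1 - p_none        # probability something goes wrong

print(f"P(at least one failure) = {p_at_least_one:.2%}")  # ~52.17%
```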
Thoughts?
-
Also, another idea I wanted to ask whether you've considered is space cities: rather than making the long journey to a far-flung habitable planet, we just continue to exist in constructed facilities in space, using non-habitable planets for construction materials. Though I haven't thought about it that much...
"In that post, I contrasted human welfare improvements, which have many significant indirect and long-run effects, with animal welfare improvements, which appear not to. That is not to say that interventions which improve animal welfare do not have these large long-run effects, but that the long-run effects of such interventions are enacted via shifts in the views of humans rather than directly via the welfare improvement."
I think I can offer even more insight into what you're saying and why people are confused.
What I believe you're saying (and correct me if I'm wrong) is: "work focused primarily on improving the lives of animals today (e.g., THL's talk of 'animals spared') is unlikely to be as high-impact as work focused primarily on improving the lives of humans today (though that also might not be the best cause overall), because humans today have various flow-through effects (e.g., economic development) and animals do not."
I think this is an important conclusion, one that many nonhuman-animal-focused EAs appear to accept but have not internalized.
However, what you actually say are things like "I contrasted human welfare improvements, which have many significant indirect and long-run effects, with nonhuman animal welfare improvements, which appear not to". The term "animal welfare improvements" is ambiguous, though, and does not necessarily refer solely to targeting nonhuman animals in the present.
For example, it's possible that by producing enough vegetarians (e.g., through leafleting) we get a large impact not from sparing the nonhuman animals alive today, but from producing enough of an anti-speciesist shift to prevent large quantities of nonhuman animal suffering in the far future (cf. Brian Tomasik's thesis). I don't necessarily agree (or disagree) with this thesis, but you have not yet refuted it.
So when a nonhuman-animal-focused EA comes along and reads this, they conflate their focus on long-run animal goals with your critique of short-run animal goals, think you're making claims that you're not, and then argue against you over things you may not have said.
Given this, perhaps more clarity could be achieved by flagging the short-run nature of what you're discussing, either by explicitly using the term "short-run" or by providing concrete examples?
I'm really glad the Global Priorities Project exists and I look forward to seeing more research. I think this piece was also particularly well-written in a very accessible yet academic voice.
That being said, I'm not sure what the intention of this piece is, but it feels neither novel nor thorough. I'm excited that my calculator is linked in this piece, but to clarify: I no longer hold the view that those cost-effectiveness estimates should be taken as the be-all and end-all of impact, and I don't think any EAs still do.
Furthermore, many people now argue that the impact of working on animals is to create a long-term gestalt shift in views that helps not humans, but rather future animals. Ending factory farming, for example, would have a large compounding effect on all the future animals that would otherwise be factory farmed, and attitude change is the only way to make that happen.
Likewise, some people think that spreading anti-speciesism might be a critical gateway toward helping people expand their moral concern to wild animals or computer programs (e.g., suffering subroutines) in the far future too, though I'm unsure about this.
It's not just that this piece doesn't address this possibility; it seems to ignore it entirely by focusing (somewhat dogmatically) on humans.
Both of those claims make sense, and I agree you have demonstrated them, but I could see them being easily misinterpreted, for the reasons I gave at the beginning.