I do research at Longview Philanthropy. Previously I was a Research Scholar at FHI and assistant to Toby Ord. Philosophy at Cambridge before that.
I also do a podcast about EA called Hear This Idea.
I just want to register the worry that the way you've operationalised “EA priority” might not line up with a natural reading of the question.
The footnote on “EA priority” says:
By “EA priority” I mean that 5% of (unrestricted, i.e. open to EA-style cause prioritisation) talent and 5% of (unrestricted, i.e. open to EA-style cause prioritisation) funding should be allocated to this cause.
This is a bit ambiguous (in particular, over what timescale), but if it means something like “over the next year”, then it would mean finding ways to spend ≈$10 million on AI welfare by the end of 2025, which you might think is just practically very hard to do, even if you think that more work on current margins is highly valuable. Similar things could have been said for e.g. pandemic prevention or AI governance in the early days!
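To make the arithmetic explicit: a 5% share coming to ≈$10 million implies an unrestricted funding pool on the order of

$$
\frac{\$10\text{M}}{0.05} \approx \$200\text{M per year}.
$$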
Nice post! Copying the comment I left on the draft (edited for clarity) —
I agree with both conclusions, but I don't think your argument is the strongest reason to buy those conclusions.
My picture of how large-scale space expansion goes involves probes (not humans) being sent out after AGI. Then a reasonable default might be that the plans and values embedded in humanity's first large-scale space settlement initiatives are set by the plans and values of some very large and technologically advanced political faction at the time (capable of launching such a significant initiative by force or unanimity), rather than a smaller number of humans who were early to settle some part of the Solar System.
I then picture most human-originating life not resembling biological humans (more like digital people), in which case it's very hard to imagine how farming animals would make any sense.
Even with shorter-term and human-led space settlement, like bases on the Moon and Mars, I expect it to make very little logistical sense to farm animals (regardless of the psychological profile of whoever is doing the settling). The first settlements will be constrained by water, space, and especially labour, and raising animals is going to look needlessly painful and inefficient without the big economies of scale of factory farms.
That said, if animals are farmed in early settlements, then note that smaller animals tend to be the most efficient at converting feed into human-palatable calories (and also the most space-efficient). For that reason some people suggest insect farming (e.g. crickets, mealworms), which does seem much more likely than livestock or poultry! But another option is bioreactors of the kind being developed on Earth. In theory they could become more efficient than animals, at which point they would make the most practical sense (the capital cost of building the reactor isn't going to matter; taking anything into space is already crazy expensive). Also, a lot of food will probably be imported as payload early on; unsure if that's relevant.
So I think I'm saying the cultural attitudes of early space settlers are probably less important than the practical mechanisms by which most of space is eventually settled. Especially if most future people are not biological humans, which kind of moots the question.
I do think it's valuable and somewhat relieving to point out that animal farming could plausibly remain an Earth-only problem!
I endorse many (more) people focusing on x-risk, and it is a motivation and focus of mine; but I don't endorse “we should act confidently as if x-risk is the overwhelmingly most important thing”.
Honestly, I think the explicitness of my points misrepresents what it really feels like to form a view on this, which is to engage with lots of arguments and see what my gut says at the end. My gut is moved by the idea of existential risk reduction as a central priority, and it feels uncomfortable being fanatical about it and suggesting others do the same. But it struggles to credit particular reasons for that.
To actually answer the question: (6), (5), and (8) stand out, and feel connected.
In this spirit, here are some x-risk sceptical thoughts:
These thoughts make me hesitant about confidently acting as if x-risk is overwhelmingly important, even compared to other potential ways to improve the long-run future, or other framings of the importance of helping navigate the transition to very powerful AI.
But I still think existential risk matters greatly as an action-guiding idea. I like this snippet from the FAQ page for The Precipice —
But for most purposes there is no need to debate which of these noble tasks is the most important—the key point is just that safeguarding humanity’s longterm potential is up there among the very most important priorities of our time.
[Edited a bit for clarity after posting]
Thanks for the comment, Owen.
I agree with your first point and I should have mentioned it.
On your second point, I am assuming that ‘solving’ the problem means solving it by a date, or before some other event (since there's no time in my model). But I agree this is often going to be the right way to think, and a case where the value of working on a problem can vary smoothly with the resources spent on it, even under certainty.
Thanks! I'm not trying to resolve concerns around cluelessness in general, and I agree there are situations (many or even most of the really tough ‘cluelessness’ cases) where the whole ‘is this constructive?’ test isn't useful, since that can be part of what you're clueless about, or other factors might dominate.
Well, I'm saying the ‘is this constructive’ test is a way to latch on to a certain kind of confidence, viz. the confidence that you are moving towards a better world. If others also take constructive actions towards similar outcomes, and/or in the fullness of time, you can be relatively confident you helped get to that better world.
This is not the same thing as saying your action was right, since there are locally harmful ways to move toward a better world. And so I don't have as much to say about when or how much to privilege this rule!