Sharing from my personal blog: https://spiralprogress.com/2024/11/18/the-tyranny-of-existential-risk/
“Moral demandingness” refers to the objection that a moral theory cannot be correct if it requires too much of its adherents. This has typically been used in reference to severe but still basically mundane demands that we, for instance, give much more to charity.
When it comes to existential risk however, the demand goes from severe to nearly crippling. If there really is a grand future at stake, in the literal sense of an unfathomably large cosmic endowment, it is difficult to justify spending any time or resources on any project other than the prevention of threats to this future.
This goes beyond “cause prioritization”: it is the total subjugation of all other values. How should we confront this demand?
- Radical uncertainty
- Solution: If we simply cannot predict the future or the impact our actions will have on it, it stands to reason that there is no demand on us to attempt to influence the future for the better.
- Counter-argument: But surely there is some non-zero signal? We may be highly uncertain, but to accept this solution, you have to believe that we are so uncertain that there is almost no ability to shape the future in expectation. And even if you feel this is true currently, you have to further believe that we have no prospects for improving our ability to forecast, such that this uncertainty is inescapable.
- Reductio Ad Absurdum
- Solution: If following utilitarian ethics to this degree results in such an absurdly demanding conclusion, that is a strong argument against utilitarian ethics. One man’s modus ponens…
- Counter-argument: How much are you willing to abandon? Even a more “commonsense” morality that simply states “it is good to save human lives if you have the opportunity” demands immense care for the far future. Trying to come up with a moral theory that does not result in some kind of extreme demandingness is harder than it looks, and you might not like where that road takes you either.
- Aliens
- Solution: The cosmic endowment is only so formidable if we have the potential to colonize the universe and fill it with life (or some other kind of morally-relevant experience). But if most of the universe is already taken, or if it seems likely that our own world is already part of some alien experiment, this endowment was never ours to begin with.
- Counter-argument: This cashes out as a reduction of the expected value of the cosmic endowment proportional to your credence in this form of alien life. But even if you’re 99% confident, or 99.999%, the cosmic endowment remains, in expectation, large enough to motivate the same basic logic.
- Against Safetyism
- Solution: Even if we are primarily concerned with preventing threats to the far future, attempting to do so directly is not the best strategy. Rather than trying to prevent progress from happening, it is best to tinker, explore and create, to remain curious, and then to fend off dangers as they manifest, rather than working to control all outcomes.
- Counter-argument: This might be instrumentally true, but it doesn’t challenge the fundamental set of claims, so it is only a variation in tactics rather than in end goal. Even if we take this to be the best approach, we are left with the demandingness of the original formulation. An alternative counter-argument is that for some kinds of risks, this approach fails because even a single serious safety incident could be so catastrophic that we don’t get the chance to learn and improve.
- Self-Care
- Solution: If demandingness is actually “crippling”, the best way to approach the problem is to start by chilling out a bit. We won’t get anywhere if we are too busy working to sleep, eat, and take care of ourselves. Even more “luxurious” goods like friendship and love can be justified through their ability to increase our long term productivity and motivation. Moreover, living relatively “normal” lives is the best way to promote the movement and avoid alienating people who could contribute, but won’t if we all look like starved, sleep-deprived and unwashed zealots.
- Counter-argument: This collapses as timelines get shorter and it becomes less tenable to believe that these kinds of luxuries get amortized. But also: is this not just super bleak? We are freed in some ways but worse off in others. And self-care as a recruitment strategy just feels dishonest and cumbersome, and leaves you optimizing for the appearance of self-care rather than actually having a life.
- Sign-uncertainty
- Solution: Maybe we don’t face truly radical uncertainty, in the sense of not knowing at all what the future holds, but we are uncertain about the direction of our actions’ effects. Maybe promoting AI safety led to faster AI development? Maybe trying to control compute leads to authoritarian outcomes that are bad in their own way?
- Counter-argument: Again, this is fine, but it doesn’t free you from the second-order burden of investing in epistemics, and it relies on the tenuous claim that our uncertainty is so overwhelming that we have no sense of which actions are good in expectation.
- Hedonism
- Solution: It is not about coming up with a new moral theory that avoids this degenerate outcome, it is just about abandoning demanding moral theories altogether.
- Counter-argument: Hedonism can just as easily lead to a similar kind of demandingness where the upshot is that you live forever and experience tremendous amounts of pleasure, and must therefore dedicate all your resources to making this happen as long as there is even a sliver of hope.
- Reframing
- Solution: Is demandingness so bad? Isn’t it actually a great feeling to know that so many lives are in your hands, and that you have the potential to do immense good in your lifetime? If you could go back in time and command the army to defeat Hitler and free the Jews even a week sooner, wouldn’t you jump at the opportunity even if it meant living in trenches?
- Counter-argument: Acts of heroism, even those entailing great sacrifice, feel noble and inspiring in the moment. They feel far less so when the demand is: you are young and healthy and must dedicate your entire life to this one cause. Moreover, I will make the obvious academic philosophy point that demandingness is not merely giving up on promoting other values; it might entail acting very far against them. For instance, torturing some huge number of kittens today to reduce the odds of existential risk by even a tiny fraction of a percent. This cannot be so easily reframed.
- Pessimism
- Solution: What if the far future is actually bad? It could be that even if we colonize the universe, it is just not a very happy place. Maybe humans are happy, but we do so much factory farming at ever-higher efficiency that we cause tremendous suffering as we grow. Maybe this is already happening under capitalism relative to the past. Is this kind of future worth defending?
- Counter-argument: This might get you out of the most straightforward “threat-prevention” demand, but it does not get you out of the equally stringent demand for trajectory change. If the far future is at risk of being really bad, you should work very hard to make it go as well as possible. Really extreme pessimists might say this gives us a reason to actually bring about threats, which moves you away from some forms of x-risk discourse towards something pretty new, but that new thing remains equally demanding.
- Accept the argument as stated
- Solution: Maybe all these reasons are themselves a kind of reductio ad absurdum against not caring about existential risk. Aliens? Trying to help leads to harm? Abandoning ethics? Really? Shouldn’t we dispense with the mental gymnastics and just embrace the pretty obvious, commonsense and straightforward conclusion that it’s worth working very hard to prevent literal extinction?
- Counter-argument: We should be willing to go to pretty great lengths to avoid the conclusion that everything in our lives must be sacrificed to promote a cause, no matter how good or noble the cause is. These great lengths might not be sufficient, and at the end of the day the obligation may still remain, but if you actually take this demand seriously, you should also take seriously wanting to be free from it.
Some of you will cynically shrug and see this entire exercise as neurotic and silly. But seriously: what is your actual view? Do you have better ideas that are coherent and that you’re willing to say out loud? Are you sure you’re not just averting your eyes?