Co-authored by Nick Beckstead, Peter Singer, and Matt Wage
Many scientists believe that a large asteroid impact caused the extinction of the dinosaurs. Could humans face the same fate?
It’s a possibility. NASA has tracked most of the large asteroids near Earth and many of the smaller ones. If a large asteroid were found to be on a collision course with Earth, early detection could give us time to deflect it. NASA has analyzed several options for deflecting an asteroid in this kind of scenario, including using a nuclear strike to knock it off course, and some of these strategies would be likely to work. The search is, however, not yet complete. The B612 Foundation has recently begun a project to track the remaining asteroids in order to “protect the future of civilization on this planet.” Finding one of these asteroids could be the key to preventing a global catastrophe.
Fortunately, the odds of an extinction-sized asteroid hitting the Earth this century are low, on the order of one in a million. Unfortunately, asteroids aren’t the only threats to humanity’s survival. Other potential threats stem from bio-engineered diseases, nuclear war, extreme climate change, and dangerous future technologies.
Given that there is some risk of humanity going extinct over the next couple of centuries, the next question is whether we can do anything about it. We will first explain what we can do about it, and then ask the deeper ethical question: how bad would human extinction be?
The first point to make here is that if the risks of human extinction turn out to be “small,” this shouldn’t lull us into complacency. No sane person would say, “Well, the risk of a nuclear meltdown at this reactor is only 1 in 1000, so we’re not going to worry about it.” When there is some risk of a truly catastrophic outcome and we can reduce or eliminate that risk at an acceptable cost, we should do so. In general, we can measure how bad a particular risk is by multiplying the probability of the bad outcome by how bad the outcome would be. Since human extinction would, as we shall shortly argue, be extremely bad, reducing the risk of human extinction by even a very small amount would be very good.
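To put that general rule in symbols, here is a minimal worked illustration. The one-in-a-million probability is the asteroid figure mentioned above; the 10^16 figure for potential future lives is a placeholder chosen purely for illustration, not an estimate defended in this article:

\text{expected badness} = P(\text{bad outcome}) \times \text{badness of that outcome}

10^{-6} \times 10^{16}\ \text{lives} = 10^{10}\ \text{lives in expectation}

On these illustrative numbers, even a one-in-a-million risk corresponds to an expected loss of ten billion lives, which is why reducing such a probability by even a tiny amount can be extremely valuable.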
Humanity has already done some things that reduce the risk of premature extinction. We’ve made it through the Cold War and scaled back our reserves of nuclear weapons. We’ve tracked most of the large asteroids near Earth. We’ve built underground bunkers for “continuity of government” purposes, which might help humanity survive certain catastrophes. We’ve instituted disease surveillance programs that track the spread of diseases, so that the world could respond more quickly in the event of a large-scale pandemic. We’ve identified climate change as a potential risk and developed some plans for responding, even if the actual response so far has been lamentably inadequate. We’ve also built institutions that reduce the risk of extinction in subtler ways, such as decreasing the risk of war or improving governments’ ability to respond to a catastrophe.
One reason to think that it is possible to further reduce the risk of human extinction is that all these things we’ve done could probably be improved. We could track more asteroids, build better bunkers, improve our disease surveillance programs, reduce our greenhouse gas emissions, encourage non-proliferation of nuclear weapons, and strengthen world institutions in ways that would probably further decrease the risk of human extinction. There is still a substantial challenge in identifying specific worthy projects to support, but it is likely that such projects exist.
So far, surprisingly little work has been put into systematically understanding the risks of human extinction and how best to reduce them. There have been a few books and papers on the topic of low-probability, high-stakes catastrophes, but there has been very little investigation into the most effective methods of reducing these risks. We know of no in-depth, systematic analysis of the different strategies for reducing these risks. A reasonable first step toward reducing the risk of human extinction is to investigate these issues more thoroughly, or support others in doing so.
If what we’ve said is correct, then there is some risk of human extinction and we probably have the ability to reduce this risk. There are a lot of important related questions, which are hard to answer: How high a priority should we place on reducing the risk of human extinction? How much should we be prepared to spend on doing so? Where does this fit among the many other things that we can and should be doing, like helping the global poor? (On that, see www.thelifeyoucansave.com) Does the goal of reducing the risk of extinction conflict with ordinary humanitarian goals, or is the best way of reducing the risk of extinction simply to improve the lives of people alive today and empower them to solve the problem themselves?
We won’t try to address those questions here. Instead, we’ll focus on this question: How bad would human extinction be?
One very bad thing about human extinction would be that billions of people would likely die painful deaths. But in our view this is far from the worst thing about human extinction. The worst thing about human extinction is that there would be no future generations.
We believe that future generations matter just as much as our generation does. Since there could be so many generations in our future, the value of all those generations together greatly exceeds the value of the current generation.
Considering a historical example helps to illustrate this point. About 70,000 years ago, there was a supervolcanic event known as the Toba eruption. Many scientists believe that this eruption caused a “volcanic winter” which brought our ancestors close to extinction. Suppose that this is true. Now imagine that the Toba eruption had eradicated humans from the Earth. How bad would that have been? Some 3,000 generations and 100 billion lives later, it is plausible to say that the death and suffering caused by the Toba eruption would have been trivial in comparison with the loss of all the human lives that have been lived from then to now, and of everything humanity has achieved since that time.
Similarly, if humanity goes extinct now, the worst aspect of this would be the opportunity cost. Civilization began only a few thousand years ago. Yet Earth could remain habitable for another billion years. And if it is possible to colonize space, our species may survive much longer than that.
Some people would reject this way of assessing the value of future generations. They may claim that bringing new people into existence cannot be a benefit, regardless of what kind of life these people have. On this view, the value of avoiding human extinction is restricted to people alive today and people who are already going to exist, and who may want to have children or grandchildren.
Why would someone believe this? One reason might be that if people never exist, then it can’t be bad for them that they don’t exist. Since they don’t exist, there’s no “them” for it to be bad for, so causing people to exist cannot benefit them.
We disagree. We think that causing people to exist can benefit them. To see why, first notice that causing people to exist can be bad for those people. For example, suppose some woman knows that if she conceives a child during the next few months, the child will suffer from multiple painful diseases and die very young. It would obviously be bad for her child if she decided to conceive during the next few months. In general, it seems that if a child’s life would be brief and miserable, existence is bad for that child.
If you agree that bringing someone into existence can be bad for that person and if you also accept the argument that bringing someone into existence can’t be good for that person, then this leads to a strange conclusion: being born could harm you but it couldn’t help you. If that is right, then it appears that it would be wrong to have children, because there is always a risk that they will be harmed, and no compensating benefit to outweigh the risk of harm.
Pessimists like the nineteenth-century German philosopher Arthur Schopenhauer and the contemporary South African philosopher David Benatar accept this conclusion. But if parents have a reasonable expectation that their children will have happy and fulfilling lives, and if having children would not be harmful to others, then it is not bad to have children. More generally, if our descendants have a reasonable chance of having happy and fulfilling lives, it is good for us to ensure that our descendants exist rather than not. Therefore we think that bringing future generations into existence can be a good thing.
The extinction of our species – and quite possibly, depending on the cause of the extinction, of all life – would be the end of the extraordinary story of evolution that has already led to (moderately) intelligent life, and which has given us the potential to make much greater progress still. We have made great progress, both moral and intellectual, over the last couple of centuries, and there is every reason to hope that, if we survive, this progress will continue and accelerate. If we fail to prevent our extinction, we will have blown the opportunity to create something truly wonderful: an astronomically large number of generations of human beings living rich and fulfilling lives, and reaching heights of knowledge and civilization that are beyond the limits of our imagination.
This article is generally sound, but I'm not sure I agree with the idea that the experiences of the current generation are trivial compared to the possibility of future generations. Future generations don't exist yet and therefore have nothing to lose, while living creatures have everything to lose.
Sure, a human could be conceived and live a reasonably happy life (if they're lucky), but they could also never be conceived and be none the worse. When we, as living humans, think of the possibility of never having been born, we are saddened because we know what we have to lose, but an unfertilized egg and a sperm cell have no such feelings.
Because they're only newly conscious? The same can be said of your self tomorrow morning, but you'll have memories and experiences that will quickly orient you to your identity, your place in the world and your desires, as will future generations.
But I'm already alive, so if I'm no longer alive tomorrow morning it'll mean that I died during the night, which usually involves a certain amount of suffering. Even if I died without knowing it and suffered nothing myself, my loved ones would still suffer, and my life, which is already established as being happy, would have been cut short for no good reason.
None of these things is true for a human who was never conceived, who can't feel pain and has no established ability (or desire) to experience a happy life.
I have a minor philosophical nitpick.
There are (checks Wikipedia) roughly 400 nuclear reactors in operation, which means that if everyone followed this reasoning, the chance of a meltdown happening somewhere would be pretty high.
Existential risks with low probabilities don't add up in the same way. My own view is that the magnitude of a risk equals the badness times the probability (which for x-risk comes out to very, very bad), but not everyone will agree with me, and I'm not sure the nuclear reactor example would convince them.
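To put rough numbers on the reactor point (assuming, purely for illustration, that the reactors are independent and each really does carry a 1-in-1000 risk over the period in question):

P(\text{at least one meltdown}) = 1 - (1 - 1/1000)^{400} \approx 0.33

So individually "small" risks of that kind aggregate into a large one across many reactors, whereas there is only one Earth: an existential risk has to be taken seriously because of the badness-times-probability measure, not because it adds up across many trials.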
Has anyone done an EA evaluation of the expected value of the B612 Foundation's Sentinel Mission?
Not the Sentinel Mission in particular, but some work has been done on asteroids. Basically, the value of asteroid surveillance for reducing extinction risk is small, since we have already identified nearly all of the >1km asteroids, and that's roughly the size they would need to be to cause an extinction-level catastrophe.
That's to say nothing of the prospects for learning to intercept asteroids, or for preventing events that fall short of an extinction-level threat.
The other thing to note here is that we've survived asteroids for lots of geological time (millions of years), so it would be really surprising if we got taken out by a natural risk in the next century. That's why people generally think that tech risks are more likely.
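To put a very rough number on that survival argument (the constant per-century rate and the ~200,000-year, i.e. ~2,000-century, figure for Homo sapiens are simplifying assumptions I'm adding here): if natural extinction events arrived with a constant per-century probability p, surviving N centuries would have probability (1 - p)^N, and even p = 1/1000 would make our track record fairly unlikely:

P(\text{surviving } N \text{ centuries}) = (1 - p)^{N}, \qquad (1 - 1/1000)^{2000} \approx 0.14

So a long record of surviving natural risks is some evidence that the natural per-century risk is well below that, which is part of why people worry more about new technologies.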
I can't find much online, but there's this, and you could also search for Carl Shulman and Seth Baum, who might also have covered the issue.