TL;DR: It doesn’t only matter whether an event wipes out humanity; its effect on other life matters too, as this affects the probability of intelligent life re-evolving. This could change how we choose to prioritise and allocate resources between different x-risk areas.
Acknowledgements: This post was inspired by a conversation I had with Toby Ord when he visited EA Warwick. I’m very grateful for his ideas and for suggesting I write this. I’m grateful also to Aimee Watts, Simon Marshall, Charlotte Seigmann and others from EA Warwick for feedback and useful discussion.
I have not come across the argument below elsewhere, but I am not certain it has not been made before.
Introduction
One of the main reasons we care about preventing x-risks is that they could deny the possibility of a grand future in which galactic expansion allows for huge amounts of welfare. Such a future could be attained by humans, but also by other intelligent life that might originate on Earth.
I believe we are guilty of a substitution fallacy when considering extinction events. We are substituting:
1) “What is the probability of an event denying a grand future?” with
2) “What is the probability of the event killing all humans, or killing enough humans that human civilisation never achieves a grand future?”
Why are these two questions different? Because humanity may not recover, but a new species may emerge on Earth and attain this grand future.
An Example
Why is this consideration important? Because the answers to these two questions may differ, and crucially the way they differ may not be the same for each x-risk. Consider the following two scenarios:
- A) A genetically engineered pathogen is released and kills 99% of all humans. It does not cross into other species.
- B) An asteroid with a diameter of 15 km (larger than the Chicxulub asteroid thought to have killed the dinosaurs) collides with the Earth, killing 99% of all humans. No land vertebrate weighing more than 25 kg survives.
In each scenario, humanity almost certainly goes extinct. However, the chance of human-level intelligent life re-evolving intuitively seems very different between them. In scenario A) other species are unaffected, so somewhat intelligent species are more likely to evolve into intelligent life that could achieve a grand future than in a world where all medium to large vertebrates have been killed.
Even if this intuition is wrong, it seems very likely that the probability of intelligent life not re-evolving would differ between the two scenarios.
Suppose in the next century there is a 1% chance of scenario A) and a 0.01% chance of scenario B). If we only consider the substituted question 2, about the extinction of humans, then we care about scenario A) 100 times more than scenario B). But suppose that in scenario A) there is a 70% chance of intelligent life not re-evolving and in scenario B) there is a 95% chance of intelligent life not re-evolving.
Then for question 1, with A), we get a 0.7% chance of a true existential event, and with B) we get a 0.0095% chance of a true existential event. So we still care more about A), but now by only ~70 times more.
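Making the arithmetic explicit, with the same illustrative numbers as above:

```latex
\begin{align*}
P(\text{true existential event via A}) &= P(A)\times P(\text{no re-evolution}\mid A) = 1\%\times 70\% = 0.7\% \\
P(\text{true existential event via B}) &= P(B)\times P(\text{no re-evolution}\mid B) = 0.01\%\times 95\% = 0.0095\% \\
\text{ratio} &= 0.7\% \,/\, 0.0095\% \approx 73.7
\end{align*}
```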
This probabilistic reasoning is not complete, as new human-level intelligent life may succumb to these x-risks again.
Another crucial consideration may be the timeline of intelligent life re-evolving. In scenario B), intelligent life may re-evolve but it may take 100 million years, as opposed to 1 million years in scenario A). How to weigh this in the probabilities is not immediately clear, but it also gives us reason to reduce the difference in how much we care about the two scenarios.
Terminology
A useful distinction may be between a “human existential threat”, which prevents humanity attaining a grand future, and a “total existential threat”, which prevents Earth-originating intelligent life attaining a grand future.
Even this may not be a correct definition of a total existential threat, as intelligent life may originate on other planets. This distinction is important because an unaligned AI may threaten life on other planets too, whereas climate change on Earth, for example, only threatens life on Earth. The term “Earth existential threat” may then be appropriate for an event which prevents Earth-originating intelligent life attaining a grand future.
Numerical Values
Putting actual values on life re-evolving and on timelines is an incredibly difficult task for which I do not have suitable experience. However, I offer some numbers here purely speculatively, to illustrate the comparisons we would hope to make.
For probabilities of human extinction from each event I use the probabilities given by Toby Ord in The Precipice.
*If an unaligned AI causes the extinction of humans it seems it would also cause the extinction of any other equally intelligent beings that naturally evolved.
This calculation may not change which risks we are most worried about, but there may be non-trivial changes in probabilities that affect resource allocation once tractability and neglectedness are factored in. For example, we may invest more heavily in AI research at the expense of biorisk.
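As a very rough sketch of the kind of comparison intended, here is the calculation in code. All numbers below are placeholders for illustration (the pathogen and asteroid figures reuse the made-up numbers from the example above; the AI figures are equally hypothetical), not estimates:

```python
# Illustrative only: combine P(human extinction event this century) with
# P(intelligent life does not re-evolve afterwards) to get P(true existential event).
# All numbers are hypothetical placeholders, not estimates.

risks = {
    # name: (P(human extinction), P(no re-evolution given that event))
    "engineered pathogen": (0.01, 0.70),
    "asteroid impact": (0.0001, 0.95),
    "unaligned AI": (0.05, 1.00),  # an unaligned AI would plausibly also preclude successor species
}

for name, (p_extinction, p_no_reevolution) in risks.items():
    p_true = p_extinction * p_no_reevolution
    print(f"{name}: human extinction risk {p_extinction:.4%}, "
          f"true existential risk {p_true:.4%}")
```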
How to incorporate the timeline values is less clear, and would require considering a more complex model.
A possible model
The Earth seems like it will remain habitable for another 600 million years, so there is a time limit on intelligent life re-evolving. We could model some measure N of how long until current life on Earth reaches existential security. Let T be the amount of time the Earth will remain habitable for. Then in each century various existential events, with different probabilities (some independent of N, some dependent on N), could occur and either increase N by a certain amount or trigger a complete fail state. Each century, T decreases by 1. The question is whether N reaches zero before T does, or a fail state occurs.
Such a model becomes complicated quite quickly, with different x-risks having different probability distributions and impacts on N. Even this model is a simplification, as the amount of time for intelligent life to redevelop after an existential event is itself a random variable.
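To make this concrete, here is a minimal Monte Carlo sketch of the model. All probabilities and setback times are hypothetical placeholders, and for simplicity the per-century event probabilities are held constant rather than depending on N:

```python
import random

# Minimal sketch of the model above. All numbers are hypothetical placeholders.
T_START = 6_000_000   # centuries of habitability left (~600 million years)
N_START = 5           # centuries until current life reaches existential security

# (kind, setback in centuries if survivable, probability per century)
EVENTS = [
    ("fail", None, 0.0001),           # e.g. unaligned AI: no re-evolution possible
    ("setback", 1_000_000, 0.00001),  # e.g. large asteroid: long re-evolution time
    ("setback", 10_000, 0.0001),      # e.g. pathogen: other species intact, faster re-evolution
]

def run_once() -> bool:
    """True if Earth-originating life reaches existential security in time."""
    n, t = N_START, T_START
    while t > 0:
        if n <= 0:
            return True               # existential security reached
        for kind, setback, p in EVENTS:
            if random.random() < p:
                if kind == "fail":
                    return False      # total existential event
                n += setback          # intelligence must re-evolve
        n -= 1                        # a century of progress towards security
        t -= 1                        # a century less of habitability
    return False                      # Earth became uninhabitable first

trials = 1_000
print(sum(run_once() for _ in range(trials)) / trials)
```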
What conclusions can we draw?
It seems that such arguments could cause us to weigh more heavily x-risks that threaten more life on Earth than just humans. This could increase how much we care about risks such as global warming and nuclear war compared to biorisk.
We could also conclude that x-risks and GCRs are not fundamentally different. Each sets back the time till existential security is reached, just by radically different amounts.
How could this be taken further?
- Further reading and research specific to individual x-risks, to compare the chances of intelligent life re-evolving.
- Further development of a mathematical model, to understand how important timelines for re-evolution are.
Note: This is my first post on the EA forum. I am very grateful for any feedback or comments.
Edit: 0.095% changed to 0.0095% for risk of true existential event from a meteor impact in "An Example"
Welcome, and thanks for posting!
"kills 99% of all humans...In each scenario, humanity almost certainly goes extinct."
I don't think that this is true for your examples, but rather that humanity would almost certainly not be directly extinguished or even prevented from recovering technology by a disaster that killed 99%. 1% of humans surviving is a population of 78 million, more than the Roman Empire, with knowledge of modern agricultural techniques like crop rotation or the moldboard plough, and vast supplies built by our civilization. For a dinosaur-killer asteroid, animals such as crocodiles and our own ancestors survived that, and our technological abilities to survive and recover are immensely greater (we have food reserves, could tap energy from the oceans and dead biomass, can produce food using non-solar energy sources, etc). So not only would human extinction be quite unlikely, but by that point nonhuman extinctions would be very thorough.
For a pandemic, an immune 1% (immunologically or behaviorally immune) could rebuild civilization. If we stipulate a disease that killed all humans but generally didn't affect other taxa, then chimpanzees (with whom we have a common ancestor only millions of years ago) are well positioned to take the same course again, as well as more distant relatives if primates perished, so I buy a relatively high credence in intelligent life reemerging there.
Hi Carl,
Thank you very much for your comment! I agree with your point that 99% is probably not high enough to cause extinction. I think I wanted to provide examples of human extinction events, but should have been more careful about the exact values and situations I described.
On re-evolution after an asteroid impact, my understanding is that although species such as humans eventually evolved after the impact, had humanity existed at the time of the impact it would not have survived, as nearly all land vertebrates over 25 kg went extinct. So on biology alone humans would be unlikely to survive the impact. However, I agree our technology could massively alter the probability in our favour.
I hope that if the probabilities of human extinction from both events are lower, my comment on the importance of the effect on other species still holds.
Welcome to the forum!
Re-evolution timelines have another interesting effect on overall risk — all else equal, the more confident one is that intelligence will re-evolve, the more confident one should be that we will be able to build AGI,* which should increase one’s estimate of existential risk from AI.
So it seems that AI risk gets a twofold ‘boost’ from evidence for a speedy re-emergence of intelligent life:
*Shulman & Bostrom 2012 discuss this type of argument, and some complexities in adjusting for observation selection effects.
Thanks for your comment Matthew. This is definitely an interesting effect which I had not considered. I wonder whether, though the absolute AI risk may increase, it would not affect our actions, as we would have no way to affect the development of AI by future intelligent life once we are extinct. The only way I could think of to affect the risk of AI from future life would be to create an aligned AGI ourselves before humanity goes extinct!
Welcome to the Forum (or to posting)! I found this post interesting, and think it makes important points.
There's been a bit of prior discussion of these sorts of ideas (though I think the sort of model you propose is interesting and hasn't been proposed before). For example:
I've collected these sources because I'm working on a post/series about "crucial questions for longtermists". One of the questions I have in there is "What's the counterfactual to a human-influenced future?" I break this down into:
I think your post does a good job highlighting the importance of the question "How likely is future evolution of moral agents or patients on Earth, conditional on existential catastrophe?"
But it seemed like you were implicitly assuming that what other moral agents would ultimately do with the future would be equally valuable in expectation to what humanity would do? This seems a big question to me, and probably depends somewhat on metaethics (e.g., moral realism vs antirealism). From memory, there's some good discussion of this in "The expected value of extinction risk reduction is positive". (This is also related to Azure's comment.)
And I feel like these questions might be best addressed alongside the question about ETI. One reason for that is that discussions of the Fermi Paradox, Drake Equation, and Great Filter (see, e.g., this paper) could perhaps inform our beliefs about the likelihood of both ETI and future evolution of moral agents on Earth.
Hi Michael, thank you very much for your comment.
I was not aware of some of these posts and will definitely look into them, thanks for sharing! I also eagerly await a compilation of crucial questions for longtermists which sounds very interesting and useful.
I definitely agree that I have not given consideration to what moral views re-evolved life would have. This is definitely a big question. One assumption I may have implicitly used but not discussed is that
"While the probability of intelligent life re-evolving may be somewhat soluble and differ between different existential scenarios, the probability of it being morally aligned with humanity is not likely to differ in a soluble way between scenarios."
Therefore it should not affect how we compare different x-risks. For example, if we assumed re-evolved life had a 10% chance of being morally aligned with humanity, this would apply in all existential scenarios and so not affect how we compare them. The question of what being "morally aligned" with humanity means, and whether this is even what we want, is also a big question, I appreciate. I avoided discussing the moral philosophy as I'm uncertain how to consider it, but I agree it is a crucial question.
I also completely agree that considerations of ETI could inform how we consider probabilities of future evolution. It is my planned first avenue of research for getting a better grasp on the probabilities involved.
Thanks again for your comment and for the useful links!
Glad the "crucial questions for longtermists" series sounds useful! We should hopefully publish the first post this month.
This seems a reasonable assumption. And I think it would indeed mean that it's not worth paying much attention to differences between existential risks in how aligned with humanity any later intelligent life would be. But I was responding to claims from your original post like this:
I do think that that is true, but I think how big that factor is might be decreased by the possibility that a future influenced by (existentially secure) independently evolved intelligent life would be "less valuable" than a future influenced by (existentially secure) humans. For example, if Alice thinks that those independently evolving lifeforms would do things 100% as valuable as what humans would do, but Bob thinks they'd do things only 10% as valuable, Alice and Bob will differ on how much worse it is to wipe out all possible future intelligent life vs "just" wiping out humanity. And in the extreme, someone could even think that such intelligent life would do completely valueless things, or things we would/should actively disvalue.
(To be clear, I don't think that this undercuts your post, but I think it can influence precisely how important the consideration you raise is.)
That makes a lot of sense. If the probability of intelligent life re-evolving is low, or if the probability of it doing morally valuable things is low then this reduces the importance of considering the effect on other species.
Am I reading the 0.1% probability for nuclear war right as the probability that nuclear war breaks out at all, or the probability that it breaks out and leads to human extinction? If it's the former, this seems much too low. Consider that twice in history nuclear warfare was likely averted by the actions of a single person (e.g. Stanislav Petrov), and we have had several other close calls ( https://en.wikipedia.org/wiki/List_of_nuclear_close_calls ).
I believe it is the probability that a nuclear war occurs AND leads to human extinction, as described in The Precipice. I think I would agree that if it was just the probability of nuclear war, this would be too low, and a large reason the number is small is the difficulty of a nuclear war causing human extinction.
Thank you for this post! Very interesting.
(1) Is this a fair/unfair summary of the argument?
P1 We should be indifferent on anti-speciesist grounds whether humans or some other intelligent life form enjoys a grand future.
P2 The risk of extinction of only humans is strictly lower than the risk of extinction of humans + all future possible (non-human) intelligent life forms.
C Therefore we should revise downwards the value of avoiding the former/raise the value of the latter.
(2) Is knowledge about current evolutionary trajectories of non-human animals today likely to completely inform us about 're-evolution'? What are the relevant considerations?
Hi, thanks for your questions!
(1) I definitely agree with P1. For P2, would it not be the case that the risk of extinction of humans is strictly greater than the risk of extinction of humans and all future possible intelligent life, as the latter is a conjunction that includes the former? Perhaps a second premise could instead be
P2 The best approaches for reducing human existential risk are not necessarily the best approaches for reducing existential risk to humans and all future possible intelligent life
With a conclusion
C We should focus on the best methods of preventing "total existential risk", not on the best methods of preventing "human existential risk"
(subject to appropriate expected value calculations, e.g. preventing a human existential risk may in fact be the most cost-effective way of reducing total existential risk).
(2) I think unfortunately I do not have the necessary knowledge to answer these questions. It is something I hope to research further though. It seems that the probability of re-evolution in different scenarios has many considerations, such as the Earth's environment after the event, the initial impact on a given species, and the initial impact on other species. One thing I find interesting is to consider what impact things left behind by humanity could have on re-evolution. Humans may go extinct, but our buildings may survive to provide new biomes for species, and our technology may survive to be used by "somewhat"-intelligent life in the future.
Thanks for sharing this!
I happen to have made a not-very-good model a month or so ago to try to get a sense of how much the possibility of future species that care about x-risks affects x-risk today. It's here, and it has a bunch of issues (like assuming that it will take the same amount of time from now for a new species to evolve as it took for humans to evolve since the first neuron, assuming that all of Ord's x-risks don't reduce the possibility of future moral agents evolving, etc.), and possibly doesn't even get at the important things mentioned in this post.
But based on the relatively bad assumptions in it, it spat out that if we generally expect moral agents who reach Ord's 16% 100-year x-risk to evolve every 500 million years or so (assuming an existential event happens), and that most of the value of the future is beyond the next 0.8 to 1.2B years, then we ought to adjust Ord's figure down to 9.8% to 12%.
I don't think either the figure or the approach in that should be taken at all seriously though, as I spent only a couple of minutes on it and didn't think at all about better ways to do this - just writing this explanation of it has shown me a lot of ways in which it is bad. It just seemed relevant to this post and I wasn't going to do anything else with it :).
Thanks for your comment! I'm very interested to hear about a modelling approach. I'll look at your model and will probably have questions in the near-future!
Hey! Your link sends us to this very post. Is this intentional?
Nope - fixed. Thanks for pointing that out.
We could survive by preserving data about humanity (on the Moon or other places), which will be found by the next civilisation on Earth, and they will recreate humans (based on our DNA) and our culture.
Thanks for your comment, I found that paper really interesting and it was definitely an idea I'd not considered before.
My main two questions would be:
1) What is the main value of humanity being resurrected? - We could inherently value the preservation of humanity and its culture. However, my intuition would be that humanity would be resurrected in small numbers and these humans might not even have very pleasant lives if they're being analysed or experimented on. Furthermore, the resurrected humans are likely to have very little agency, being controlled by technologically superior beings. Therefore it seems unlikely that the resurrected humans could create much value, much less achieve a grand future.
2) How valuable would information on humanity be to a civilisation that had technologically surpassed it? - The civilisation that resurrected humanity would probably be much more technologically advanced than humanity, and might even have its own AI, as mentioned in the paper. It would then seem that it must have overcome many of the technological x-risks to reach that point, so information on humanity succumbing to one may not be that useful. It may not be prepared for certain natural x-risks that could have caused human extinction, but these seem much less likely than man-made x-risks.
Thanks again for such an interesting paper!
The article may reflect my immortalist viewpoint that in almost all circumstances it is better to be alive than not.
Future torture is useless and thus unlikely. Let's look at humanity: as we mature, we tend to care more about other species that have lived on Earth and about minority cultures. Torture for fun or for experiment is only for those who don't know how to get information or pleasure in other ways. It is unlikely that an advanced civilization would deliberately torture humans. Even if resurrected humans do not have full agency, they may have much better lives than most people on Earth have now.
Reconstruction of the past is universally interesting. We have a mammoth resurrection project, a lot of archeological studies, the preservation program for the uncontacted Sentinelese tribe, etc. - so we find a lot of value in studying the past, preserving it and reconstructing it, and I think this is natural for advanced civilizations.
The x-risk information will be vital for them before they get superintelligence (but humans could be resurrected after it). Imagine that the Apollo program had found some data storage on the Moon: it would have been one of the biggest scientific discoveries of all time. Some information could be useful for an end-of-20th-century-equivalent humanity, like estimates of the probability of natural pandemics or nuclear wars.
Past data is useful. A future civilization on Earth will get a lot of scientific data from other fields of knowledge: biology, geology, even solutions to some math problems which we have solved but they have not yet. Moreover, they will get access to an enormous amount of art, which may have fun value (or not).
The resurrection (on good conditions) here is part of an acausal deal from our side, similar to Parfit's hitchhiker. They may not keep their side of the deal, so there is a risk. Or they may do it much later, after they advance to an interstellar civilization and know that there is minimal risk and cost for them. For example, if they give 0.0001 of all their resources to us, but colonise a whole galaxy, that is still 10 million stars under human control, or billions of billions of human beings: much better than extinction.
TL;DR: if there is any value in human existence, it is reasonable to desire the resurrection of humanity (under no-torture conditions) + they will get useful x-risk information at an earlier stage (end-of-20th-century equivalent) than the stage at which they would actually resurrect us (they may do it much later, and only if this information was useful, thus closing the deal).
Thanks for your response!
I definitely see your point on the value of information to the future civilisation. The technology required to reach the Moon and find the cache is likely quite different to the level required to resurrect humanity from the cache, so the information could still be very valuable.
An interesting consideration may be how we value a planet being under human control vs under the control of this new civilisation. We may think we cannot assume that the new civilisation would be doing valuable things, while a human-controlled planet would be quite valuable. This consideration would depend a lot on your moral beliefs. If we don't extrapolate the value of humanity to the value of this new civilisation, we could then ask whether we can extrapolate from how humanity would respond to finding the cache on the Moon to how the new civilisation would respond.
If they evolve, say, from cats, they will share the same type-values as all mammals: power, sex, love of children. But their token-values will be different, as they will love not human children but kittens, etc. An advanced non-human civilization may be more similar to ours than we now are to the Ancient Egyptians, as it would have more rational world models.
This is the biggest argument for me against the consideration. I can easily imagine that it would take way longer than that for intelligent life to reemerge. It took something like 4.6 billion years for us to evolve, and in roughly 0.5 billion years the Sun will make the Earth uninhabitable. I guess if other primates survive that is a "good" starting point for evolution, but intelligent life doesn't seem to me to be a necessary step for survival.
Considering evolutionary timelines is definitely very hard because it's such a chaotic process. I don't have too much knowledge about evolutionary history and am hoping to research this more. I think after most human existential events, the complexity of the life that remains would be much greater than that for most of the history of the Earth. So although it took humans 4.6 billion years to evolve "from scratch", it could take significantly less time for intelligent life to re-evolve after an existential event as a lot of the hard evolutionary work has already been done.
I could definitely believe it could take longer than 0.5 billion years for intelligent life to re-evolve, but I'd be very uncertain on that and give some credence that it could take significantly less time. For example, humanity evolved "only" 65 million years after the asteroid that caused the dinosaur extinction.
The consideration of how "inevitable" intelligence is in evolution is very interesting. One argument that high intelligence would be likely to re-emerge could be that humanity has shown it to be a very successful strategy. So it would just take one species to evolve high levels of intelligence for there to then become a large number of intelligent beings on Earth again.
(Apologies for my slow reply to your comment!)
Something that seems worth noting is that an existential catastrophe (or "human existential catastrophe") need not involve human extinction, nor even "killing humans to the extent that human civilisation never achieves a grand future".
It could involve something like locking in a future that's better for all humans than the current world, with no "extra" human death involved (i.e., maybe people still die of old age but not in a sudden catastrophe), but with us now being blocked from ever creating anywhere near as much value as we could've. This might be a "desired dystopia", in Ord's terms. For example, we might forever limit ourselves to the Earth but somehow maintain it far beyond its "natural lifespan", or we might create vast suffering among non-humans.
I mention this here for two reasons:
Hi Michael, thanks for this comment!
This is a really good point and something I was briefly aware of when writing but did not take the time to consider fully. I've definitely conflated extinction risk with existential risk. I hope that if everything I said is restricted just to extinction risk, the conclusion still holds.
A scenario where humanity establishes its own dystopia definitely seems comparable to the misaligned AGI scenario. Any "locked-in" totalitarian regime would probably prevent the evolution of other intelligent life. This could cause us to increase our estimate of the risk posed by such dystopian scenarios and weigh these risks more highly.
I think the core points in your article work in relation to both extinction risk and existential risk. This is partly because extinction is one of the main types of existential catastrophe, and partly because some other existential catastrophes still theoretically allow for future evolution of intelligent life (just as some extinction scenarios would). So this doesn't undercut your post - I just wanted to raise the distinction as I think it's valuable to have in mind.
This seems plausible. But it also seems plausible there could be future evolution of other intelligent life in a scenario where humanity sticks around. One reason is that these non-extinction lock-ins don't have to look like jack-booted horrible power-hungry totalitarianism. They could be idyllic in many senses, or at least as far as the humans involved perceive them, and yet irreversibly prevent us from achieving anything close to the best future possible.
For a random, very speculative example, I wouldn't be insanely shocked if humanity ends up deciding that allowing nature to run its course is extremely valuable, so we lock in some sort of situation of us being caretakers and causing minimal disruption, with this preventing us from ever expanding through the stars but allowing for whatever evolution might happen on Earth. This could perhaps be a "desired dystopia" (if we could otherwise have done something far better), even if all the humans involved are happy and stay around for a very very long time.
Thanks for the elaboration. I haven't given much consideration to "desired dystopias" before and they are really interesting to consider.
Another dystopian scenario to consider could be one in which humanity "strands" itself on Earth through resource depletion. This could also prevent future life from achieving a grand future.
I think that’d indeed probably prevent evolution of other intelligent life on Earth, or prevent it achieving a grand future. But at first glance, this looks to me like a “premature extinction” scenario, rather than a clear-cut “dystopia”. This is because humanity would still be wiped out (when the Earth becomes uninhabitable) earlier than the point at which extinction is inevitable no matter what we do (perhaps this point would be the heat death of the universe).
But I’d also see it as fair enough if someone wanted to call that scenario more a “dystopia” than a standard “extinction event”. And I don’t think much turns on which label we choose, as long as we all know what we mean.
(By the way, I take the term “desired dystopia” from The Precipice.)
X-risks are dependent on one's value system. If you value some form of non-human life colonizing the universe, then human extinction is not necessarily an x-risk for you, and vice versa.
I believe this should be ~70 times more.
Thanks for your comment. "Caring about more" is quite a vague way of describing what I wanted to say. I think I was just trying to say that the risk of a true existential event from A is about 7 times greater than the risk from B (as 0.7/0.095 =~ 7.368) so it would be 7 times not 70 times?
Oh sorry, I must've misread! So the issue seems to be with the number 0.095%. The chance of a true existential event in B) would be 0.01% * 95% = 0.0095% (and not 0.095%). And, this leads us to 0.7/0.0095 =~ 73.68
Ah yes! Thanks for pointing that out!