Given the rate at which existential risks seem to be proliferating, it’s hard not to suspect that unless humanity comes up with a real game-changer, in the long run we’re stuffed. David Thorstad has recently argued that this poses a major challenge to longtermists who advocate prioritising existential risk. The more likely an x-risk is to destroy us, Thorstad notes, the less likely there is to be a long-term future. Nor can we solve the problem by mitigating this or that particular x-risk—we would have to reduce all of them. The expected value of addressing x-risks may not be so high after all. There would still be an argument for prioritising them if we are passing through a ‘time of perils’ after which existential risk will sharply fall. But Thorstad argues that this is unlikely to be the case.

Thorstad raises a variety of intriguing questions which I plan to tackle in a later post, picking up in part on Owen Cotton-Barratt’s insightful comments here. In this post I’ll focus on a particular issue—his claim that settling outer space is unlikely to drive the risk of human extinction low enough to rescue the longtermist case. Like other species, ours seems more likely to survive if it is widely distributed. Some critics, however, argue that space settlements would still be physically vulnerable, and even writers sympathetic to the project maintain they would remain exposed to dangerous information. Certainly many, perhaps most, settlements would remain vulnerable. But would all of them?

First let’s consider physical vulnerability. Daniel Deudney and Phil (Émile) Torres have warned of the possibility of interplanetary or even interstellar conflict. But once we or other sentient beings spread to other planets, the sheer distances involved would make travel between them time-consuming. On the one hand, that would seem to preclude any United Federation of Planets to keep the peace, as Torres notes; on the other, it would make warfare difficult and—very likely—pointless, just as it once was between Europe and the Americas. It’s certainly possible, as Thorstad notes, that some existential threat could doom us all before humanity gets to this point, but it doesn’t seem like a cert.

Deudney seems to anticipate this objection, arguing that ‘the volumes of violence relative to the size of inhabited territories will still produce extreme saturation….[U]ntil velocities catch up with the enlarged distances, solar space will be like the Polynesian diaspora—with hydrogen bombs.’ But if islands are far enough apart that there is no way to deliver the weapons, the fact that those weapons could obliterate them wouldn’t matter. It would still matter, but less so, if delivering the weapons took a long time, allowing the targeted island to prepare. Ditto, it would seem, for planets.

Suppose that’s right. We might still not be out of the woods. Deudney warns that ‘giant lasers and energy beams employed as weapons might be able to deliver destructive levels of energy across the distances of the inner solar system in times comparable to ballistic missiles across terrestrial distances.’ But he goes on to note that ‘the distances in the outer solar system and beyond will ultimately prevent even this form of delivering destructive energy at speeds that would be classified as instantaneous.’ That might not matter so much if the destructive energy reached its target in the end. Still, I’d be interested to hear whether any EA Forum readers know whether interstellar death rays of this kind are feasible at all.

There’s also the question of why war would occur. Liberals maintain that economic interdependence promotes peace, but as critics have long pointed out, it also gives states something to fight about. Bolivia and Bhutan don’t get into wars with each other because they don’t—I assume—interact very much at all. One theory about why there’s been so little interstate conflict within South America is that geographical barriers such as the Amazon not only make it hard to fight other countries but also deprive them of reasons for fighting. The same might be true of space settlements.

Even if space settlements were invulnerable to physical attack, this needn’t mean they would be safe. Information—such as misaligned AI or pernicious ideologies—could still spread at the speed of light. ‘Many risks, such as disease, war, tyranny and permanently locking in bad values’, Toby Ord writes in The Precipice, ‘are correlated across different planets: if they affect one, they are somewhat more likely to affect the others too. A few risks, such as unaligned [artificial general intelligence] and vacuum collapse, are almost completely correlated: if they affect one planet, they will likely affect all.’ This leads Ord to conclude that space colonization would be insufficient to eliminate existential risk.

This might be true for vacuum collapse, but is it for misaligned AI? For it to be transmitted to other worlds, it would take not only a sender but also recipients. Ditto for designer diseases, computer viruses and so forth. While many extraterrestrial civilizations would no doubt maintain contact with others, it seems improbable that all would. Some groups that settled new planets would do so to get away from Earth civilization for religious, moral or aesthetic reasons. They might have the explicit goal of preserving humanity from existential risk. Many such groups would deliberately isolate themselves, in part out of fear of the scenarios Ord discusses, and work hard to prevent contact with other planets.

These controls probably wouldn’t be airtight. Even the totalitarian states of the twentieth century couldn’t stop everyone from listening to foreign radio broadcasts. And a misaligned superintelligence might be extremely clever at tricking other planets into tuning in. Still, some of these civilisations would probably find ways to make it difficult, notably by developing artificial superintelligence of their own. If some ASIs succeeded in cutting off their worlds from communication, they might survive indefinitely. Alternatively, some might voluntarily renounce modern technology, or suffer a natural or anthropogenic cataclysm that returned them to Neolithic conditions, and survive a long time. If there were enough settlements, only a minority would have to survive in this way for there to be a big long-term future. In his Philosophy and Public Affairs paper, Thorstad explicitly brackets the effects of AI, and as I’ll argue in a later post, that puts a big asterisk on his argument.

None of this shows that human beings ought to expand into space—at least not without further argument. Trying to do so might, as Deudney and Torres argue, create new existential risks that are worse than the ones we already face, or—by spreading life to other planets—multiply the sum of wild animal suffering enormously. Even if some space settlements worked out great, the overall outcome could be bad if most became dystopias. What the possibilities do suggest, however, is that, contra Thorstad, there could well be an astronomical amount at stake in how we address both existential and suffering risks.

Comments

Interesting! I'm glad to see engagement with Thorstad's work; this is an area where I found myself less convinced.

Interstellar colonisation is insanely difficult and resource intensive, so I expect any widespread dispersal of humanity beyond our solar system to be extremely far off in the future. If you think that existential risk is high, there may be only an extremely small chance we survive to that point.

I'm also not sure about your point on "misaligned AIs". Firstly, this should be "extinctionist AIs" or something, as it seems very unlikely that all misaligned AIs would actively want to hunt down tiny remnants of humanity. But if they were out to kill us, why would they need a receiver? It's far easier to send an automated killer probe long distances than to send a human colony, so it seems they'd be able to hunt down colonies physically if they needed to.

Thanks! So far as I know, you're right about interstellar travel. But suppose we got a good bit of dispersal within the solar system, say, ten settlements. There seems a reasonable chance that at least some would deliberately break off communication with the rest of the solar system and develop effective means of policing this. They would then--so far as I can tell--be immune to existential risks transmitted by information--e.g., misaligned AI. 
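To put rough numbers on that intuition (mine, purely for illustration): if each settlement independently had some modest chance of successfully cutting itself off, the chance that at least one of them manages it rises quickly with the number of settlements.

```latex
% Illustrative only: assume each of n settlements independently isolates
% itself successfully with probability q.
P(\text{at least one isolated settlement}) = 1 - (1 - q)^{n}
% e.g. with n = 10 settlements and q = 0.2:
1 - (1 - 0.2)^{10} = 1 - 0.8^{10} \approx 1 - 0.11 = 0.89
```

The independence assumption is doing real work here, of course; a single system-wide authority that forbade isolation, for instance, would correlate the failures and lower the figure.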

It's true that they could still be vulnerable to physical attack, such as a killer probe, but how likely is this? It's conceivable that either human actors or misaligned ASI could decide to wipe out or conquer hermit settlements elsewhere in the solar system, but that strikes me as rather improbable. They'd have to have a strange set of motives.

It might also be hard to do. Since the aggressor would have to project power across a huge distance, we might expect the offence-defence balance to favour the defence, so long as the potential victims had some means of detecting a probe or other attack. (This wouldn't be true, however, if the reason the settlements had 'gone off the grid' was that they had returned to pre-modern conditions, either by choice or by catastrophe.)

I think any AI that is capable of wiping out humanity on Earth is likely to be capable of wiping humans out on all the planets in our solar system. Earth is far more habitable than those other planets, so settlements elsewhere would be correspondingly fragile and easier to take out. I don't think the distance would be much of an advantage: a current-day spacecraft takes only about ten years to get to Pluto, so the playing field is not that large.

I think your point about motivation is important, but it also applies within Earth. Why would an AI bother to kill off isolated Sentinelese islanders? A lot of the answers to that question (like it needs to turn all available resources into computing power) could also motivate it to attack an isolated Pluto colony. So if you do accept that AI is an existential threat on one planet, space settlement might not reduce it by very much on the motivation front.

I broadly agree with the arguments here. I also think space settlement has a robustness to its security that no other defence against GCRs does: it's trivially harder to kill everyone when more people are spread more widely than it is to kill off a handful on a single planet. Compare this to technologies designed to regulate a single atmosphere to protect against biorisk, AI safety mechanisms that operate on AGIs whose ultimate nature we still know very little of, global political institutions that could be subverted or overthrown, bunkers on a single planet, etc., all of which seem much less stable over more than a century or so.

It might be that AGI/vacuum decay/some other mechanism will always be lurking out there with the potential of destroying all life, and if so nothing will protect us - but if we're expected value maximisers (which seems to me a more reasonable strategy than any alternative), we should be fairly optimistic about scenarios where it's at least possible that we can stabilise civilisation.
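To put the expected-value point in rough symbols (a toy model, with placeholder numbers of my own, not estimates):

```latex
% Toy expected-value comparison; p, V and v are placeholders, not estimates.
% p: probability that a dispersed civilisation stabilises
% V: value of a stabilised long-term future,  v: value otherwise
E[\text{settlement}] = p \cdot V + (1 - p) \cdot v
% If V is astronomically large (say 10^{30} life-years), then even p = 10^{-3}
% gives an expected value of ~10^{27} life-years, swamping any modest v.
```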

If you haven't seen it, you should check out Christopher Lankhof's Security Among the Stars, which goes in depth on the case for space settlement.

You might also want to check out my recent project that lets you model the level of security afforded by becoming multiplanetary explicitly.

Thanks for the piece. I think there's an unexamined assumption here about the robustness of non-Earth settlement. It may be that one can maintain a settlement on another world for a long time, but unless we get insanely lucky, it seems unlikely to me that you could live on another planet without sustaining technology at or above our current capabilities. It may also be that in the medium term these settlements are dependent on Earth for manufacturing resources etc., which reduces their independence.

This isn't fatal to your thesis (especially in the long-long term), but I think having a high minimum technology threshold does undercut your thesis to some extent.

I don't think anyone's arguing current technology would allow self-sufficiency. But part of the case for offworld settlements is that they very strongly incentivise technology that would.

In the medium term, an offworld colony doesn't have to be fully independent to afford a decent amount of security. If it can a) outlast a catastrophe that is global on Earth but local in cosmic terms (e.g. a nuclear winter or an airborne pandemic) and b) get back to Earth once things are safer, it still makes your civilisation more robust.

Great post! I've never been convinced that the Precipice ends when we become multi-planetary. So I really enjoyed this summary and critique of Thorstad. And I might go even further and argue that not only does space settlement not mitigate existential risk, but it actually might make it worse. 

I think it's entirely possible that the more planets in our galaxy that we colonise, the higher the likelihood of the extinction of life in the universe will be. It breaks down like this:


Assumption 1: The powers of destruction will always be more powerful than the powers of construction or defence. That is, at the limits of technology, there will be powers that a galactic civilisation would not be able to defend against if they were created, even if the colonies do not communicate with one another and remain isolated.

Examples: 

  • Vacuum collapse (an expanding bubble of the true vacuum destroys everything in the universe). 
  • Unaligned superintelligence. Because of Assumption 1, I think an unaligned superintelligence would be able to destroy a galactic civilisation even if an aligned superintelligence were trying to protect it, especially if the superintelligence were aligned with destroying everything.
  • Self-replicating robots: spaceships that mine resources on a planet to replicate themselves, and then move on. This could quickly become an exponentially growing swarm.
  • Space lasers. Lasers travel at the speed of light through the vacuum of space, so no planet would be able to see them coming and might not be able to defend against them. This favours a strike-first strategy: the only way to protect yourself is to destroy everyone else before they destroy you.

Assumption 2: For any of the above examples (only one of them has to be possible), it would take only one civilisation in the galaxy to create it (by accident or otherwise) to put all life in the galaxy at risk.

Assumption 3: It would be extremely difficult to centrally govern all of these colonies and detect the development of these technologies, as the colonies will be light-years apart. It could take thousands of years to send and receive messages between the different colonies.

Assumption 4: The more colonies that exist in our galaxy, the higher the likelihood that one of those galaxy-ending inventions will be created.
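A rough hazard-rate sketch (my numbers, purely illustrative) shows how Assumptions 2 and 4 interact: if each colony independently has a small chance per century of producing a galaxy-ending technology, the aggregate hazard grows in proportion to the number of colonies, so the expected wait until the first such event shrinks accordingly.

```latex
% Illustrative hazard model: N colonies, each triggering a galaxy-ending
% event with independent probability p per century.
P(\text{no event in a century}) = (1 - p)^{N} \approx e^{-Np}
P(\text{no event after } T \text{ centuries}) \approx e^{-NpT}
% Expected time to the first event is roughly 1/(Np) centuries; e.g. with
% p = 10^{-6} and N = 10^{6} colonies, the expected wait is about one century.
```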


So if the above is true, then I see 3 options:

  1. We colonise the galaxy and all life in the universe becomes extinct due to the above argument. No long-term future. 
  2. Once we start colonising exoplanets, there's no stopping the wave of galactic colonisation. So we stay on Earth or within our own solar system until we can figure out a governance system that protects us against x-risks capable of destroying a galactic civilisation. This limits the importance of the long-term future.
  3. We colonise the galaxy with extreme surveillance of every colony through independently acting artificial intelligence systems that are capable of detecting and destroying any dangerous technologies. This sounds a lot like it could become an s-risk or devolve into tyranny, but it might be the best option.

I would like to look into this further. If it's true then longtermism is pretty much bust and we should focus on saving animals from factory farming instead... or solve the galaxy destroying problem... it would be nice to have a long pause to do that. 

Thanks! It seems to me that we should be cautious about assuming that attackers will have the advantage. IR scholars have spent a lot of time examining the offence-defence balance in terrestrial military competition, and while there's no consensus—even about whether a balance can be identified—I think it’s fair to say that most scholars who find the concept useful believe it tends to favour the defence. That seems particularly plausible when it’s a matter of projecting force at interstellar distances—though if space lasers are possible it could be a different matter (I’d like to know more about this, as I noted in my original post).

If, moreover, attack were possible, it might be with the aim not of destruction, but of conquest. If it succeeded, so long as it didn’t lead to outright extinction, this could still mean astronomical suffering. That is a problem with Thorstad’s argument which I’ll pick up in a subsequent post—it treats existential risks as synonymous with extinction ones.

Space lasers don't seem as much of a threat as Jordan posits. They have to be fired from somewhere. If that's within the solar system they're targeting, then that system will still have plenty of time to see the object that's going to shoot them arriving. If they're much further out, it becomes much harder both to aim them correctly and to provide enough power to keep them focused, and the source needs to be commensurately more powerful (as in more expensive to run), and with a bigger lens, so more visible while under construction and more vulnerable to conventional attack. Or you could just react to the huge lens by building a comparatively tiny mirror protecting the key targets in your system. Or you could build a Dyson swarm and not have any single target on which the rest of the settlement depends.

This guy estimates the maximum effective range of lasers against anything that can react (which, at a high enough tech level, includes planets) at about one light-second.
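For what it's worth, a diffraction-limited back-of-envelope (my own numbers, not the linked author's) points the same way: beam spread grows linearly with range, so at interstellar distances the energy is smeared over an area vastly larger than any target unless the aperture is absurdly large.

```latex
% Diffraction-limited divergence and spot size (numbers purely illustrative):
\theta \approx 1.22\,\frac{\lambda}{D}, \qquad r \approx \theta L
% With \lambda = 1\,\mu\mathrm{m} and a D = 10\,\mathrm{m} aperture:
%   \theta \approx 1.2 \times 10^{-7}\ \mathrm{rad}
%   L = 1 light-second (3 \times 10^{8}\,\mathrm{m}):   r \approx 37\,\mathrm{m}
%   L = 1 light-year  (9.5 \times 10^{15}\,\mathrm{m}): r \approx 1.2 \times 10^{9}\,\mathrm{m}
% At a light-year the beam covers roughly 10^{15} times the area it does at a
% light-second, so the delivered intensity falls by the same factor.
```

This is a separate consideration from the reaction-time argument in the linked estimate, but it points in the same direction.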

Self-replicating robots don't seem like they have any particular advantage when used as a weapon over ones with more benign intent.

Executive summary: The post argues that settling outer space can significantly reduce the risk of human extinction, contrary to David Thorstad's claims that the proliferation of existential risks reduces the expected value of addressing them through longtermist efforts.

Key points:

  1. Spreading to other planets would make interplanetary conflict difficult and pointless, reducing the physical vulnerability of space settlements.
  2. While information risks like misaligned AI could still spread between settlements, some settlements might deliberately isolate themselves to preserve humanity from such risks.
  3. If even a minority of settlements survive in isolation or a low-tech state, there could still be a "big long-term future" for humanity.
  4. The author acknowledges that space settlement efforts could also create new existential risks, but suggests the possibilities of successful space settlements deserve more consideration than Thorstad gives them.
  5. The author plans to further address Thorstad's arguments and the broader challenge of existential risks in a future post.
  6. The author's key crux is whether interstellar "death rays" are a feasible threat to widespread space settlements.

This comment was auto-generated by the EA Forum Team.
