This is an abbreviated version of the new 80,000 Hours problem profile on space governance.
Introduction
Over the last four decades, the cost to launch a kilogram of payload into space has fallen from roughly $50,000 (for NASA’s Space Shuttle) to less than $1,500 (for SpaceX’s Falcon Heavy). With its new reusable designs, SpaceX aims to further cut launch costs to around $10 per kilogram. Cheap, reusable rocket technology could mark the beginning of a new ‘space race,’ with the frequency of launches potentially increasing from hundreds per year to hundreds per day.
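To put those numbers in perspective, here's a rough back-of-the-envelope comparison using the approximate figures quoted above (the ~$10 per kilogram figure is an aspirational target for future reusable designs, not an achieved price):

```python
# Rough comparison of launch costs per kilogram, using the approximate
# figures quoted above (illustrative only, not precise estimates).
shuttle_cost_per_kg = 50_000      # NASA Space Shuttle, ~USD per kg
falcon_heavy_cost_per_kg = 1_500  # SpaceX Falcon Heavy, ~USD per kg
reusable_target_per_kg = 10       # SpaceX's aspirational target, ~USD per kg

print(f"Reduction so far: ~{shuttle_cost_per_kg / falcon_heavy_cost_per_kg:.0f}x")
print(f"Reduction if the target is hit: ~{shuttle_cost_per_kg / reusable_target_per_kg:,.0f}x")
```

In other words, costs per kilogram have already fallen by a factor of roughly 30, and hitting the stated target would mean a further drop of around two orders of magnitude.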
It’s worth taking seriously some of the crazier ways this could play out. If things go well, we could choose for our time on Earth to become just the first stage of a journey into space. We might eventually make use of an almost limitless supply of material resources orbiting the Sun, and begin to establish self-sustaining communities living beyond Earth. In the longer run, very large numbers of people could live beyond our home planet. A spacefaring future for humanity would make us resilient to disasters local to one planet, and it could also become far more varied and expansive than an Earthbound future, in ways that are hard to imagine now.
Ultimately, the sheer scale of the accessible universe makes the question of what we eventually do with and within it enormously important. If the human story ends before spreading beyond Earth, perhaps we would have missed out on almost all the valuable things we could have reached.
But it’s also easy to see things going wrong. The satellites in ‘low-Earth orbit’ are critical infrastructure, but could be unusually easy to disrupt or disable. Competition over outer space, or just ambiguity over issues of liability, could increase the risk of a great power conflict or lead to an anti-satellite arms race. Different actors unilaterally competing to land the first people on Mars, or build the first permanent structures on the Moon, could cement uncooperative norms around exploring space that persist long into the future. And if different groups rush to independently settle beyond Earth in a relatively ungoverned way, it could be far more difficult to get humanity-wide agreement — such as to prohibit a powerful weapons technology, or to pursue a period of ‘reflection’ before we embark on a path that would be hard to reverse.
But simply trying to delay potentially risky moves in space probably isn’t the only strategy, and sometimes it might even be a bad one. As with the development of artificial intelligence, serious delays sometimes require an infeasible amount of multilateral agreement — because private actors are incentivised by a growing private space industry, and national actors by concerns about reconnaissance and security capabilities. Some of the best ways to make a positive difference will instead be to help navigate away from the risks (and toward the potential benefits), given whatever rate of progress the world is making.
An especially promising way to do this could be to decide in advance how to govern activities in outer space — such as how to handle many times more space debris, how to resolve disputes over property or allocate property in advance, and how to restrict the use of weapons in space.[1] Because these sorts of governance mechanisms are currently lagging far behind this new race for space driven by rapidly falling launch costs, now could be an unusually influential time for humanity’s future in space.
Why could this be a pressing problem?
Almost all of humanity’s long-run future could lie in space — it could go well, but that’s not guaranteed
If the cost of travelling to other planetary bodies continues the trend in the chart above and falls by an order of magnitude or so, then we might begin to build increasingly permanent and self-sustaining settlements. Truly self-sustaining settlements are a long way off, but both NASA and China have proposed plans for a Moon base, and China recently announced plans to construct a base on Mars.
Building a semi-permanent presence on Mars will be very, very hard. Mars’s atmosphere is around 1% as dense as Earth’s, and the surface receives around 50 times the amount of radiation that we get on Earth. Plus, Mars’s soil is toxic to humans and unsuitable for growing plants without being decontaminated. Initially, the base will require a continual supply of resources, parts, fuel, and people from Earth.
But if we wanted to, it looks like we could eventually get ambitious. We might reach material self-sufficiency, including terraforming Mars to the point at which the atmosphere is breathable. The point at which such settlements become self-sustaining is a critical one, because that’s roughly the point where they might be useful for recovering from a catastrophe on Earth. But we're not at all close yet: building a self-sustaining settlement will be slow, expensive, and brutally difficult.
People (in some form) might one day also travel and even settle beyond the solar system. The technology doesn’t yet exist, so it would be naive to try describing it in detail. But we don’t yet know of any insurmountable obstacles — such as from the laws of physics, costs, or time constraints — to spreading very far through space, and even to other galaxies. As hazy as this all is, we shouldn’t rule out the possibility that people might one day spread very widely throughout space, such that almost all the people who live in the future eventually live beyond Earth.
Without intervention, the Earth will likely be rendered uninhabitable within about one billion years, while stars will continue to be capable of supporting life for at least tens of thousands of times longer.[2] So a spacefaring future for humanity might not only support far more people than an Earthbound future, but it could last far longer.
This could be very good, or very bad.
If all goes well, with abundant energy, resources, and literal space, our descendants might one day realise grand, desirable futures — some very hard to imagine from our perspective.
On the other hand, life beyond Earth might be so dominated by competition, conflict, disagreement, or adversity that it could end up being bad overall. Maybe the openness of space would tip the balance in favour of military offence over defence, or reward the greediest pioneers, whose aim is just to lay claim to more territory than their neighbours. Unlike on Earth, it might be literally impossible to escape to a friendlier regime, and the large distances between groups could mean far less natural pressure to conform to the cooperative norms of the ‘neighbours,’ so it could be easier for values to drift in a bad direction.
Whether a future in space goes well could significantly depend on how it’s governed now
There are some reasons to expect that a lot of the variance between these good and bad outcomes could depend on how space ends up being governed.
We have examples of good and bad governance on Earth, and the quality of life under those regimes normally depends closely on the quality of governance. In particular, by providing forums for coordinating toward shared goals, and making the threat of a collective response to aggression more credible, effective international and bilateral governance seem to have reduced the risk of serious conflict between countries.
[...]
Bad governance could also lead to some of the worst imaginable futures, especially if you have some reason to think totalitarian regimes might be easier to maintain beyond Earth.
Either way, it looks like governance in space will go a long way to determining how well or badly space settlements turn out. But that doesn't show that space governance is now a pressing problem. For that, we'd need to think that how space is governed in the longer run could end up depending in some predictable way on how it’s governed in the next few decades. This is far from certain — but if it's true, then shaping space governance now could matter enormously. So how might it work?
In one scenario, we might gradually spread to self-sustaining settlements beyond Earth, over the course of a century or longer. In this case, it could be informative to look at how some countries’ early constitutions have influenced their trajectory as they grew much larger over decades or centuries.[4]
There’s another scenario in which things happen much faster, and more dramatically. This is because it may be possible to build small and extremely fast-travelling probes, which, once launched, could build settlements based on the blueprints we give them. In fact, they could replicate indefinitely, similar to how an acorn turns soil and sunlight into an oak tree, which then produces many acorns. Because of this self-replicating possibility, the probes we launch in this ‘explosive’ period might eventually settle most of the places that will ever be settled, which is perhaps a significant fraction of the entire accessible universe.[5] Yet, because we could launch many of these probes all at once, this could all happen very quickly — and it could be difficult to reverse our decisions afterwards.
Of course, biological humans wouldn’t come along for the ride in this scenario. But these probes might carry the ingredients needed to run or recreate human minds, by storing or instantiating them digitally.[6]
If something even resembling this scenario plays out, it would be a pivotal moment for humanity. We could determine the values and governance structures that get sent from Earth, and those things might then become ‘locked in’ for an extraordinarily long period of time. Holden Karnofsky[7] writes in his blog:
[…] whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.
If either of these scenarios happens this century, it seems important to begin thinking seriously about how to positively influence the process. For example:
- How should ownership and property be allocated?
- What could ideal constitutions (or similar) look like?
- What rules do we want in place for sending instructions to unmanned spacecraft after they’ve left Earth?
If these scenarios don't sound wildly implausible to you — and you think that advance thinking could meaningfully improve the odds that they go well — then you might think that this could be the most important way in which space governance ends up mattering.
[...] This is of course a very speculative case. Next, we’ll consider a more immediate and concrete reason for working on space governance.
Effective arms control in space could reduce the risk of conflict back on Earth
More and more critical infrastructure is getting placed in orbit, while governance frameworks for conflict in space remain weak and ambiguous. Nearly 4,000 satellites already operate in a region called low-Earth orbit. We rely on this network of satellites for communications, GPS, remote sensing, and imaging useful for disaster relief.
A situation where satellites are especially vulnerable to attack wouldn’t just be bad because we could lose civilian infrastructure. Reconnaissance satellites are also used by militaries for early warnings of ballistic missile launches, detecting nuclear explosions, and spotting aggressive movements with photography or radar. If these information-gathering satellites get disabled, the country relying on them would suddenly be far less certain about whether and when they are under attack, making escalation from perceived provocation more likely.
At the same time, India, China, the US, and Russia all appear to be developing some form of anti-satellite weapon system — meaning ground-to-space, space-to-space, or cyber weapons designed to disable enemy satellites (civilian or military). Proposed international frameworks for either banning or controlling these new weapons have not yet materialised. This combination of fragility and uncertainty could make conflict in space especially easy to trigger, in turn increasing the risk of conflict back on Earth. Such a conflict would most likely take place between great powers, given their disproportionate presence in space.
We know that disarmament agreements can work: for instance, the START and New START treaties between the United States and Russia successfully reduced and limited stockpiles of strategic nuclear weapons. So a promising way to reduce the risk of great power conflict could be to work on pushing for disarmament agreements in space — with a special focus on rules against targeting enemy reconnaissance satellites.[8]
Now could be an especially good opportunity to influence space governance
But how easy is it to shape space governance? In fact, it looks like there could be some unusually big opportunities to do so now and in the near future.
Current international governance frameworks for space are sparse and out of date, and because private companies are suddenly getting involved in space, important decisions are likely to be made soon. The most significant international agreement to date is the Outer Space Treaty, which entered into force in 1967 — more than half a century ago.[9] There was an attempt to get countries to agree on some fairly demanding rules in 1979 with the Moon Agreement, but it almost entirely failed.[10] [11]
[...]
Meanwhile, the private space industry looks set to more than double in size in the next few decades. Today, it’s worth just over $300 billion, and global government space budgets total around $70 billion.[12] The investment bank Morgan Stanley anticipates that the private space industry could be worth more than $1 trillion by 2040.[13]
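As a rough sanity check on what that projection implies, here's a sketch that assumes the industry is worth about $300 billion at the time of writing and treats the horizon as roughly 18 years (~2022 to 2040); both assumptions are illustrative:

```python
# Implied average annual growth rate if the private space industry grows
# from roughly $300 billion today to $1 trillion by 2040.
# The 18-year horizon (~2022 to 2040) is an assumption for illustration.
current_value = 300e9
projected_value = 1e12
years = 18

implied_growth = (projected_value / current_value) ** (1 / years) - 1
print(f"Implied growth rate: ~{implied_growth:.1%} per year")  # roughly 7% per year
```

That is, the Morgan Stanley projection corresponds to sustained growth of roughly 7% per year.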
Several major space governance agreements are already being discussed, and stand some chance of being adopted within a decade. For instance, the proposed Prevention of an Arms Race in Outer Space (PAROS) treaty is currently being discussed in the Conference on Disarmament, a forum in the United Nations. In 2020, NASA and the US Department of State announced the Artemis Accords, an effort, begun outside the UN, to establish an international framework for cooperation on space exploration.
But perhaps the most important (and most urgent) news is that in late 2021 the Secretary General of the United Nations announced a major new agenda,[14] which includes a proposed “Summit of the Future” conference to take place in 2023. As part of that conference, the agenda calls for “a multi-stakeholder dialogue on outer space [...] bringing together Governments and other leading space actors,” whose aim would be to “seek high-level political agreement on the peaceful, secure and sustainable use of outer space.” It also notes that existing international arrangements provide “only general guidance” on “the permanent settlement of celestial bodies and responsibilities for resource management” — implying this should be corrected. It seems likely that the organisers will announce a call for proposals sometime before this 2023 conference, in order to collect ideas from the wider research community. Perhaps we’ll see a major new international space treaty emerge from this or subsequent conferences — and perhaps you could help shape it.
[In short,] the ratio of likely importance to actual funding and activity could be unusually high for space governance right now — meaning early work could be more impactful than work later.
Acting early may be especially important for arms control in space. In general, it should be easier to get agreement when fewer actors have capabilities for a given weapons technology, because there are fewer competing interests to coordinate. Likewise when those actors have invested less in developing weapons capabilities, since they have less to lose by agreeing to limits on their use. Effective arms control may be easier still if it is entirely preemptive: if a weapon hasn’t yet been built or tested by any actor.[15]
[...]
There are identifiable areas to make progress on
Avoiding premature lock-in
Humanity should be aiming to keep its (positive) options open — we have very little idea about what kind of future on Earth or in space would be best. Embarking on ambitious projects in space might ‘lock in’ decisions that turn out to be misguided. Plausibly, we should therefore make time for a period of reflection before embarking on potentially irreversible projects to spread through space.
Furthermore, without any forethought or governance, humanity’s long-run future in space might become a kind of uncoordinated ‘free for all’ — where the most expansionist groups eventually dominate. Like extinction, this kind of fragmented future could be a form of lock-in — harder to escape from than to enter into.
This suggests we should try to research ways to make sure that grand projects in space can be changed or reversed if it becomes clear they’re heading in a bad direction.
[...]
Avoiding weaponised asteroid deflection
When thinking about risks from space, you’ll likely think of asteroids.
[Fortunately, compared to other existential threats,] the risk from asteroids this century appears to be very low[16][, and unusually well managed — Toby Ord writes in The Precipice: “no other existential risk is as well handled as that of asteroids or comets”.]
But if we do develop asteroid defence systems, we should also handle them carefully: any technology capable of deflecting an asteroid away from a collision course with Earth would also make it easier to divert one toward Earth [...] As Carl Sagan and Steven J. Ostro write: “premature deployment of any asteroid orbit-modification capability [...] may introduce a new category of danger that dwarfs that posed by the objects themselves.”
To address this worry, actors might:
- Agree on a monitoring network for asteroid deflection.
- Regulate technology with the potential to divert objects in space.
- Consider mandating liability insurance (similar to proposals in the context of risky biological research).
For more detail, see this longer post about risks from asteroids [also on the EA Forum].
Setting up rules for space debris to keep low-Earth orbit usable
Many of the roughly 6,000 satellites in low-Earth orbit are no longer operational — they have become fast-travelling pieces of junk. Smaller pieces of debris are created when flecks of paint come loose, when derelict spacecraft fragment into small pieces, or when particles of fuel are expelled from rocket motors.[18]
The result is a cloud of orbital debris, each piece flying through space around 10 times faster than a bullet. Space debris is already making space missions costlier and riskier,[19] and the number of satellites in orbit is set to more than triple by 2028.
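To see why even tiny fragments matter, here's a back-of-the-envelope kinetic-energy comparison. The one-gram mass and the two speeds below are illustrative ballpark figures, not measurements; the key point is that kinetic energy scales with the square of velocity, so an object moving roughly 10 times faster carries roughly 100 times the energy per unit mass:

```python
# Back-of-the-envelope kinetic energy of a 1-gram fragment at bullet speed
# versus roughly orbital speed. All figures are illustrative ballpark values.
mass_kg = 0.001           # 1 gram, e.g. a fleck of paint or metal
bullet_speed_ms = 800     # m/s, a typical rifle bullet
debris_speed_ms = 7_800   # m/s, roughly orbital velocity in low-Earth orbit

def kinetic_energy(mass: float, speed: float) -> float:
    return 0.5 * mass * speed ** 2

print(f"At bullet speed:  ~{kinetic_energy(mass_kg, bullet_speed_ms):,.0f} J")
print(f"At orbital speed: ~{kinetic_energy(mass_kg, debris_speed_ms):,.0f} J "
      f"(~{(debris_speed_ms / bullet_speed_ms) ** 2:.0f}x more)")
```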
The dangers posed by orbital debris are mostly negative externalities — like how dumping chemical waste in a river affects not just you, but everyone downstream. In such cases, governance could impose stronger incentives to clean up debris — and to develop the technology to do so.
Relatedly, there is no international authority that both monitors and enforces traffic regulations in orbit. There is no major reason why such an authority couldn’t be established soon — everyone would benefit from having some rules that reduce collisions (as we’ve seen in civil aviation). Efforts to make sure this gets implemented well could be valuable.
Figuring out how to distribute resources and property
It might be worthwhile to begin thinking about mechanisms for deciding who owns what in space.
Failing to have clear rules in advance could encourage risky and competitive behaviour, as players race to grab space and resources in the absence of any sort of governance. For instance, we could eventually mine resources from asteroids and the Moon. Mandating that nearly all resources be shared would leave little to no incentive to reach them in the first place, but some clear rules for distributing especially large ‘windfalls’ of wealth (as has been suggested for AI development) could be valuable.
What are the major arguments against this problem being pressing?
Abbreviated sections —
- Maybe we should wait before locking in decisions about space
- It could be very hard to influence
- Maybe we don't need strict space governance
Early efforts could be washed out later
From a longtermist perspective, the strongest case for space governance may be the idea that through early action, we can positively influence how space ends up being governed in a long-lasting way, or make sure the wrong values aren't locked in.
But it may well be that early governance initiatives simply get ‘washed out’ by later decisions, such that the early work ends up having little influence on the way things eventually turn out. And the further away you think serious efforts at permanently settling space are, the more likely this seems. Ultimately, we’re not sure exactly how long to expect early efforts to last.
You don’t need to think the likelihood of significant and very long-lasting effects is zero to think you shouldn't work on space governance — just that the chance of washout makes the case significantly weaker than the case for other pressing problems.
[...]
Positively influencing the arrival of transformative AI could be much more important
If some views about advanced AI are right, it could be much more pressing to work on making sure AI aligns with the right values — in part because it looks like the highest-stakes space scenarios (e.g. those involving rapid settlement) are most likely to involve advanced AI.
After the arrival of very powerful AI, problems in space governance could look very different. Perhaps the political order will have changed, or new space technologies will emerge very quickly.
Further, the arrival of transformative AI could cause wide-reaching social and governance change. This could make it especially likely that early work on space governance gets washed out.
Finally, you might reasonably expect transformative AI to arrive sooner than successful projects to build or settle widely beyond Earth, since these projects look very difficult without the kind of sophisticated and widespread automation of engineering that transformative AI would enable.
If this story is right, then it might be more worthwhile to work on positively shaping the development of AI instead.
- ^
One reason this might be important to control is that weapons launched from space can arrive with less warning, which narrows the window to respond to provocations (thereby reducing deterrence) and increases the likelihood of false alarms.
- ^
Note that if humanity eventually settles widely in space, it seems very likely that almost everyone will live in artificial structures rather than on planetary surfaces.
- ^
I've used "[...]" to indicate where I have cut sections from the original problem profile.
- ^
However, note that constitutions do not tend to last nearly as long as the Constitution of the United States. So you should probably start off very sceptical that a ‘space constitution’ written today will survive long enough to matter.
- ^
This scenario seems most likely if advanced artificial intelligence dramatically speeds up the rate of technological progress.
- ^
Holden Karnofsky writes about the possibility of “digital people” here.
- ^
For transparency: Karnofsky is the chief executive officer of the Open Philanthropy Project, which is a major donor to 80,000 Hours as well as the Future of Humanity Institute, where I (Fin Moorhouse) currently work.
- ^
Another promising idea is to use (internationally operated) satellites to help verify arms control agreements, such as prohibitions on the use of anti-satellite weapons in outer space. This is the central idea of the Canadian PAXSAT proposal.
- ^
The primary focus of the Outer Space Treaty (OST) is arms control: it bars parties to the treaty from placing weapons of mass destruction anywhere in outer space, and prohibits weapons testing, military manoeuvres, and the establishment of military bases on the Moon and other celestial bodies. The other focus of the OST is on questions of claiming territory and expropriating resources. Article II states: “Outer space, including the moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.” But the term “national appropriation” isn’t defined in the treaty. In particular, the OST is ambiguous over whether resources from celestial bodies can be appropriated by non-state actors. Article I states that the “use of outer space [...] shall be carried out for the benefit and in the interests of all countries,” but this alone adds little clarity. The OST does state that non-governmental entities “shall require authorization and continuing supervision by the appropriate State Party to the Treaty”.
- ^
The Moon Agreement of 1979 set out to establish clearer and more demanding guidelines around using resources on the Moon and other celestial bodies, calling for an international regime to “govern the exploitation of the natural resources of the moon as such exploitation is about to become feasible.” But the agreement stipulated that resources appropriated from space shall be the “common heritage of mankind.” Though this clause was also left ambiguous, it suggested too strongly a regime in which rewards must be fully shared. As a result, no major spacefaring nation has ratified the Moon Agreement.
- ^
The other major pieces of international space law are the Rescue Agreement, the Liability Convention, and the Registration Convention.
- ^
See also: Global Space Budgets – A Country-level Analysis and Global government space budgets continues multiyear rebound (a commentary on Euroconsult’s Government Space Programs 2019 report).
- ^
This rapid projected growth is likely to be driven in the short term by demand for satellite infrastructure, especially for providing internet access. Today, the satellite industry makes up more than 75% of the space economy (and commercial human spaceflight much less than 1%).
- ^
You can read a summary for an audience interested in existential risk here.
- ^
One example could be space laser weapons: destructive lasers beamed through space — either attached to satellites and aimed at ground targets, or vice-versa — capable of disabling reconnaissance satellites or intercontinental ballistic missiles mid-flight. Such capabilities could be destabilising, because they could increase the chance of a preemptive attack against the country that developed them — a worry that was raised as early as 1988, but could remain relevant. But lasers are just an illustrative example of the point: that falling costs to access space could open up the possibility of new kinds of weapons technology — some potentially destabilising — suggesting we should consider preemptive arms control for those technologies.
- ^
See also Toby Ord, The Precipice (2020) p. 71, Table 3.1.
- ^
This ‘dual-use’ concern mirrors other kinds of projects aimed at making us safer, but which pose their own risks, such as gain-of-function research on diseases.
- ^
Debris roughly a millimetre in diameter represents the greatest mission-ending risk to most satellites in low-Earth orbit. This is because even tiny pieces of debris are travelling fast enough to cause serious damage, and most pieces of debris are very small.
- ^
In 2021, a piece of space debris left a hole in a robotic arm attached to the International Space Station (ISS). A few months later, the ISS swerved to avoid a fragment of a US launch vehicle.
- ^
Such as ‘existential risk,’ as in these remarks of the Secretary General in 2021.
Hi! I’m an aerospace engineer at the Bay Area startup Xona Space Systems & a big fan of Effective Altruism. Xona basically works on creating a next-generation, commercial version of GPS. Before that I helped build, launch, and operate a pair of cubesats at a small company called SpaceQuest, and before that I got a master’s degree at CU Boulder. I’ve also been a longtime fan of SpaceX, Kerbal Space Program, and hard sci-fi.
I think this is a good writeup that does a pretty good job of disentangling many of the different EA-adjacent ideas that touch on aerospace topics. In this comment I will talk about different US government agencies and why I think US policy is probably a more actionable space-governance area than broad international agreements; hopefully I’ll get around to writing future comments on other space topics (about the Long Reflection, the differences between trying to influence prosaic space exploration vs von Neumann stuff, about GPS and Xona Space Systems, about the governance of space resources, about other areas of overlap between EA and space), but we'll see if I can find the time for that...
Anyways, I’m surprised that you put so much emphasis on international space agreements through the UN[1], and relatively little on US space policy. Considering that the USA has huge and growing dominance in many space areas, it’s pretty plausible that US laws will be comparably influential to UN agreements even in the long-term future, and certainly they are quite important today. Furthermore, US regulations will likely be much more detailed / forceful than broad international agreements, and US space policy might be more tractable for at least American EAs to influence. For example, I think that the Artemis Accords (signed by 19 countries so far, which represent 1601 of the 1807 objects launched into space in 2021) will probably be more influential, at least in the near term, than any limited terms that the upcoming UN meeting could get universal agreement on — the UN is not about to let countries start claiming exclusive-economic-zone-esque territory on other planets, but the Artemis Accords arguably do this![2]
With that in mind, here is an incomplete list of important space-related US agencies and what they do. Some of these probably merit inclusion in your list of “key organizations you could work for”:
In a similar spirit of “paying attention to the concrete inside-view” and recognizing that the USA is by far the leader in space exploration, I think it’s further worth paying attention to the fact that SpaceX is very well-positioned to be the dominant force in any near-term Mars or Moon settlement programs. Thus, influencing SpaceX (or a handful of related companies like Blue Origin) could be quite impactful even if this strategy doesn’t feel as EA-ish as doing something warm and multilateral like helping shape a bunch of EU rules about space resources:
Universal UN treaties, like those on nuclear nonproliferation and bioweapons, seem best for when you are trying to eliminate an x-risk by getting universal compliance. Some aspects of space governance are like this (like stopping someone from launching a crazy von Neumann probe or ruining space with ASAT attacks), but I see many space governance issues which are more about influencing the trajectory taken by the leader in space colonization (i.e., SpaceX and the USA). Furthermore, many agreements on things like ASAT could probably be best addressed in the beginning with bilateral START-style treaties, hoping to build up to universal worldwide treaties later.
The Accords have deliberately been pitched as a low-key thing, like “hey, this is just about setting some common-sense norms of cooperation and interoperability, no worries”, but the provisions about in-space resource use, and especially the establishment of “safety zone” perimeters around nations’ launch/landing sites, are in the eyes of many people basically opening the door towards claiming national territory on celestial bodies.
The process of getting spectrum is currently the riskiest and most onerous part of most satellite companies’ regulatory-approval journeys. Personally, I think that this process could probably be much improved by switching out the current paperwork-and-stakeholder-consultation-based system for some fancy mechanism that might involve auctioning self-assessed licenses or something. But fixing the FCC’s spectrum-licensing process is probably not super-influential on the far-future, so whatever.
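To make that a bit more concrete, here is a minimal sketch of one version of a self-assessed (“Harberger-style”) license: the holder declares a value for their license, pays a recurring tax proportional to that declaration, and must sell to anyone who offers more than the declared value. This is purely illustrative of the kind of mechanism I'm gesturing at; the class name, tax rate, and dollar figures are made up, and it's not anything the FCC actually does:

```python
from dataclasses import dataclass

# Minimal sketch of a self-assessed ("Harberger-style") license: the holder
# declares a value, pays a recurring tax on that declaration, and must sell
# to any bidder offering more than the declared value.
# Purely illustrative: the names, tax rate, and dollar figures are made up.

@dataclass
class SpectrumLicense:
    holder: str
    declared_value: float  # holder's self-assessed valuation, in USD

    def annual_tax(self, tax_rate: float = 0.07) -> float:
        """Tax owed each year, proportional to the self-assessed value."""
        return self.declared_value * tax_rate

    def bid(self, bidder: str, offer: float) -> None:
        """A bid above the declared value forces a sale at the offered price."""
        if offer > self.declared_value:
            print(f"{self.holder} must sell to {bidder} for ${offer:,.0f}")
            self.holder = bidder
            self.declared_value = offer  # the new holder can re-declare later
        else:
            print(f"Bid of ${offer:,.0f} rejected "
                  f"(declared value is ${self.declared_value:,.0f})")


licence = SpectrumLicense(holder="SatCo A", declared_value=5_000_000)
print(f"Annual tax owed: ${licence.annual_tax():,.0f}")   # $350,000 at a 7% rate
licence.bid("SatCo B", offer=6_000_000)                   # forces a sale
```

The tax discourages over-declaring (you pay more), while the forced sale discourages under-declaring (you risk losing the license cheaply), which is the basic incentive such a mechanism is meant to create.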
This (and your other comments) is incredibly useful, thanks so much. Not going to respond to particular points right now, other than to say many of them stick out as well worth pursuing.
I feel like the discussion of AI is heavily underemphasized in this problem profile (in fact, in this post it is the last thing mentioned).
I used to casually think "sure, space governance seems like it could be a good idea to start on soon; space exploration needs to happen eventually, I guess," but once I started to consider the likelihood and impact of AI development within the next 200 or even ~60 years, I very heavily adjusted my thinking towards skepticism/pessimism.
That question of AI development seems like a massive gatekeeper/determinant to this overall question: I'm unclear how any present efforts towards long-term space governance and exploration matter in the case where AI 1) is extraordinarily superintelligent and agentic, and 2) operates effectively as a "singleton" -- which itself seems like a likely outcome from (1).
Some scenarios that come to my mind regarding AI development (with varying degrees of plausibility):
Ultimately, I'd really like to see:
This sentiment seems like a fully general objection to every intervention not directly related to AI safety (or TAI).
As presented currently, many TAI or AI-safety-related scenarios blow out all other considerations—it won't matter how far toward Alpha Centauri you get with prosaic spaceships, TAI will track you down.
It seems like you would need to get "altitude" to give this consideration proper thought (pardon the pun). My guess is that the OP has done that.
This is partially an accurate objection (i.e., I do think that x-risks and other longtermist concerns tend to significantly outweigh near-term problems such as in health and development), but there is an important distinction to make with my objections to certain aspects of space governance:
Contingent on AI timelines, there is a decent chance that none of our efforts will even have a significantly valuable near-term effect (i.e., we won't achieve our goals by the time we get AGI). Consider the following from the post/article:
Suppose that it would take ~80 years to develop meaningful self-sustaining settlements on Mars without AGI or similar forms of superintelligence. But suppose that we get AGI/superintelligence in ~60 years: we might get misaligned AGI, in which case all that progress (and humanity) is erased and the project fails to achieve its goals; we might create aligned AGI, which might obsolesce all ~60 years of progress within 5 or so years (I would imagine even less time); or we might get something unexpected or in between, in which case maybe it does matter?
In contrast, at least with health and development causes you can argue "I let this person live another ~50 years... and then the AGI came along and did X."
Furthermore, this all rests on developing self-sustaining settlements being a valuable endeavor, which I think is often justified with the idea that we'll use those settlements for longer-term plans and experimentation in space exploration, which requires an even longer timeline.
Thanks for this, I think I agree with the broad point you're making.
That is, I agree that basically all the worlds in which space ends up really mattering this century are worlds in which we get transformative AI (because scenarios in which we start to settle widely and quickly are scenarios in which we get TAI). So, for instance, I agree that there doesn't seem to be much value in accelerating progress on space technology. And I also agree that getting alignment right is basically a prerequisite to any of the longer-term 'flowthrough' considerations.
If I'm reading you right I don't think your points apply to near-term considerations, such as from arms control in space.
It seems like a crux is something like: how much precedent-setting or preliminary research now on ideal governance setups doesn't get washed out once TAI arrives, conditional on solving alignment? And my answer is something like: sure, probably not a ton. But if you have a reason to be confident that none of it ends up being useful, it feels like that must be a general reason for scepticism about any kind of effort at improving governance, or even values, on the grounds that it would be rendered moot by the arrival of TAI. And I'm not fully sceptical about those efforts.
Suppose before TAI arrived we came to a strong conclusion: e.g. we're confident we don't want to settle using such-and-such a method, or we're confident we shouldn't immediately embark on a mission to settle space once TAI arrives. What's the chance that work ends up making a counterfactual difference, once TAI arrives? Not quite zero, it seems to me.
So I am indeed on balance significantly less excited about working on long-term space governance things than on alignment and AI governance, for the reasons you give. But not so much that they don't seem worth mentioning.
This seems like a reasonable point, and one I was/am cognisant of — maybe I'll make an addition if I get time.
(Happy to try saying more about any of the above if useful)
That is mostly correct: I wasn't trying to respond to near-term space governance concerns, such as how to prevent space development or space-based arms races, which I think could indeed play into long-term/x-risk considerations (e.g., undermining cooperation in AI or biosecurity), and may also have near-term consequences (e.g., destruction of space satellites which undermines living standards and other issues).
To summarize the point I made in response to Charles (which I think is similar, but correct me if I'm misunderstanding): I think that if an action is trying to improve things now (e.g., health and development, animal welfare, improving current institutional decision-making or social values), it can be justified under neartermist values (even if it might get swamped by longtermist calculations). But it seems that if one is trying to figure out "how do we improve governance of space settlements and interstellar travel that could begin 80–200 years from now," they run the strong risk of their efforts having effectively no impact on affairs 80–200 years from now because AGI might develop before their efforts ever matter towards the goal, and humanity either goes extinct or the research is quickly obsolesced.
Ultimately, any model of the future needs to take into account the potential for transformative AI, and many of the pushes such as for Mars colonization just do not seem to do that, presuming that human-driven (vs. AI-driven) research and efforts will still matter 200 years from now. I'm not super familiar with these discussions, but to me this point stands out so starkly as 1) relatively easy to explain (although it may require introductions to superintelligence for some people); 2) substantially impactful on ultimate conclusions/recommendations, and 3) frequently neglected in the discussions/models I've heard so far. Personally, I would put points like this among the top 3–5 takeaway bullet points or in a summary blurb—unless there are image/optics reasons to avoid doing this (e.g., causing a few readers to perhaps-unjustifiably roll their eyes and disregard the rest of the problem profile).
This is an interesting point worth exploring further, but I think that it's helpful to distinguish—perhaps crudely?—between two types of problems:
It seems to me that an aligned superintelligence would very likely be able to obsolesce every effort we make towards the first problem fairly quickly: if we can design a human-aligned superintelligent AI, we should be able to have it automate or at least inform us on everything from "how do we solve this engineering problem" to "will colonizing this solar system—or even space exploration in general—be good per [utilitarianism/etc.]?"
However, making sure that humans care about other extra-terrestrial civilizations/intelligence—and that the developers of AI care about other humans (and possibly animals)—might require some preparation such as via moral circle expansion. Additionally, I suppose it might be possible that a TAI's performance on the first problem is not as good as we expect (perhaps due to the second problem), and of course there are other scenarios I described where we can't rely as much on a (singleton) superintelligence, but my admittedly-inexperienced impression is that such scenarios seem unlikely.
Hi Fin!
This is great. Thank you for writing it up and posting it! I gave it a strong upvote.
(TLDR for what follows: I think this is very neglected, but I’m highly uncertain about tractability of formal treaty-based regulation)
As you know, I did some space policy-related work at a think tank about a year ago, and one of the things that surprised us most is how neglected the issue is — there are only a handful of organizations seriously working on it, and very few of them are the kinds of well-connected and -respected think tanks that actually influence policy (CSIS is one). This is especially surprising because — as Jackson Wagner writes below — so much of space governance runs through U.S. policy. Anyway, I think that’s another point in favor of working on this!
As I think I mentioned when we talked about space stuff a little while ago, I’m a bit skeptical about the tractability of “traditional” (i.e. formal, treaty-based) arms control. You note some of the challenges in the 80K version of the write-up. Getting the major powers to agree to anything right now, let alone something as sensitive as space tech, seems unlikely. Moreover, verification is difficult and cheating is easy, as with all dual-use technology. Someone can come up with a nice “debris clean up” system that just happens to also be a co-orbital ASAT, for example.
But I think there are other mechanisms for creating “rules of the orbit” — that’s the phrase Simonetta di Pippo, the director of UNOOSA, used at a workshop I helped organize last year. (https://global.upenn.edu/sites/default/files/perry-world-house/Dipippo_SpaceWorkshop.pdf)
Cyber is an example where a lot of actors have apparently decided that treaty-based arms control isn’t going to cut it (in part for political reasons, in part because the tech moves so fast), but there are still serious attempts at creating norms and regulation (https://carnegieendowment.org/2020/02/26/cyberspace-and-geopolitics-assessing-global-cybersecurity-norm-processes-at-crossroads-pub-81110). That includes standard setting and industry-driven processes, which feel especially appropriate in space, where private actors play such an important role. We have a report on autonomous weapons and AI-enabled warfare coming out soon at Founders Pledge, and I think that’s another space where people put too much emphasis on treaty-based regulation and neglect norms and confidence building measures for issues where great powers can agree on risk reduction.
Again, I think this is a great write up, and love that you are drawing attention to these issues. Thank you!
Following up my earlier comment with a hodgepodge of miscellaneous speculations and (appropriately!) leaving the Long Reflection / Von-Neumann stuff for later-to-never. Here are some thoughts, arranged from serious to wacky:
I forget where, but I've heard criticisms of Elon Musk that he is advancing our expansion into space while not solving many of Earth's current problems. It seems logical that if we still have many problems on Earth, such as inequity, those problems will get perpetuated as we expand into space. Also, maybe it's possible that other smaller-scale problems that we don't have effective solutions for would become enormously multiplied as we expand into space (though I am not sure what an example of this would be). On the other hand, maybe the development of space technology will be the means through which we stumble onto solutions to many of the problems that we currently have on Earth.
Getting along with any possible extraterrestrial civilizations would be a concern.
Use of biological weapons might be more attractive because the user can unleash them on a planet and not worry about them spilling over to themselves and their group.
A state, group, or individual might stumble upon an alien civilization and wipe it out, preventing anyone else from even knowing it existed.
A stray thought (I'll stumble to the Google Doc with it in a moment) regarding minimal standards of operation for space colonies' constitutions: "If a government does not allow its people to leave, that is what makes it a prison."