No longer endorsed.
Imagine it's 2030 or 2040 and there's a catastrophic great power conflict. What caused it? Probably AI and emerging technology, directly or indirectly. But how?
I've found almost nothing written on this. In particular, the relevant 80K and EA Forum pages don't seem to have relevant links. If you know of work on how AI might cause great power conflict, please let me know. For now, I'll start brainstorming. Specifically:
- How could great power conflict affect the long-term future? (I am very uncertain.)
- What could cause great power conflict? (I list some possible scenarios.[1])
- What factors increase the risk of those scenarios? (I list some plausible factors.)
Epistemic status: brainstorm; not sure about framing or details.
I. Effects
Alternative formulations are encouraged; thinking about risks from different perspectives can help highlight different aspects of those risks. But here's how I think of this risk:
Emerging technology enables one or more powerful actors (presumably states) to produce civilization-devastating harms, and they do so (either because they are incentivized to or because their decisionmaking processes fail to respond to their incentives).[2]
Significant (in expectation) effects of great power conflict on the long-term future include:
- Risk of human extinction
- Risk of civilizational collapse
- Effects on states' relative power
- Other effects on the time until superintelligence and the environment in which we achieve superintelligence
Human extinction would be bad. Civilizational collapse would be prima facie bad, but its long-term consequences are very unclear. Effects on relative power are difficult to evaluate in advance. Overall, the long-term consequences of great power conflict are difficult to evaluate because it is unclear what technological progress and AI safety look like in a post-collapse world or in a post-conflict, no-collapse world.
Current military capabilities don't seem to pose a direct existential risk. More concerning for the long-term future are future military technologies and side effects of conflict, such as on AI development.
II. Causes
How could AI and the technology it enables lead to great power conflict? Here are the scenarios that I imagine, for great powers called "Albania" and "Botswana":
- Intentional conflict due to bilateral tension. In each of these scenarios, international hostility and fear are greater than in 2021, and domestic politics and international relations are more confusing and chaotic.
- Preventive attack. Albania thinks that Botswana will soon become much more powerful and that this would be very bad. Calculating that it can win—or accepting a large chance of devastation rather than simply letting Botswana get ahead—Albania attacks first.
- Seizing opportunity. An arms race is in progress. Albania thinks it has an opportunity to get ahead. Albania attempts to strike or sabotage Botswana's AI program or its military. Albania does not disable Botswana's military (either because it failed to or because it assumed Botswana would not launch a major counterattack anyway). Botswana retaliates.
- Diplomatic breakdown. Albania makes a demand or draws a line in the sand (legitimately, from its perspective). Botswana ignores it (legitimately, from its perspective). Albania attacks. Possible demands include, among others: stop building huge AI systems (and submit to external verification), or stop developing technology that threatens a safe first strike (and submit to external verification).
- Intentional conflict due to a single state's domestic political forces. These scenarios are currently difficult to imagine among great powers. But some researchers are worried about polarization and epistemic decline in the near future, which could increase this risk.
- Ambition. Albania hopes to dominate other states. Albania attacks.
- Hatred. A substantial fraction of Albanians despise Botswana, and the Albanian government's decisionmaking process empowers that faction. Albania attacks.
- Blame. Albania suffers an attack, leak, security breach, or embarrassment from one or more malcontents/spies/saboteurs/assassins/terrorists. Albania incorrectly blames Botswana — for rational reasons, for political convenience, or just due to bad epistemics. Albania attacks.
- Intentional conflict due to multi-agent forces. This scenario is currently difficult to imagine. But perhaps crazy stuff happens when power increases, relative power is unstable, technology confuses states, and memetic chaos reigns. Roughly, I imagine a multi-agent failure scenario like this:
- Offense outpaces defense. New technologies are leaked, are developed independently by many states, or cannot be kept secret. The capability to devastate civilization, which in 2021 was restricted to the major nuclear states, is held by many states. Even if none are malevolent, all are afraid, and domestic political forces (which are more chaotic than they were in 2021) make one or two states do crazy stuff.
- An accident. "If the Earth is destroyed, it will probably be by mistake."[3]
- Automatic counterattacks. AI, AI-enabled military technology, and the prospect of future advances foster chaos and uncertainty. International tension increases in general, and tension between Albania and Botswana increases in particular. Offensive capabilities increase and are on hair trigger.[4] Eventually there's an accident, miscommunication, glitch, or some anomaly resulting from multiple complex systems interacting faster than humans can understand. Albania automatically launches a "counterattack."
III. Risk factors
Great power conflict is generally bad, and we can list high-level scenarios to avoid, such as those in the previous section. But what can we do more specifically to prevent great power conflict?
Off the top of my head, risk factors for the above scenarios include:
- International cooperation/trust/unity/comity decreases (in general or between particular great powers)[5]
- Fear about other states' capabilities and goals increases (in general or between particular great powers)
- Chaos increases
- States' relative power is in flux and uncertain
- There is conflict (that could escalate), especially international violence or conquest, especially involving a great power (e.g., a great power annexes territory, or there is a proxy war)
- More states acquire devastating offensive capabilities beyond the power of any defensive capabilities (this needs nuance but is prima facie generally true)[6]
It also matters what and how regular people and political elites think about AI and emerging technology. Spreading better memes may be generally more tractable than reducing the risk factors above, because it's pulling the rope sideways, although the benefits of better memes are limited.
Finally, the same forces from emerging technology, international relations, and beliefs and modes of thinking about AI that affect great power conflict will also affect:
- How quickly superintelligence is developed
- The extent to which there is an international arms race
- Regulations and limits on AI, locally and globally
- Hardware accessibility
Interventions affecting the probability and nature of great power conflict will also have implications for these variables.
Please comment on what should be added or changed, and please alert me to any relevant sources you've found useful. Thanks!
My analysis is abstract. Consideration of more specific factors, such as what conflict might look like between specific states or involving specific technologies, is also valuable but is not my goal here. ↩︎
Adapted from Nick Bostrom's "The Vulnerable World Hypothesis," section "Type-2a." My definition includes scenarios in which a single actor chooses to devastate civilization; while this may not technically be great power conflict, I believe it is sufficiently similar that its inclusion is analytically prudent. ↩︎
Eliezer Yudkowsky's Cognitive Biases Potentially Affecting Judgment of Global Risks. ↩︎
Future weapons will likely be on hair trigger for the same reasons that nukes have been: swifter second-strike capabilities could help states counterattack and thus defend themselves better in some circumstances; a hair-trigger posture, being somewhat transparent, makes others less likely to attack; and there is emotional/psychological/political pressure to take attackers down with us. ↩︎
Currently the world doesn't include large, powerful groups, coordinated at the state level, that totally despise and want to destroy each other. If it ever does, devastation occurs by default. ↩︎
Another potential desideratum is differential technological progress. Avoiding military development unilaterally is infeasible, but perhaps we can avoid some particularly dangerous capabilities or pursue multilateral arms control. Unfortunately, this is unlikely: forgoing certain technologies is costly because you don't know in advance what you'll find, and effective multilateral arms control is really hard. ↩︎
Phrases to look for include "accidental escalation" or "inadvertent escalation" or "strategic stability," along with "AI" or "machine learning." Michael Horowitz and Paul Scharre have both written a fair bit on this, e.g. here.
Thank you!
Even without new technological development, why couldn't there be a great power war over a classic flashpoint, of the kind that caused past wars? It seems like a war over disputed territories in the seas near China, or disputed territories between India and Pakistan, could plausibly cause a great power war.
It's certainly possible, and I think such analysis is valuable. It's just not my comparative advantage and not so neglected (I think). Also, I think we don't lose much analytically by separating foreseeable causes of great power conflict into two distinct categories:
1. "Normal" causes, like the classic flashpoints behind past wars
2. Causes arising from AI and emerging technology
This post aims to start a conversation on 2 — or get people to direct me to previous work on 2.
Also to explain my focus, I would be surprised by major conflict for normal reasons by 2040 but not surprised by major conflict because the world is going crazy by 2040. But I didn't justify this. I should have mentioned my exclusion of major conflict for normal reasons in my post; thanks for your comment.
Thucydides's Trap, by Graham Allison, features a scenario of escalating conflict between the US and China in the South China Sea that I found very chilling. If I recall correctly, the scenario is just as you described: each side makes moves that are legitimate from its own perspective, protecting dearly held interests and drawing lines in the sand, and the outcome is escalation to war. The underlying theme is the conflict dynamic that arises when a reigning power is challenged by a rising power. You have probably seen the book mentioned; I found it very worth reading.
And you didn't mention cyber warfare, which is what pops into my mind immediately. I haven't looked into this, but I imagine the potential damage is very high, while international norms for peace support and de-escalation lag far behind those for physical conflict.
Thanks for your comment. US-China tension currently seems most likely to me to cause great power conflict, and cyber capabilities were mostly what I had in mind for "offense outpaces defense" scenarios. I think this post is more valuable if it's more general, though, and I don't know enough about US-China, cyber capabilities, or warfare to say much more specifically.
I think understanding possible futures of cyber capabilities would be quite valuable. I would not be surprised to look back in 2030 or 2040 and say:
But again, such work is not my comparative advantage (and, as a disclaimer for the above paragraph, I don't know what I'm talking about).
From the same reference: in 12 out of 16 historical cases in which the most militarily powerful country in the world was displaced, there has been war (though one should not take that ratio literally for the current situation). China will likely become the most powerful country (economically, at least) in the next few decades, unless the US allows a lot more immigration.