
Summary/Introduction

Aschenbrenner’s ‘Situational Awareness’ (Aschenbrenner, 2024) promotes a dangerous narrative of national securitisation. This narrative is not, despite what Aschenbrenner suggests, descriptive; rather, it is performative, constructing a particular notion of security that makes the dangerous world Aschenbrenner describes more likely to come about.

This piece draws on the work of Nathan A. Sears (2023), who argues that the failure to sufficiently eliminate plausible existential threats throughout the 20th century emerges from a ‘national securitisation’ narrative winning out over a ‘humanity macrosecuritization narrative’. National securitisation privileges extraordinary measures to defend the nation, often centred around military force and logics of deterrence/balance of power and defence. Humanity macrosecuritization suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty. Sears uses a number of examples to show that when issues are constructed as issues of national security, macrosecuritization failure tends to occur, and the actions taken often worsen, rather than help, the issue.

This piece argues that Aschenbrenner does exactly this. Firstly, I explain (briefly and very crudely) what securitisation theory is and how it explains the constructed nature of security. Then, I explain Sears (2023)’s main thesis on why Great Powers fail to combat existential threats. This is followed by an explanation of how Aschenbrenner’s construction of security closely resembles the most dangerous narratives examined by Sears (2023), in that it massively favours national security. Given I view his narrative as dangerous, I then discuss why we should care about Aschenbrenner’s project, as people similar to him have been impactful in previous securitisations. Finally, I briefly discuss some further reasons why I think Aschenbrenner’s project is insufficiently justified: especially his failure to adequately consider a pause, and the fact that he is overly pessimistic about international collaboration whilst simultaneously overly optimistic that AGI wouldn’t lead to nuclear war.

There is a lot I could say in response to Aschenbrenner, and I will likely be doing more work on similar topics. I wanted to get this piece out fairly quickly, and it is already very long. This means some of the ideas are a little crudely expressed, without some of the nuances thought out; this is an issue I hope future work will address. This issue is perhaps most egregious in Section 1, where I try to explain and justify securitisation theory very quickly; if you want a more nuanced, in-depth and accurate description of securitisation theory, Buzan, Wæver and de Wilde (1998) is probably the best source.

Section 1 - What is securitisation?

Everything we care about is mortal. Ourselves, our families, our states, our societies and our entire species. Threats can arise to each of these. In response, we allow, and often expect, extraordinary measures to be taken to combat them. This takes different forms for different issues, with both the measures taken and the audience they must be legitimised to varying. With COVID, these measures involved locking us in our homes for months. With Islamic terrorism, they involved mass surveillance and detention without trial. With the threat of communism in Vietnam, they involved going to war. In each of these cases, and countless others, the issue can be considered to have been ‘securitised’; it entered a realm where extraordinary measures can be justified in order to ensure survival against a perceived existential threat.

In each of these examples, however, this was never inherent. Many diseases have failed to be securitised, and life has carried on as normal; indeed, many would reject that the sacrifices we made for COVID were even worth it. White nationalist terrorism in the USA never provoked the same level of surveillance and police response as Islamic terrorism. We might dispute that these threats were ever existential to the referent object; indeed, Vietnam turned communist, and within three decades, America had won the Cold War. Nonetheless, in each of these examples, and more, from the US invasion of Iraq to the Chechen wars to the treatment of illegal migration, issues have been elevated to the standard of ‘existential threat’. This allows them to gain precedence over other issues, and extraordinary measures that are rarely justified are suddenly acceptable, or perhaps even seen as entirely necessary; the normal rules of politics get broken.

These lists of examples have hopefully highlighted how diverse ‘security’ issues can be, and that what counts as a matter of security is constructed, rather than objective. A toy example may further help to illustrate this point. Imagine country A builds a coal power plant near the border of country B that on average kills 10 of country B’s citizens yearly from air pollution. The idea that country B bombing the power plant is an expected response would be considered insane. However, if country A fired a missile into country B and killed 5 citizens, bombing the facility that launched the missile appears to be a live possibility. This highlights that we cannot simply take for granted what counts as a matter of security: in this case, something that kills 5 citizens could be considered more of a security threat than something that kills 10 citizens. Rather, we must explain how issues are constructed as matters of security, and what the impact of that construction may be.

Securitisation theory tries to describe and explain this process. An issue becomes securitised when it is declared an existential threat to a referent object by a securitising actor. The referent object is in many cases the state (as in the routinely securitised military sector), but can be more diverse than that, including, most relevantly here, cases of macrosecuritization. This is when political units at a higher level than the state are the referent object of security. This could be the defence of a particular civilisation (e.g. the defence of the West, of socialism, of Islam), or even, in the case we will discuss, the defence of all humanity. For more information on macrosecuritization, see Buzan and Wæver (2009). The securitising actor is an actor with the authority (in the eyes of the audience) to carry out a successful securitising speech act. The existence of the existential threat would then justify or demand the carrying out of extraordinary measures, beyond the normal rules of politics, that provide a ‘way out’ from this threat. For this move to be successful, however, it needs to be accepted by the relevant audience to whom the justification is addressed; thus, securitisation is intersubjective. If relevant actors come to perceive something as a matter of security through the carrying out of this speech act, that thing is securitised, and extraordinary measures that may have been impossible to legitimate before become legitimate and perhaps even demanded. I don’t wish to deny that the material world plays a role in how easily an issue can be securitised, but ultimately the social construction of the issue as one of security is what is decisive, and this is done by the securitising speech act.

Section 2 - Sears (2023): The macrosecuritization of existential threats to humanity

Sears (2023) attempts to assess previous examples of dealing with threats that may be perceived as ‘existential’ to humanity as a whole. After all, given security focuses on survival, it is odd how neglected existential threats to all of humanity have been in the analysis of securitisation theory, and how neglected securitisation theory has been in XRisk studies. Thus, Sears examines the empirical record to understand how, if at all, plausible existential threats to humanity have been securitised, and how this links to whether effective action has been taken. The examples Sears uses are: the international control of atomic energy, the proliferation of nuclear weapons, biological weapons, the ozone hole, nuclear winter, global warming, the prohibition of nuclear weapons, artificial intelligence, the climate emergency, and biodiversity loss. In each of these cases, Sears looks at attempts to carry out ‘macrosecuritization’, where these issues are constructed as existential threats to the whole of humanity which require extraordinary measures to defend all of humanity. However, as discussed, securitization does not always, or even mostly, succeed, and Sears in particular focuses on why the international community often fails to take these threats as seriously as we might think it ought to.

Sears sees ‘macrosecuritization failure’ occurring where the Great Powers fail to reach consensus that there exists an existential threat to all of humanity that demands extraordinary measures to defend humanity, and that this is a genuine issue of security that takes precedence over other concerns. Running through each empirical example of significant macrosecuritization failure is a failure of the ‘humanity macrosecuritization’ logic to win out over the ‘national securitisation’ logic. The idea is that in each of these cases of existential threats to humanity, two securitisation narratives were at play. The first emphasised the potential for the technology or issue to pose a threat to all of humanity; humanity, not the nation, was the referent object, and thus measures needed to be taken globally to protect humanity as a whole. The second narrative was one of national securitisation, emphasising the survival of the nation, such that extraordinary measures were needed to compete in an international power competition and a fight for supremacy. So, for example, the great powers failed to come to the consensus that control over atomic energy should be internationalised and nuclear weapons decommissioned and not built, instead seeing the perceived existential threat of losing out in a possible (and self-fulfilling) arms race as more important than reducing the perceived existential threat to humanity of nuclear weapons.

Over various issues, and at various times, different aspects of these narratives gained prominence within the great powers (who serve as the chief securitising actors and audiences for macrosecuritization). The empirical record clearly leaves open the possibility of humanity macrosecuritization ‘winning out’, both through specific examples of (semi-)successful macrosecuritization (the ozone hole, nuclear winter, nuclear non-proliferation, biological weapons), and through cases where initially promising humanity macrosecuritization occurred but eventually failed (the initial ideas around international control of atomic energy).

National securitisation and humanity securitisation narratives are normally competitive and in contrast with each other. This is because the two modes of securitisation have different referent objects, and therefore they are always competing over whose interests ought to take priority. Often, these interests diverge. This may be because what have typically been considered the best modes of protection against threats to national security are not the security practices that best defend against existential threats to humanity. Typically, national security involves a concern with the balance of power, military force and defence. These seem very different from the strategies of mutual restraint and sensible risk assessment needed to combat the risk of AGI (although, for example, the risk of nuclear war may help provide motivation for this). Thus, national securitisation shuts down most of the best options available to us (a moratorium, a single international project, possibly even responsible scaling), whilst delivering very little useful in return. It makes a quest for supremacy, rather than a quest for safety, the utmost priority. The ability for open questioning and reflection is massively limited, something that may be essential if we are to have AGI that is beneficial.

Definitionally, macrosecuritization failure is, according to Sears (2023), “the process whereby an actor with a reasonable claim to speak with legitimacy on an issue frames it as an existential threat to humanity and offers hope for survival by taking extraordinary action, but fails to catalyze a response by states that is sufficient to reduce or neutralize the danger and ensure the security of humankind.” Thus, if national securitization narratives winning out leads to macrosecuritization failure, as Sears (2023) seems to show, then by definition it is dangerous for our ability to deal with the threat. Macrosecuritization success, by contrast, provides the basis for states to engage in modes of protection that are appropriate for combating existential threats to humanity, namely logics of mutual restraint.

It is also important to note that there exist options that lie beyond securitisation, although national securitization reduces the possibility of these being effective. Much greater discussion is needed as to whether humanity securitisation shuts these options down or not, although its effect would generally be much weaker than that of national securitization. This is due to the differences in modes of protection, although the discussion of this is beyond the scope of this piece. Much of the existing work in the AI Safety community has focused on work that is ‘depolitical’, ‘normal politics’ or even ‘riskified’, which the logic of securitisation stands in contrast to. The differences between these are not relevant to the argument, but securitised decision-making generally changes the sorts of decisions that can be made. Much of the work the AGI Safety community has done, from technical standards and evals to international reports and various legal obligations, falls into these non-securitised categories.

Most technologies are governed in a depoliticised or politicised way: their governance does not gain precedence over other issues, is not considered essential to survival, is not primarily interacted with by the security establishment, and is considered open for debate by people with contrasting values, as the normal rules of politics and expert decision-making are still ‘in play’. Simple solutions, often centralising power and focused on emergency measures to quickly end the existential threat, stand in contrast to this slower-paced, more prosaic approach based on a plurality of values and end states and balanced interests. For most technologies we can carry out normal cost-benefit trade-offs, rather than singularly focusing on (national) survival. This is why most technologies don't lead to extraordinary measures like an international pause or a ‘Manhattan Project’. Without any securitization, a lot of the important ideas in AGI Safety, like RSPs, could be carried out, whilst something like ‘the Project’ probably couldn't be.

National securitization would threaten these safety measures, as they would be seen as a distraction from the perceived need to protect the state’s national security by ensuring supremacy through accelerating AI. This has often been pointed out in discussions of a ‘race’, but even without one existing ‘in reality’, once AGI supremacy is seen as essential to state survival, a ‘race’ will be on even if there is no real competitor. The possibility of slowing down later seems to run contrary to how securitised issues normally function. Thus, unless there is a perception that the AGI itself is the threat (which lends itself to more humanity macrosecuritization narratives), national securitisation will lead to acceleration and threaten the viability of the most promising strategies to reduce risks from AGI. Betting on national securitisation, therefore, seems like a very dangerous bet. I should note that macrosecuritisation seems to me, if successful, probably safer in the long term than these alternative forms of decision-making. More discussion of securitisation and other logics, and how these intersect with existing actions and theories of victory, may be useful, but here I just wanted to point out how the priority that securitisation demands means it may directly reduce the probability that other actions can be successful.

Section 3 - How does this relate to Aschenbrenner’s ‘Situational Awareness’?

Aschenbrenner pursues an aggressively national securitising narrative. His article mentions ‘national security’ 31 times; it mentions ‘humanity’ 6 times. Even within those 6 mentions, he fails to convincingly construct humanity as the referent object of security. The closest he gets is when he says in the conclusion, “It will be our duty to the free world…and all of humanity”. Even in that phrase, a more closed-off, exclusionary macrosecuritisation (“the free world”) is given priority over “humanity”, which is added on as an afterthought.

Throughout the article, Aschenbrenner makes a much stronger attempt to construct the referent object of security as the United States. For example, he states that “superintelligence is a matter of national security, and the United States must win”, which is as unambiguous a statement of national securitisation as you could construct. Similarly, his so-called “AGI Realism” has three components: “Superintelligence is a matter of national security”, “America must lead”, and “We must not screw it up”. Only the last of these makes any reference to a humanity securitisation narrative; the first two are utterly focused on the national security of the United States.

Aschenbrenner also constructs a threatening ‘Other’ that poses an existential threat to the referent object: China. This is in contrast to the more typical construction by those attempting a humanity securitisation, who posit that superintelligence is itself the threatening ‘Other’. Of the 7 uses of the term ‘existential’ in the text, only 1 unambiguously refers to the existential risk posed to humanity by AGI. 3 refer to the ‘existential race’ with China, clearly indicative of seeing China as the existential threat. This is even clearer when Aschenbrenner states, “The single scenario that most keeps me up at night is if China, or another adversary, is able to steal the automated-AI-researcher-model-weights on the cusp of the intelligence explosion”. This highlights exactly where Aschenbrenner sees the threat coming from, and the prominence he gives it. The existential threat is not constructed as the intelligence explosion itself; it is simply “China, or another adversary”.

It is true that Aschenbrenner doesn’t always see himself as protecting America alone, but the free world as a whole, and probably, on his own views, this means he is protecting the whole world. He isn’t, seemingly, motivated by pure nationalism, but rather by a belief that American values must ‘win’ the future. His framing may therefore be seen as a form of “inclusive universalism”: ideological belief systems that seek to improve the world for everyone, like liberalism, communism, Christianity or Islam. However, inclusive universalism rarely concerns itself with the survival of humanity, and fails to genuinely ‘macrosecuritise humanity’, in practice looking very similar to national securitisation. Indeed, some of the key examples of this, such as the Cold War, highlight how easily it can overlap with, and look identical to, national securitisation. So, whilst Aschenbrenner may not be chiefly concerned with the nation, his ideology will cash out in practice as if it were, and indeed it is those for whom the nation is the chief concern that he hopes to influence.

Section 4 - Why Aschenbrenner's narrative is dangerous and the role of expert communities

It is clear, then, that Aschenbrenner pursues the exact same narratives that Sears argues lead us to macrosecuritization failure, and therefore to a failure to adequately deal with existential threats. But my claim goes further: not only is Aschenbrenner wrong to support a national securitisation narrative, but ‘Situational Awareness’ is a dangerous piece of writing. Is this viewpoint justifiable? After all, Aschenbrenner is essentially a 23-year-old with an investment firm. However, I think such confidence that he doesn’t matter would be misplaced, and I think that his intellectual/political project could be, at least to an extent, impactful. Aschenbrenner chose the dangerous path with little track record of positive outcomes (national securitisation) over the harder, and by no means guaranteed, but safer pathway with at least some track record of success (humanity macrosecuritisation); the impacts of this could be profound if it gains more momentum.

Firstly, Aschenbrenner’s project is to influence the form of securitisation, so either you think this is important (and therefore dangerous), or Aschenbrenner’s work is irrelevant. I do think, given the historical securitisation of AI as a technology for national supremacy (Sears, 2023), promoting the national securitisation of superintelligence may be easier than promoting humanity securitisation. So it may be, in the words of one (anonymous) scholar who I spoke to about this, that “He had a choice between making a large impact and making a positive one. He chose to make a large impact.”

Secondly, it is clear that epistemic expert communities, which the AI Safety community could clearly be considered to be, have played a significant role in contesting securitisations in the past. In the cases of climate change, nuclear winter and the ozone hole, for example, macrosecuritisation by expert communities has been to various degrees successful. Moreover, these communities were significant actors in the attempted macrosecuritisation of atomic energy in the 1940s, although they were utterly outmanoeuvred by those who favoured national securitisation. It is notable that Aschenbrenner compares his peer group to “Szilard and Oppenheimer and Teller”, all of whom had significant influence over the existence of the bomb in the first place. Szilard and Oppenheimer ultimately failed in their later attempts to ensure safety, and were racked by guilt. Teller is an even more interesting example for Aschenbrenner to look up to; he fervently favoured national securitisation, and wanted to develop nuclear bombs considered so destructive that Congress blocked his proposals. Moreover, during the debates around ‘Star Wars’, he played a clear role in getting untrue narratives about Soviet capabilities accepted in the US national security and political apparatus - a failure of ‘situational awareness’ driven by his own hawkishness that drove escalation and (likely) decreased existential safety (Oreskes and Conway, 2011). Perhaps Teller is an analogy that may be instructive to Aschenbrenner. It is clear that Aschenbrenner sees the potential influence of expert communities on securitisation, but he strangely decides he wishes to be in the company of men whose actions around securitisation arguably ushered in the ‘time of perils’ we find ourselves in.

We have more reason to think the AI Safety community could be very impactful with regards to securitisation. Those affiliated with the community already hold positions of influence, from Jason Matheny as the CEO of RAND, to Paul Christiano at the US AISI, to members of the UK civil service and those writing for prominent media outlets. These are the potential vehicles of securitisation, and therefore it does seem genuinely plausible that humanity securitisation (or indeed, national securitisation) narratives could be successfully propagated by the AI Safety community to (at least some) major or great powers. Moreover, the AGI Safety community seems uniquely well financed and focused compared to the expert communities involved in many of these previous cases. The nuclear experts were predominantly physicists who only engaged in humanity securitisation activism after the end of the Manhattan Project, once they had already lost much of their influence and access; others did this activism alongside their physics work, which limited the time they had, compared to a vastly better resourced opposition. Climate scientists, and bodies such as the IPCC, have also been important securitising actors, although their success has also been mixed. Climate scientists were, especially in the early days, vastly outspent by the ‘merchants of doubt’ (Oreskes and Conway, 2011) who acted to try and desecuritise climate change. Whilst this risk could very much exist in the case of AGI, and early indications suggest many companies will try, I am sceptical that macrosecuritisation will be as difficult as in the climate case. Firstly, there are many features of climate change that make securitisation especially hard (Corry, 2012) - the long time horizons, the lack of a simple ‘way out’, the lack of immediate measures that can be taken, the need for a long-term, structural transition - that don’t seem to apply to AGI governance (especially if a moratorium strategy is taken). Furthermore, given the narratives that many leading AGI companies have already adopted (i.e. very publicly acknowledging the possibility of an existential risk to humanity from their products) (Statement on AI risk, 2023), it is harder for them to desecuritise AI than it was for the oil industry to desecuritise climate change (although oil companies’ own researchers did identify climate change, their public acknowledgement of it was far weaker than the statements put out by the AGI companies). Finally, the extraordinary measures needed to limit AGI, such as a moratorium on development, would impact a much smaller number of people than any extraordinary measures needed in the case of climate change.

So it seems that Aschenbrenner, whilst maybe having a greater chance of success by supporting ‘national securitisation’, has taken an action that could plausibly have very dangerous consequences. He also turned down the opportunity to have a (probably smaller) positive impact by embracing humanity securitisation. However, it is important to note that most of Aschenbrenner’s impact will depend on how his ideas are legitimised, supported or opposed by the AI Safety community. The role of such communities as a whole has tended to be more significant than the role of particular individuals (Sears, 2023), and outside the AI Safety community it is not clear how seriously Aschenbrenner is taken; for example, an Economist article about ‘Situational Awareness’ failed to even mention his name. Thus, the type of macrosecuritisation (or whether there is any at all) is far from out of our hands yet, but it is an issue we must take seriously, and one which I hope future work will explore.

The shutting of the issue - the ‘Project’, as Aschenbrenner calls it - entirely behind the closed doors of the national security establishment, as national securitisation would do, makes it much harder to achieve a good future. In other examples, such as the case of the control of atomic energy, the initial push of the issue into the national security establishment meant that the scientists who wanted safety got sidelined (memorably seen in the film Oppenheimer). If we nationally securitise AGI, we risk losing the ability for many protective measures to be taken, and risk losing the influence of safety considerations on AI. If discussions around AGI become about national survival, the chances we all lose massively increase. The public, in many ways, seems to take the risks from AGI more seriously than governments have, and so taking strategy ‘behind closed doors’ seems dangerous. We too quickly close down the options available to us, increase the power of those who pose the largest danger to us at present (i.e. the AGI companies and developers), and reduce the ability to hold them to account. This doesn’t mean that some measures (e.g. restricting the proliferation of model weights, better cyber-security) aren’t useful, but these could easily be carried out as part of ‘normal politics’, or even ‘depoliticised decision-making’, rather than as part of ‘nationally securitised’ decision-making.

One may claim that, with enough of a lead, the USA would have time to take these other options (as Aschenbrenner does). However, taking these other options may be very hard if AI is considered a matter of national security, where even with a lead, logics of survival and supremacy will dominate. As seen with the ‘missile gap’ during the Cold War, or the continuation of the Manhattan Project after the failure of the Nazi bomb project, it is very easy for the national security establishment to perceive itself as being in a race when it in fact isn’t (Belfield and Ruhl, 2022). So in order for the advantages of a healthy lead to be reaped, de-(national)securitization would then need to happen; but for the healthy lead to come about through the ‘Project’, significant national securitisation is needed in the first instance. Moreover, if AI supremacy, or at least parity, is considered essential for survival, a truly multilateral international project (like the MAGIC proposal) seems infeasible. States, having established AGI as a race, would lack the trust to collaborate with each other. The failure of the Baruch Plan, partially for these exact reasons, provides good evidence that national securitisation cannot be the basis for existential safety through collaboration, which eliminates many theories of victory as feasible. Humanity securitisation leaves all of these options open.

Section 5 - The possibility of a moratorium, military conflict and collaboration

I cannot discuss every point, but there are a number of aspects core to Aschenbrenner’s thesis that national securitisation is the way forward which are worth rebutting.

Firstly, he seems not even to consider the option of pausing or slowing AI development. In ‘Situational Awareness’ this is dismissed with a simple “they are clearly not the way”. He also uses his national securitisation as an argument against pausing (“this is why we cannot simply pause”), but then uses his so-called AGI realism, which is itself generated from the premise that pausing is “not the way”, to support his national securitisation. Those who wish to argue comprehensively for a strategy as dangerous as Aschenbrenner’s (i.e. proceeding to build, quickly, a technology whose development we can’t pause) must at least provide substantive justification for why a pause, a moratorium or a single international project isn’t possible. Aschenbrenner entirely fails to do this.

In fact, by his own model of how the future plays out, pausing may be easier than some assume. Given the very high costs involved in making AGI, it seems likely that only a very small number of actors can carry it out, and that heavy national securitisation of AGI is required - this is the point of ‘Situational Awareness’. If we avoid such extreme national securitisation, a moratorium may be much easier, and this wouldn’t even require strong ‘humanity macrosecuritisation’ of the issue. If he is one or two orders of magnitude out on the costs of AGI, it only becomes possible with an extremely large government-funded project; getting there would be so costly that the only way to do it is to successfully securitise AGI such that the Project takes priority over other political and economic considerations. Therefore, one may think that without the strong securitisation that Aschenbrenner proposes, AGI timelines are simply much longer; he essentially wants to burn up much of the existing timeline by securitising the issue. Moreover, without successful national securitisation, the huge costs of AGI may make pausing seem a much lower cost than many have imagined, and therefore make pausing much more plausible; all states have to do is forego a very large cost, and a danger, that they may not have wanted to invest in in the first place.

Secondly, Aschenbrenner seems to under-appreciate the potential risks of military conflict arising from the national securitisation of AGI, and how this impacts the possibility of collaboration. Aschenbrenner argues that superintelligence would give a ‘decisive strategic advantage’. More importantly, he seems to suggest that this is ‘decisive even against nuclear deterrents’. If multiple nuclear powers appreciate the gravity of the situation, which Aschenbrenner suggests is exactly what will happen - certainly he thinks China will - then the potential for military, and even nuclear, conflict in the months or years leading up to the intelligence explosion massively increases. The lack of great power conflict post-WW2 has been maintained, at least in part, by nuclear deterrence; if one’s adversary were seen as able to break deterrence, a preemptive strike, using either conventional or nuclear capabilities, may be seen as justified or necessary in order to prevent this. For such a military conflict to be ruled out, one would suspect that the USA would have to be able to break deterrence before any of its adversaries knew the USA was anywhere near doing so. Given the very large costs and infrastructure usage involved in ‘the Project’, this seems unlikely unless China had no ‘situational awareness’. However, if China has no ‘situational awareness’, many of Aschenbrenner’s other arguments about the necessity of racing are not viable. According to many forms of realism, on which Aschenbrenner’s arguments seem to be (crudely) based, the chances of a first strike to prevent deterrence being broken massively increase. A war also seems to be the default outcome in the game ‘Intelligence Rising’, largely due to similar dynamics.

This also suggests to me that Aschenbrenner underestimates the possibility of collaboration. States have an interest in not breaking deterrence, due to its potential consequences, and once the danger becomes clear, collaboration seems more plausible. States may come to see the development of AGI as a race that can never be won because of other states’ responses, and as solely a tool for destabilisation, and thus not something over which it is possible to gain an advantage. The pressures driving development would, therefore, be reduced, and this may be possible even if appreciation of the dangers of rogue superintelligence were not widespread. The fact that this technology that breaks the military balance of power does not yet exist may also make negotiations easier - for example, one of the key reasons the Baruch Plan failed was that the US already had the bomb, and the Soviets were not willing to give up building the bomb until the US gave up its bomb and the balance of power was restored. Given superintelligence does not yet exist, and neither side can be sure it would win a race, it may be in both sides’ best interest to forego development to maintain a balance of power, and thus peace. This would also suggest that, as long as both sides’ surveillance of the other were good enough, they may be able to reasonably insure against a secret ‘Project’, allowing for a more durable agreement ensured by an implicit threat of force. Notably, these ideas seem to roughly follow from my interpretation of Aschenbrenner’s (underexplained) model of international politics.

Finally, understanding security constellations may show how durable shifts away from competitive dynamics and towards an enduring moratorium may be possible. In studies of regional securitisations under higher-level securitisations, such as during the Cold War, it became clear that the most powerful macrosecuritisations can impose a hierarchy on the lower-level securitisations that compose them. These rivalries were often limited by the imposition of the macrosecuritisation over the top of them. If the threat from AGI becomes clear to governments - if they gain ‘situational awareness’ - then a macrosecuritisation that structures national rivalries under it seems at least possible, allowing for a moratorium and collaboration. However, this requires the macrosecuritisation to be of a shared threat, and strong enough to overcome the lower-level rivalries (as with the original proposals for international nuclear control), rather than a shared construction of other states as an existential threat (as during the Cold War). Whilst the rebuilding of the international order to protect against the threat of nuclear weapons never did occur, it certainly wasn’t impossible - yet Aschenbrenner, despite accepting that states will see AGI as such a big deal that they will divert significant percentages of their GDP to it, never considers that this is possible for AGI.

One objection here may be that the time we have is simply too short for this. Under Aschenbrenner’s timelines, I have some sympathy for this objection. However, it should be noted that the formulation and negotiation of the Baruch Plan took only a year. Moreover, an initial, more temporary pause or slow-down would itself buy time for exactly this to happen. Burning up any safety cushion we have through national securitisation reduces the chances of this coming about.

Conclusion

My analysis has been far from comprehensive; it is not a full defence of the plausibility of humanity macrosecuritization, nor a full defence of slowing AI development.

Nonetheless, I have argued a number of points. Aschenbrenner pursues an aggressively national securitising narrative, undermining humanity macrosecuritization. This fulfils the very criteria that Sears (2023) finds are most conducive to macrosecuritization failure, and to a failure to combat existential threats effectively. Moreover, Aschenbrenner’s narrative, if it gains acceptance, makes existing efforts to combat XRisk much less likely to succeed as well. Thus, Aschenbrenner shuts down the options available to us to combat AGI XRisk, whilst offering a narrative that is likely to make the problem worse.

Aschenbrenner fails to consider the fact that the narrative of national securitisation is far from inevitable, but is rather shaped by the great powers, and actors - including expert communities - who communicate to them, their publics and political and security establishments. Without national securitisation ‘the Project’ seems unlikely to happen, so Aschenbrenner seems to be actively agitating for the Project to actually happen. This means that Aschenbrenner, far from taking on a purely descriptive project, is helping dangerous scenarios come about. Indeed, there seems to be some (implicit) awareness of this in the piece - the reference class Aschenbrenner uses for himself and his peers is “Szilard and Oppenheimer and Teller”, men who are, at least to a degree, responsible for the ‘time of perils’ we are in today.

Aschenbrenner hugely fails to consider alternatives, and the consequences of a nationally securitised race. He fails to adequately consider the possibility of a moratorium, why it wouldn’t work, or how it could ensure long-term safety. He fails to consider how the risk of superintelligence breaking nuclear deterrence could increase the chances of military conflict if both sides nationally securitise their ‘Projects’. He also fails to see how the possibility of this happening might increase the chances of collaboration, if states don’t see the development of AGI as inevitable.

As a community, we need to stay focused on ensuring existential safety for all of humanity. Extreme hawkishness on national security has a very poor track record of increasing existential safety.

Aschenbrenner, L. (2024) Situational Awareness: The Decade Ahead. Available at: https://situational-awareness.ai/ (Accessed: 12 July 2024).

Belfield, H. and Ruhl, C. (2022) Why policy makers should beware claims of new ‘arms races’, Bulletin of the Atomic Scientists. Available at: https://thebulletin.org/2022/07/why-policy-makers-should-beware-claims-of-new-arms-races/ (Accessed: 12 July 2024).

Buzan, B. and Wæver, O. (2009) ‘Macrosecuritisation and security constellations: reconsidering scale in securitisation theory’, Kokusaigaku revyu = Obirin review of international studies, 35(2), pp. 253–276.

Buzan, B., Wæver, O. and de Wilde, J. (1998) Security: A New Framework for Analysis. Lynne Rienner Publishers.

Corry, O. (2012) ‘Securitisation and “riskification”: Second-order security and the politics of climate change’, Millennium: Journal of International Studies, 40(2), pp. 235–258.

Oreskes, N. and Conway, E.M. (2011) Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Press.

Sears, N.A. (2023) Great Power Rivalry and Macrosecuritization Failure: Why States Fail to ‘Securitize’ Existential Threats to Humanity. Edited by S. Bernstein. PhD. University of Toronto.

Statement on AI risk (2023) Center for AI Safety. Available at: https://www.safe.ai/work/statement-on-ai-risk (Accessed: 12 July 2024).


Comments

I agree with many of Leopold's empirical claims, timelines, and analysis. I'm acting on it myself in my planning as something like a mainline scenario.

Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:

* a small circle of the smartest people believe this
* i will give you a view into this small elite group who are the only who are situationally aware
* the inner circle longed tsmc way before you
* if you believe me; you can get 100x richer -- there's still alpha, you can still be early
* This geopolitical outcome is "inevitable" (sic!)
* in the future the coolest and most elite group will work on The Project. "see you in the desert" (sic)
* Etc.

Combined with a lot of retweets, with praise, on launch day that were clearly coordinated behind the scenes, it gives me the feeling of being deliberately written to meme a narrative into existence via self-fulfilling prophecy, rather than inferring a forecast via analysis.

As a sidenote, this felt to me like an indication of how different the AI safety adjacent community is now to when I joined it about a decade ago. In the early days of this space, I expect a piece like this would have been something like "epistemically cancelled": fairly strongly decried as violating important norms around reasoning and cooperation. I actually expect that had someone written this publicly in 2016, they would've plausibly been uninvited as a speaker to any EAGs in 2017.

I don't particularly want to debate whether these epistemic boundaries were correct --- I'd just like to claim that, empirically, I think they de facto would have been enforced. Though, if others who have been around have a different impression of how this would've played out, I'd be curious to hear.

I agree with you about the bad argumentation tactics of Situational Awareness, but not about the object level. That is, I think Leopold's arguments are both bad, and false. I'd be interested in talking more about why they're false, and I'm also curious about why you think they're true.

I think some were false. For example, I don't get the stuff about mini-drones undermining nuclear deterrence, as size will constrain your batteries enough that you won't be able to do much of anything useful. Maybe I'm missing something (modulo nanotech). 

I think it's very plausible scaling holds up, it's plausible AGI becomes a natsec matter, it's plausible it will affect nuclear deterrence (via other means), for example.

What do you disagree with?

Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
* a small circle of the smartest people believe this
* i will give you a view into this small elite group who are the only who are situationally aware
* the inner circle longed tsmc way before you
* if you believe me; you can get 100x richer -- there's still alpha, you can still be early
* This geopolitical outcome is "inevitable" (sic!)
* in the future the coolest and most elite group will work on The Project. "see you in the desert" (sic)
* Etc.

These are not just vibes - they are all empirical claims (except the last maybe). If you think they are wrong, you should say so and explain why. It's not epistemically poor to say these things if they're actually true.

Thanks for writing this, it's clearly valuable to advance a dialogue on these incredibly important issues. 

I feel an important shortcoming of this critique is that it frames the choice between national securitization vs. macrosecuritization in terms of a choice between narratives, without considering incentives. I think Leopold gives more consideration to alternatives than you give him credit for, but argues that macrosecuritization is too unstable of an equilibrium:

Some hope for some sort of international treaty on safety. This seems fanciful to me. The world where both the CCP and USG are AGI-pilled enough to take safety risk seriously is also the world in which both realize that international economic and military predominance is at stake, that being months behind on AGI could mean being permanently left behind. If the race is tight, any arms control equilibrium, at least in the early phase around superintelligence, seems extremely unstable. In short, “breakout” is too easy: the incentive (and the fear that others will act on this incentive) to race ahead with an intelligence explosion, to reach superintelligence and the decisive advantage, too great.

I also think you underplay the extent to which Leopold's focus on national security is instrumental to his goal of safeguarding humanity's future. You write: "It is true that Aschenbrenner doesn’t always see himself as purely protecting America, but the free world as a whole, and probably by his own views, this means he is protecting the whole world. He isn’t, seemingly, motivated by pure nationalism, but rather a belief that American values must ‘win’ the future." (emphasis mine.)

First, I think you're too quick to dismiss Leopold's views as you state them. But what's more, Leopold specifically disavows the specific framing you attribute to him:

To be clear, I don’t just worry about dictators getting superintelligence because “our values are better.” I believe in freedom and democracy, strongly, because I don’t know what the right values are [...] I hope, dearly, that we can instead rely on the wisdom of the Framers—letting radically different values flourish, and preserving the raucous plurality that has defined the American experiment.

Both of these claims -- that international cooperation or a pause is an unstable equilibrium, and that the West maintaining an AI lead is more likely to lead to a future with free expression and political experimentation -- are empirical. Maybe you'd disagree with them, but then I think you need to argue that this model is wrong, not that he's just chosen the wrong narrative.

Thanks for this reply Stephen, and sorry for my late reply, I was away.

I think it's true that Aschenbrenner gives (marginally) more consideration than I gave him credit for - I'm not actually sure how I missed that paragraph, to be honest! Even then, whilst there is some merit to that argument, I think he needs to much better justify his dismissal of an international treaty (along similar lines to your shortform piece). As I argue in the essay, I think that such lack of stability requires a particular reading of how states act - for example, I argue that if we buy a form of defensive realism, states may in fact be more inclined to reach a stable equilibrium. Moreover, as I argue, I think Aschenbrenner fails to acknowledge how his ideas on this may well become a self-fulfilling prophecy.

I actually think I just disagree with your characterisation of my second point, although it could well be a flaw in my communication, and if so I apologise. My argument isn't even that the values of freedom and democracy, or even a narrower form of 'American values', wouldn't be better for the future (see below for more discussion on that); it's that national securitisation has a bad track record at promoting collaboration and dealing with extreme risk, and we have good reason to think it may be bad in the case of AI. So even if Aschenbrenner doesn't frame it as national securitisation for the sake of nationalism, but rather national securitisation for the sake of all humanity, the impacts will be the same. The point of that paragraph was simply to preempt a critique that is exactly what you say. I also think it's clear that Aschenbrenner in his piece is happy to conflate those values with 'American nationalism/dominance' (eg 'America must win'), so I'm not sure him making this distinction actually matters.

I also am probably much less bullish on American dominance than Aschenbrenner is. I'm not sure the American national security establishment actually has a good track record of preserving a 'raucous plurality', and if (as Aschenbrenner wants) we expect superintelligence to be developed through that institution, I'm not overly confident in how good it will be. Whilst I am no friend of dictatorships, I'm also unconvinced that if one cares about raucous pluralism, US dominance, certainly to the extent that Aschenbrenner envisions, is necessarily a good thing. Moreover, even in American democracy, the vast majority of moral patients aren't represented at all. I'm essentially unconvinced that the benefits of America 'winning' a nationally securitised AI race anywhere near outweigh the geopolitical risk, misalignment risk, and most importantly the risk of not taking our time to construct a mutually beneficial future for all sentient beings. I think I have put this paragraph quite crudely, and would be happy to elaborate further, although it isn't actually central to my argument.

I think it's wrong to say that my argument doesn't work without significant argument against those two premises. Firstly, my argument was that Aschenbrenner was 'dangerous', which required highlighting why the narrative choice was problematic. Secondly, yes, there is more to do on those points, but given Aschenbrenner's failure to give in-depth argumentation on those points, I thought that they would be better dealt with as their own pieces (which I may or may not write). In my view, the most important aspect of the piece was Aschenbrenner's claim that national securitisation is necessary to secure the safest outcomes, and I do feel the piece was broadly successful at arguing that this is a dangerous narrative to propagate. I do think if you hold Aschenbrenner's assumptions strongly, namely that cooperation is very difficult, alignment is easy-ish and the most important thing is an American AI lead as this leads to a maximally good future by maximising free expression and political experimentation, then my argument is not convincing. I do, however, think this model is based on some rather controversial assumptions, and given the dangers involved, it is woefully insufficiently justified by Aschenbrenner in his essay.

One final point is that it is still entirely non-obvious, as I mention in the essay, that national securitisation is the best frame even if a pause is impossible, or, on the weaker claim, if it is merely an unstable equilibrium.

Thanks for this, really helpful! For what it's worth, I also think Leopold is far too dismissive of international cooperation.

You've written there that "my argument was that Aschenbrenner was 'dangerous'". I definitely agree that securitisation (and technology competition) often raises risks.[1] I think we have to argue further, though, that securitisation is more dangerous on net than the alternative: a pursuit of international cooperation that may, or may not, be unstable. That, too, may raise some risks, e.g. proliferation and stable authoritarianism.

  1. ^

    Anyone interested can read far more than they probably want to here.

I do think we have to argue that national securitisation is more dangerous than humanity securitisation, or than non-securitised alternatives. I think it's important to note that whilst I explicitly discuss humanity macrosecuritisation, there are other alternatives as well that Aschenbrenner's national securitisation compromises, as I briefly argue in the piece.

Of course, I have not provided, and was not intending to provide, an entire and complete argument for this (it is only 6,000 words), although I think I go further towards proving it than you give me credit for here. As I summarise in the piece, the Sears (2023) thesis provides a convincing argument from empirical examples that national securitisation (and a failure of humanity macrosecuritisation) is the most common factor in the failure of Great Powers to adequately combat existential threats (eg the failure of the Baruch Plan/international control of nuclear energy and the promotion of technology competition around AI, versus the arms agreements prompted by the threat of nuclear winter, the BWC and the Montreal Protocol). Given this limited but still significant data that I draw on, I do think it is unfair to suggest that I haven't provided an argument that national securitisation is more dangerous on net. Moreover, as I address in the piece, Aschenbrenner fails to provide any convincing track record of success for national securitisation, whilst the historical analogies he uses (Szilard, Oppenheimer and Teller) all indicate he is pursuing a course of action that probably isn't safe. Whilst of course I didn't go through every argument, I think Section 1 provides arguments that national securitisation isn't inevitable, and Section 2 provides the argument that, at least from historical case studies, humanity macrosecuritisation is safer than national securitisation. The other sections show why I think Aschenbrenner's argument is dangerous rather than just wrong, and how he ignores important other factors.

The core of Aschenbrenner's argument is that national securitisation is desirable and thus we ought to promote and embrace it ('see you in the desert'). Yet he fails to engage with the generally poor track record of national securitisation at promoting existential safety, or to provide a legitimate counter-argument. He also, as we both acknowledge, fails to adequately deal with the possibilities for international collaboration. His argument for why we need national securitisation seems to be premised on three main ideas: it is inevitable (/there are no alternatives); the values of the USA 'winning' the future are our most important concern (whilst alignment is important, I do think it is secondary to this for Aschenbrenner); and the US natsec establishment is the way to ensure that we get a maximally good future. I think Aschenbrenner is wrong on the first point (and certainly fails to adequately justify it). On the second point, he overestimates the importance of the US winning compared to the difficulty of alignment, and certainly, I think his argument for this fails to deal with many of the thorny questions here (what about non-humans? how does this freedom remain in a world of AGI? etc). On the third point, I think he goes some way to justifying why the US natsec establishment would be more likely to 'win' a race, but fails to show why such a race would be safe (particularly given its track record). He also fails to argue that natsec would allow for the values we care about to be preserved (US natsec doesn't have the best track record with reference to freedom, human rights etc).

On the point about the instability of international agreements: I do think this is the strongest argument against my model of humanity macrosecuritisation leading to a regime that stops the development of AGI. However, as I allude to in the essay, this isn't the only alternative to national securitisation. Since publishing the piece, this is the biggest mistake in reasoning (and I'm happy to call it that) that I see people making. The chain of logic that goes 'humanity macrosecuritisation leading to an agreement would be unstable, therefore promoting national securitisation is the best course of action' is flawed; one needs to show that the plethora of other alternatives (depolitical/political/riskified decisionmaking, or humanity macrosecuritisation without an agreement) are not viable - Aschenbrenner doesn't address this at all. I also, as I think you do, see Aschenbrenner's argument against an agreement as containing very little substance - I don't mean to say it's obviously wrong, but he hardly even argues for it.

I do think stronger arguments for the need to nationally securitise AI could be provided, and I also think they are probably wrong. Similarly, I think stronger arguments than mine can be provided for why we need to humanity macrosecuritise superintelligence and for how international collaboration on controlling AI development could work (I am working on something like this), arguments that can address some of the concerns one may have. But the point of this piece is to engage with the narratives and arguments in Aschenbrenner's piece. I think he fails to justify national securitisation whilst also taking action that endangers us (and I'm hearing from people connected to US politics that the impact of this piece may actually be worse than I feared).

On the stable totalitarianism point, it's also worth noting that it is not at all obvious that the risk of stable totalitarianism is greater under some form of global collaboration than under a nationally securitised race.

National securitisation privileges extraordinary measures to defend the nation, often centred around military force and logics of deterrence/balance of power and defence. Humanity macrosecuritization suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty.

I found this distinction really helpful. 

It reminds me of Holden Karnofsky's piece on How to make the best of the most important century (2021), in which he presents two contrasting frames: 

  • The "Caution" frame. In this frame, many of the worst outcomes come from developing something like PASTA in a way that is too fast, rushed, or reckless. We may need to achieve (possibly global) coordination in order to mitigate pressures to race, and take appropriate care. (Caution)
  • The "Competition" frame. This frame focuses not on how and when PASTA is developed, but who (which governments, which companies, etc.) is first in line to benefit from the resulting productivity explosion. (Competition)
  • People who take the "caution" frame and people who take the "competition" frame often favor very different, even contradictory actions. Actions that look important to people in one frame often look actively harmful to people in the other.
    • I worry that the "competition" frame will be overrated by default, and discuss why below. (More)

Excellent work.

To summarize one central argument in briefest form:

Aschenbrenner's conclusion in Situational Awareness is wrong in that it overstates its claim.

He claims that treating AGI as a national security issue is the obvious and inevitable conclusion for those who understand the enormous potential of AGI development in the next few years. But Aschenbrenner doesn't adequately consider the possibility of treating AGI primarily as a threat to humanity rather than a threat to the nation or to a political ideal (the free world). If we considered it primarily a threat to humanity, we might be able to cooperate with China and other actors to safeguard humanity.

I think this argument is straightforwardly true. Aschenbrenner does not adequately consider alternative strategies, and thus his claim that this conclusion is the inevitable consensus is false.

But the opposite isn't an inevitable conclusion, either.

I currently think Aschenbrenner is more likely correct about the best course of action. But I am highly uncertain. I have thought hard about this issue for many hours both before and after Aschenbrenner's piece sparked some public discussion. But my analysis, and the public debate thus far, are very far from conclusive on this complex issue.

This question deserves much more thought. It has a strong claim to being the second most pressing issue in the world at this moment, just behind technical AGI alignment.

This piece draws on the work of Nathan A. Sears (2023), who argues that the failure to sufficiently eliminate plausible existential threats throughout the 20th century emerges from a ‘national securitisation’ narrative winning out over a ‘humanity macrosecuritization narrative’.

Love to see Nathan's work continue to provide value. Such a tragic loss.

(For folks unaware, Nathan died not long after completing this thesis. A list of his other relevant scholarly contributions can be found here.)

Thank you for making the effort to write this post. 

Reading Situational Awareness, I updated pretty hardcore towards national security as probably the most successful future path, and now find myself a little chastened by your piece, haha [and I just went around looking at other responses too, but yours was first and I think it's the most lit/evidence-based]. I think I bought into the "Other" argument for China and authoritarianism, and the ideal scenario of being ahead in a short-timeline world so that you don't even have to concern yourself with difficult coordination, or even war, if it happens fast enough.

I appreciated learning about macrosecuritization and Sears' thesis, if I'm a good scholar I should also look into Sears' historical case studies of national securitization being inferior to macrosecuritization. 

Other notes for me from your article included: Leopold's pretty bad handwaviness around pausing as simply "not the way", his unwillingness to engage with alternative paths, the danger (and his benefit) of his narrative dominating, and national security actually being more at risk in the scenario where someone is threatening to escape mutually assured destruction. I appreciated the note that safety researchers were pushed out of/disincentivized in the Manhattan Project early on and disempowered further later, and that a national security program would probably perpetuate itself even with a lead.


FWIW, I think Leopold also comes to the table with a different background and set of assumptions, and I'm confused about this, but charitably: I think he does genuinely believe China is the bigger threat compared to the intelligence explosion; I don't think he intentionally frames the Other as China to diminish macrosecuritization in the face of AI risk. See the next note for more, but yes, again, I agree his piece doesn't have good epistemics when it comes to exploring alternatives, like a pause, and he seems to be doing his darnedest narratively to say the path he describes is The Way (even capitalizing words like this), but...

One additional aspect of Leopold's beliefs that I don't believe is addressed in the current version of this piece is that he makes a pretty explicit claim that alignment is solvable, and furthermore that it could be solved in a matter of months. From p. 101 of Situational Awareness:

Moreover, even if the US squeaks out ahead in the end, the difference between a 1-2 year and 1-2 month lead will really matter for navigating the perils of superintelligence. A 1-2 year lead means at least a reasonable margin to get safety right, and to navigate the extremely volatile period around the intelligence explosion and post-superintelligence. [Footnote 77: E.g., space to take an extra 6 months during the intelligence explosion for alignment research to make sure superintelligence doesn’t go awry, time to stabilize the situation after the invention of some novel WMDs by directing these systems to focus on defensive applications, or simply time for human decision-makers to make the right decisions given an extraordinarily rapid pace of technological change with the advent of superintelligence.]

I think this is genuinely a crux he has with the 'doomers', and to a lesser extent the AI safety community in general. He seems highly confident that AI risk is solvable (and will benefit from gov coordination), contingent on there being enough of a lead (which requires us to go faster to produce that lead) and good security (again, increase the lead).

Finally, I'm sympathetic to Leopold's case for the government, rather than corporations, being in charge here (and I think the current rate of AI scaling makes this likely at some point: models may hit proto-natsec-level capability before x-risk-level capability, and maybe this plays out on the model-generation release schedule), and his emphasis on security itself seems pretty robustly good (I can thank him for introducing me to the idea of North Korea walking away with AGI weights). Also, the writing itself is pretty excellent.

Executive summary: Aschenbrenner's 'Situational Awareness' promotes a dangerous national securitization narrative around AI that is likely to undermine safety efforts and increase existential risks to humanity.

Key points:

  1. National securitization narratives historically lead to failure in addressing existential threats to humanity, while "humanity macrosecuritization" approaches are more successful.
  2. Aschenbrenner aggressively frames AI as a US national security issue rather than a threat to all of humanity, which is likely to increase risks.
  3. Expert communities like AI safety researchers can significantly influence securitization narratives and should oppose dangerous national securitization framing.
  4. Aschenbrenner fails to adequately consider alternatives like an AI development moratorium or international collaboration.
  5. National securitization of AI development increases risks of military conflict, including potential nuclear war.
  6. A "humanity macrosecuritization" approach focused on existential safety for all of humanity is needed instead of hawkish national security framing.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

BTW, this link (Buzan, Wæver and de Wilde, 1998) goes to a PaperPile citation that's not publicly accessible. 

Nice post, this is a useful critique I think!

What are the main things you agree with Leopold on? Maybe:

  • 'The Project' is a significant possibility.
  • Improving cybersecurity is very important.
  • Neutral pro-humanity observers should prefer the US/the 'Free World' to have relatively greater power than China/other authoritarian countries, all else equal.

I think these are important points that I agree with Leopold on. But I agree with you (and your piece moved me a bit in this direction) that national securitization is risky.

On these three points:

  • Yes, the Project is a significant possibility. People like Aschenbrenner make this more likely to happen, and we should be trying to oppose it as much as possible. Certainly, there is a major 'missing mood' in Aschenbrenner's piece (and the interview), where he seems to greet the possibility of the Project with glee.
  • I'm actually pretty unsure whether improving cybersecurity is very important. The benefits are well known. However, if you don't improve cybersecurity (or can't), then advancing AI becomes much more dangerous with much less upside, so racing becomes harder. With worse cybersecurity, a pause may be more likely. Basically, I'm unsure, and I don't think it's as simple as most people think. It's also not obvious to me that, for example, America directly sharing model weights with China wouldn't be a positive thing.
  • Certainly, according to my ethics I am not 'neutral pro-humanity', but rather care about a flourishing and just future for all sentient beings. On this axis, I do think the difference is more marginal than many would expect. I would probably guess that it would be better for the US/the free world to have relatively greater power, although with some caveats (e.g. I'm not sure I trust the CIA very much to have a large amount of control). I think both groups 'as-is', particularly in a nationally securitised 'race', are rather far from the optimal, and this difference is very morally significant. So I think I'm definitely MUCH more concerned than Aschenbrenner is about avoiding a nationally securitised race (also because I'm more concerned with misalignment than I think he is).

Great points, I hadn't thought about the indirect benefits of poor cybersecurity before, interesting!

And yes, your point about considering non-humans is well-taken and I agree. I suppose even on that my guess is liberalism is more on track to a pro-animal future than authoritarianism, even if both are very far from it (but hard to tell).

I suggest editing in a link to the LessWrong linkpost.

"With Islamic terrorism, these involved mass surveillance and detention without trial."

I think Islamist terrorism would be more accurate and less inflammatory. 

Thanks for writing this Gideon. 

I think the risks around securitisation are real and underappreciated, so I'm grateful you've written about them. As I've written about, I think the securitisation of the internet after 9/11 impeded proper privacy regulation in the US, and prompted Google towards an explicitly pro-profit business model. (Although this was not a case of macrosecuritization failure.)

Some smaller points: 

Secondly, its clear that epistemic expert communities, which the AI Safety community could clearly be considered

This is argued for at greater length here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641526

but ultimately the social construction of the issue as one of security is what is decisive, and this is done by the securitising speech act.

I feel like this point was not fully justified. It seems likely to me that whilst rhetoric around AGI could contribute to securitisation, other military/economic incentives could be as (or more) influential.

 What do you think?
