
Recently, I have heard an increasing number of prominent voices mention the risk of AI helping to create catastrophic bioweapons. This comes up in the context of AI safety rather than biosecurity, so anecdotally it seems that people see a significant portion of AI risk as the risk of an AI somehow being instrumental in causing a civilization-threatening pandemic. That said, I have failed to find even a cursory exploration of how much of AI risk consists of AI being instrumental in creating catastrophic bioweapons. Does anyone know of any attempts to quantify the "overlap" between AI risk and biorisk? Or could someone please try to do so?

One reason to quantify this overlap: if bio+AI is used as a primary, or at least prominent, example to the public, it seems useful to have some analysis underpinning such statements. Even showing that bio+AI is actually just a small portion of AI risk could be helpful, so that whoever uses the example can also mention that this is just one of many ways AI could end up harming us. And if it is indeed a large portion of AI risk, that could be stated with a bit more clarity.

Other reasons to have such an analysis might be:

  1. Assisting grantmakers in allocating funds. For example, if there is a large overlap, grantmakers currently investing in AI safety might also want to fund biosecurity interventions likely to help in an "AI-assisted pandemic".
  2. Helping talent decide which problem to work on. It might, for example, be that policy experts worried about AI safety also want to focus on legislation and policy around biosecurity.
  3. Perhaps fostering more cooperation between AI and biosecurity professionals.
  4. Perhaps a quantification here could help both AI safety experts and biosecurity professionals know what types of scenarios to prepare for. For example, it could put more emphasis in AI safety work on preventing AI from becoming too capable in biology (e.g. by removing such training material).
  5. Probably other reasons I have not had time to think about.

Relatedly, and I would be very careful in drawing conclusions from this, I just went through the Metaculus predictions for the Ragnarök question series and found that they add up to 132%. Perhaps this indicates overlaps between the categories, or perhaps it is just an effect of different forecasters answering different questions (there seems to be large variation in how many people have forecast on each question). Assume, for the sake of argument, that the "extra" 32% very roughly represents overlap between the categories. Then, with very little understanding of the topic, I might guess that perhaps half of the 27% biorisk would also resolve as an AI-caused catastrophe, i.e. roughly 13-14%. That shared probability is counted in both the bio and AI categories, so it would reduce the 32% excess (132% - 100%) to about 32% - 14% ≈ 18%. Perhaps the remaining ~18% is overlap between AI and nuclear, and possibly other categories. However, this would mean almost half the AI risk is biorisk. That seems suspiciously high, but it could at least explain why so many prominent voices use the example of AI + bio when talking about how AI could go wrong. Moreover, if all of the extra 32% were indeed overlap with AI, there would be almost no "pure" AI catastrophe risk, which also seems suspicious. Anyway, these are the only numbers I have come across that point towards some kind of overlap between AI and bio.
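
To make the back-of-the-envelope arithmetic above explicit, here is a minimal sketch in Python. The 132% category total and 27% biorisk figure are the rough Metaculus numbers quoted above; the 50% bio/AI overlap fraction is purely my own assumption, not a forecast from anywhere.

```python
# Back-of-the-envelope overlap arithmetic (illustrative numbers only).
# The 132% total and 27% biorisk are the rough Metaculus Ragnarok figures
# quoted above; the 0.5 overlap fraction is a pure assumption.

category_total = 1.32            # sum of the individual category forecasts
excess = category_total - 1.0    # 0.32: apparent double counting across categories

bio_risk = 0.27                  # Metaculus bio-catastrophe forecast
assumed_overlap_fraction = 0.5   # guess: half of biorisk would also resolve as AI-caused

bio_ai_overlap = bio_risk * assumed_overlap_fraction   # ~0.135
remaining_excess = excess - bio_ai_overlap             # ~0.185, left for other overlaps

print(f"Assumed bio+AI overlap:      {bio_ai_overlap:.1%}")
print(f"Excess left for other pairs: {remaining_excess:.1%}")
```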

Thanks for any pointers or thoughts on this!

3 Answers

I strongly agree with this, contra titotal. To explain why, I'll note that there are several disjunctive places where this risk plays out.

First, near-human AGI systems or narrow AI could be misused by sophisticated actors to enhance their ability to create bioweapons. This might increase that risk significantly, but there are few such actors and lots of security safeguards. Bio is hard, and near-human-level AI isn't a magic bullet for making it easy. Narrow AI that accelerates the ability to create bioweapons also accelerates a lot of defensive technologies, and it seems very, very implausible that something an order of magnitude worse than natural diseases would be found. That's not low risk, but it's nothing like half the total risk.

Second, misuse or misalignment of human-level AI systems creating Bostromian speed superintelligences or collective superintelligences creates huge risks, but these aren't specific to biological catastrophes, and bio doesn't seem dominant among them; humanity is vulnerable in so many ways that patching one route seems irrelevant. And third, this is true to a far greater extent for misaligned ASI.

I'm interested in what other paths of attack you think could be more successful than deploying bioweapons (and attacking the survivors). 

Or are you saying that only a massively scaled up superintelligence could pull off extinction, and that if such a thing is impossible, then so is near-term AI x-risk? 

Davidmanheim
In the near term, misuse via bio doesn't pose existential risks, because synthetic bio is fundamentally harder than people seem to assume. Making a bioweapon is very hard, making one significantly worse than previous natural diseases and bioweapons is even harder, and the critical path isn't addressed by most of the capabilities that the narrow AI I expect before AGI could plausibly provide. After that, I think the risk from powerful systems is disjunctive: any of a large number of different things could allow a malign actor to take over, given the effectively unlimited resources that a collective or speed superintelligence enabled by relatively cheap AGI would be able to amass. I don't know exactly how scaled up it needs to be to pose that risk, and perhaps it's far away, but if we're facing a misaligned ASI that wants to kill us, the specific method isn't really the limiting factor.

My current view is that the near- and medium-term overlap between AI risk and biorisk is nearly 100%. Bioweapons and biotechnology seem like the only path for an AI to drive humanity extinct that has any decent chance of working in the short or medium term.

I recently did a deep dive into molecular nanotech (one alternative method that has been proposed), and I think the technology is definitely at least 60 years away, possibly a century or more away, and possibly not even possible. Even with the speedups in research from AGI, I think our enemy would be foolish to pursue this path instead of working on bioweapons, a technology that already exists and has proven devastating in effect. (Note that I do not believe in intelligence explosions or other "godlike AI" hypotheses.)

As someone who does a lot of biorisk work, I disagree that this is the only likely catastrophic risk, as I note in my response, but I even more strongly disagree that this is actually a direct extinction risk: designed diseases that kill everyone in the face of humans actually trying to stop them aren't obviously possible, much less findable by near-human or human-level AI systems.

Of course, combined with systemic fragility, intentional disinformation, and other attack modes enabled by AI, it seems plausible that a determined adversary with tremendous...

Oh wow, that is quite a drastic overlap! Do you by any chance know of any writing on the topic that has convinced you, e.g. why nuclear+AI is not something to worry about?

titotal
I would be open to persuasion on nuclear risk, but it seems like a difficult plan to me. There are only a few nations with sufficient arsenals to trigger a nuclear exchange, and they all require human beings to launch the nukes. I would be interested if someone could make the case for AI+nuclear, though. 
Benevolent_Rain
I am no expert in this, but I can think of an AI directly convincing people with launch access, or deepfakes pretending to be their commander, and probably many other scenarios. What if newly built nukes use AI in their hardware design, and the AI sneaks in a cyber backdoor with which to take control of the nuke?
titotal
There are meant to be a lot of procedures in place to ensure that an order to launch nukes is genuine, and that a nuke can't be launched without the direct cooperation of the head of state and the military establishment. Convincing one person wouldn't be enough, unless that one person was the president, and even then the order may be disobeyed if it comes off as insane. As for the last part, if you get "control of a nuke" while it's sitting in a bunker somewhere, all you can do is blow up the bunker, which doesn't accomplish anything. The most likely scenario seems to be some sort of Stanislav Petrov scenario, where you falsely convince people that a first strike is occurring and that they need to respond immediately. Or that there are massive security holes somewhere that can be overcome.

I think for this purpose we should distinguish between AI misuse and AI misalignment (with the caveat that this is not a perfect distinction, and there might be many more issues, etc.).

AI misuse:

A group of people wants to kill all of humanity (or has other hugely catastrophic goals) and uses AI tools.

I think this is definitely a concern for biorisk, and some of the suggestions you make sound very reasonable. (Also, one should be careful about how to talk about these possibilities in order not to give anyone ideas, e.g. make sure that certain aspects of your research into AI-assisted pandemics don't go viral.) So there is an overlap, but one could also argue that this type of scenario belongs to the field of biorisk. Of course, collaboration with AI policy people could be hugely important (e.g. to make certain kinds of open-source AI illegal without drawing too much attention to it).

My personal guess is that misalignment is a bigger part of catastrophic risk, and it also receives a lot more attention within AI safety work.

AI misalignment:

An AI that is smarter than humans has goals that conflict with those of humans. Humans go extinct sooner or later, because the AI wants to take control or because it is mostly indifferent to human existence (but not because of some programmed-in hatred of humans).

Here, bio could be a concrete attack vector. But there are probably many attack vectors, some of which require more or less capable AIs to pull off. My guess is that bio is at most a small part here, and that focusing on concrete attack vectors might not be that valuable by comparison (although not of zero value).

So, I would say that these types of misalignment worries should be considered "pure" AI catastrophic risks.

Against focusing too much on concrete attack vectors:

Building an AI that is smarter than all of humanity combined and that wants to exterminate humanity is just a bad idea. If we try to predict exactly how the AI will kill us, we will probably be wrong, because the AI is smarter than us. If we try to enumerate possible attack vectors one by one and defend against them, this might slow the AI down by some years (or weeks), but the AI can probably come up with attack vectors we haven't thought of, or invent new technologies. Also, the AI might need multiple attack vectors and technologies, e.g. to ensure its own energy supply. If that kind of AI has acquired enough resources/compute, we will probably lose against it; defending against concrete scenarios might have some benefits, but it seems preferable to focus on not building that kind of AI in the first place.

An analogy I find helpful: If you play chess against Magnus Carlsen, then you don't know in advance which concrete moves he will play, but you know that you will lose.

Comments

Just an observation from the few, excellent, and disparate answers so far: it seems the answer to how much of AI risk overlaps with bio runs the gamut from almost nothing to almost everything. In line with previous thinking about estimates that carry a lot of uncertainty, this points me at a few things:

  • It increases the value of trying to analyse this in more detail, especially since, as I pointed out in the OP, it could have big, action-relevant implications.
  • We cannot write off a significant degree of overlap. With a risk mindset, we should all be concerned that this overlap could be large, even if we personally think it is small.
  • Similarly, we cannot expect biodefense to do all the work of AI safety, even if one personally believes the overlap is near complete.
  • For people using AI+bio as an example of how AI could go wrong, it seems fair for now to continue using this example, perhaps adding when appropriate that it is unclear how much of AI risk is AI+bio - it could be a lot.