TLDR: There is growing concern about the unique risks posed by the convergence of artificial intelligence and biosecurity. This post surveys the current state of research in this space and suggests a comprehensive research agenda encompassing different aspects of this convergence, analogous to posts that outline important priorities in the AI and biosecurity spaces. The intention is to give those working in relevant areas a sense of the important questions to work on, and to invite discussion on what such an agenda should be. This agenda is tentative, and comments on it are very welcome so it can be iterated on.
Note on information hazards
The main hesitance in publishing this post was the potential for information hazards. However, upon some discussion and reflection, the decision was made to publish. For one, as the post discusses, a good deal of research on this issue is already public, and this post does not add to that information; where possible, it summarizes existing work without raising such hazards. This post is instead concerned with asking how to fashion a research agenda to reduce risks in this space. All research discussed here is intentionally public-access and reasonably well-covered. I would welcome comments on this as well, as I know there are some concerns about touching this space with reference to information hazards.
The Current State of Research
Examining risks that intersect with one another is, in a sense, a decades-old academic and policy project. In the current landscape, multiple think tanks have commissioned research on ‘converging risks’; a well-known example is the Converging Risks Lab at the Council on Strategic Risks, which focuses on risk convergence in the climate, nuclear, and bio spaces.
There is also a growing body of work directly on AI-bio convergence. In a fantastic paper from three years ago titled “Assessing the Risks Posed by the Convergence of Artificial Intelligence and Biotechnology”, John T. O'Brien and Cassidy Nelson evaluate the utility of different risk assessment frameworks for analyzing the risk landscape emerging at the convergence of AI and biosecurity. Defining convergence as “the technological commingling between the life sciences and AI such that the power of their interaction is greater than the sum of their individual disciplines”, the paper surveys various potential risks posed by this intersection, ranging from AI-assisted identification of virulence factors to in silico design of novel pathogens. These developments could increase the risk of deliberate or accidental high-consequence biological events.
In a presentation two years ago titled “Cyber-AI-Bio Convergence”, Eleonore Pauwels, Director of the Anticipatory Intelligence Lab at the Wilson Center, focuses primarily on the applications of deep learning in genomics and on cyber-vulnerabilities in repositories of high-risk biological data. The latter concern has occupied much of the research in this space, including a paper by Richardson et al. on ‘cyberbiosecurity’, which evaluates cyber threats to bio-infrastructure and builds on the earlier work of Murch et al.
Last October, in the run-up to the BWC Review Conference, US Representative Anna G. Eshoo sent a letter to the National Security Advisor and the OSTP, focusing on the “dual-use harm that wholly open-sourced artificial intelligence (AI) models can have with regard to biosecurity. The open-source nature of dual-use AI models coupled with the declining cost and skills required to synthesize DNA and the current lack of mandatory gene synthesis screening requirements for DNA orders significantly increase the likelihood of the misuse of such models. I urge the Administration to include the governance of dual-use, open-source AI models in its upcoming discussions with our co-signatories at the Ninth Review Conference of the Biological Weapons Convention (BWC) and to investigate methods of governance such as mandating the use of application programming interfaces (APIs).”
Multiple articles in the Bulletin of the Atomic Scientists in the last few months have pointed to ways in which developments in artificial intelligence may be accelerating biological risks. Converging risks were also mentioned in a recent UNDRR report on existential risk and rapid technological change.
In conversations with colleagues and friends in this community, I am also aware of multiple research foundations and institutes which have tentative plans to commission research on AI-bio convergence.
Possible Directions for Research
Different kinds of convergence
I find it helpful to distinguish between two categories of convergence risks:
- Technological convergence
- Political convergence
Technological convergence can be defined in the same way that JT and Cassidy define convergence generally: as encompassing interactions between technological developments in biosecurity and AI that create unique benefits and risks of their own. Indeed, almost every research paper, article, and political statement I have been able to find on the issue is concerned with technological convergence.
Political convergence, as I see it, can be defined as encompassing situations where developments in biotechnology and artificial intelligence change the political environment such that one has indirect effects on accentuating risks from the other. Here are some possible examples of political convergence:
- The development of artificial intelligence systems which make it easier to craft disinformation and deepfakes may lead to increased misperceptions on the international stage, reduce the possibility of successful attribution, and hence increase risks from deliberate biological attacks.
- Misinformation driven by AI systems leads to a breakdown in adherence to public health guidance, increasing vulnerability to greater damage from natural, accidental, or deliberate biological events.
- Increased competition between the US and China on AI development pushes other states into further irrelevance on the international stage, incentivizing adversarial states and non-state actors to develop and leverage biological (or indeed nuclear) arsenals as a means to retain power.
This kind of convergence could also apply to situations involving only conventional and nuclear weapons. For instance, one can imagine that, as developments in artificial intelligence come to be perceived as sufficiently high-stakes by relevant actors, one actor may launch an attack on another to set back its AI development, as the actors find themselves in an all-or-nothing race dynamic. (There are multiple posts on the Forum that discuss the possibility of such a race dynamic.)
While the political aspects of convergence are much ‘fuzzier’ than the technological aspects, they may be vital in drawing up an overall risk profile of the convergence of AI and biosecurity, as well as a factor in drafting recommendations on developments in biotechnology and artificial intelligence.
Relevant Questions
While I hope to expand on each question and sub-question on this list in future posts, it is helpful to pitch some important questions for further inquiry here, as they set up good initial ground for discussion. These questions can then be expanded, iterated on, or even completely reformulated after feedback and further reflection. Currently, I find the following questions to be of especially high importance:
- Requisite Training: Who is best placed to work on this issue? Do those working on biosecurity and those working on artificial intelligence have distinct comparative advantages? How transferable are skills from the domain of AI policy to biosecurity policy?
- Estimating Development Pace: How do the various estimates of the ‘takeoff speed’ of advanced AI affect the biosecurity landscape? How much of the development in biotechnology is likely to be driven by developments in artificial intelligence?
- Risk Assessment: Building on the work of JT and Cassidy (as well as Koblentz and others), how can one craft analytically helpful risk assessments of AI and bio convergence? Would it be helpful to have separate frameworks for technological and political convergence?
- Strategy: How does the intersection of biotechnology and artificial intelligence affect strategy dynamics between leading biotech and AI firms? On the international stage, how do these intersections affect the security dynamics between states?
- Governance: How does AI-bio convergence affect the governance agenda for each of these domains? Is it possible to craft recommendations that improve the governance of both domains holistically?
- Convergence Defense: Just as significant research is dedicated to creating AI systems and biosystems that provide effective defenses against misuse within each domain, how can one do the same in the context of convergence? What are the most promising developments in AI that would have positive effects on biotechnology development? To what extent is each of these applications and interactions dual-use, and what is the risk-benefit balance of each?
Parting Thoughts
This research agenda is best seen as the rough thoughts of someone interested in biosecurity and AI security policy. It is likely to look quite different after further discussions with members of these communities, and after further reflection and experience of my own in both domains. I hope it is helpful as a starting point for others interested in these areas. The aim is to publish a much more comprehensive version of this agenda next month after feedback. Thank you in advance for your comments!
