
There are two main areas of catastrophic or existential risk which have recently received significant attention: biorisk, arising from natural sources, biological accidents, and biological weapons; and artificial intelligence, arising from detrimental societal impacts of deployed systems, from incautious or intentional misuse of highly capable systems, and from direct risks posed by agentic AGI/ASI. The two have been compared extensively in research, and the comparison has even directly inspired policy. Comparisons are often useful, but in this case, I think the disanalogies are much more compelling than the analogies. Below, I lay these out piecewise, attempting to keep the pairs of paragraphs, describing first biorisk and then AI risk, parallel to each other.

While I think the disanalogies are compelling, comparison can still be useful as an analytic tool, provided we keep in mind that the ability to learn lessons directly from biorisk and apply them to AI is limited by the wide array of disanalogies laid out below. (Note that this post does not discuss the interaction of these two risks, which is a critical but separate topic.)

Comparing the Risk: Attack Surface

Pathogens, whether natural or artificial, have a fairly well-defined attack surface: the hosts' bodies. Human bodies are largely static targets, are the subject of massive research effort, and have undergone eons of adaptation that make them more or less defensible; our ability to fight pathogens is increasingly well understood.

Risks from artificial intelligence, on the other hand, have a nearly unlimited attack surface against humanity, including not only our deeply insecure but increasingly vital computer systems, but also our bodies, our social, justice, political, and governance systems, and our highly complex, interconnected, and poorly understood infrastructure and economic systems. Few of these are understood to be robust, the classes of possible failure are manifold, and the systems themselves were neither adapted nor constructed for resilience to attack.

Comparing the Risk: Mitigation

Avenues for mitigating the impacts of pandemics are well explored, and many partially effective systems are in place. Global health, in various forms, is funded at on the order of ten trillion dollars yearly, much of which has at times been directly refocused on fighting infectious disease pandemics. Accident risk with pathogens is a major area of focus, and while existing measures are manifestly insufficient to stop all accidents, decades of effort have greatly reduced the rate of accidents in laboratories working with both clinical and research pathogens. Biological weapons are banned internationally; breaches of the treaty are well understood to be unacceptable norm violations, and have been limited to a few small and unsuccessful attempts in past decades.

The risks and mitigation paths for AI, both from societal impacts and from misuse, are poorly understood and almost entirely theoretical. Recent efforts like the EU AI Act have unclear impact. The ecosystem for managing these risks is growing quickly, but at present it likely includes no more than a few thousand people, with optimistically a few tens of millions of dollars of annual funding, and it has no standards or clarity about how to respond to different challenges. Accidental negative impacts of current systems, both those poorly vetted or untested and those developed with safety in mind, are more common than not, and the scale of the risk is almost certainly increasing far faster than the response efforts. There are no international laws banning the risky development or intentional misuse of dangerous AI systems, much less norms of caution or against abuse.

Comparing the Risk: Standards

A wide variety of mandatory standards exist for disease reporting, data collection, tracking, and response. The bodies which receive the reports, at both the national and international level, are well known. There are also clear standards for safely working with pathogenic agents, which are largely effective when followed properly, along with weak requirements to follow those standards not only where known dangerous agents are used but even where the danger is speculative, though these are often ignored. While all of this could be more robust, improvements are on policymakers' agendas, and in general, researchers comply with risk-mitigation protocols because doing so is aligned with their personal safety.

In AI, it is unclear what should be reported, what data should be collected about incidents, and whether firms or users need to report even admittedly worrying incidents. There is no body in place to receive or handle reports. There are no standards in place for developing novel risky AI systems, and the safeguards that do exist are admitted to be insufficient for the types of systems the developers say they are actively trying to create. There is no requirement to follow even these partial safeguards, and prevailing norms are opposed to doing so. Policymakers are conflicted about whether to put any safeguards in place, and many researchers actively oppose attempts to do so, dismissing the claimed dangers as absurd or merely theoretical.

Conclusion

Attempts to build safety systems are critical, and different domains require different types of systems, different degrees of caution, and different conceptual models appropriate to the risks being mitigated. At the same time, the disanalogies listed here aren't in and of themselves reasons that similar strategies cannot sometimes be useful, once the limitations are understood. For that reason, the disanalogies should be a reminder of, and a caution against, careless analogizing, not a reason on their own to reject parallel approaches in the two domains.

Comments

This makes sense to me, good writeup!

Thanks for drawing this line between biorisk and AI risk.

Somewhat related: I often draw parallels between threat models in cyber security and certain biosecurity questions such as DNA synthesis screening. After reading your write-up, these two seem much more closely related than biorisk and AI risk, and I'd say cyber security is often a helpful analogy for biosecurity in certain contexts. Sometimes biosecurity intersects directly with cyber security, namely when critical information (like DNA sequences of concern) is stored digitally. Would be interested in your opinion.

I think there are useful analogies between specific aspects of bio, cyber, and AI risks, and it's certainly the case that when the biorisk is based on information security, it's very similar to cybersecurity, not least in that it requires cybersecurity! And the same is true for AI risk; to the extent that there is a risk of model weights leaking, this is in part a cybersecurity issue.

So yes, I certainly agree that many of the dissimilarities with AI are not present if analogizing to cyber. However, more generally, I'm not sure cybersecurity is a good analogy for biorisk, and have heard that computer security people often dislike the comparison of computer viruses and biological viruses for that reason, though they certainly share some features.

Executive summary: Despite frequent comparisons between biorisk and AI risk, the disanalogies between these two areas of catastrophic or existential risk are much more compelling than the analogies.

Key points:

  1. Pathogens have a well-defined attack surface (human bodies), while AI risks have a nearly unlimited attack surface, including computer systems, infrastructure, and social and economic systems.
  2. Mitigation efforts for pandemics are well-funded and established, with international treaties and norms, while AI risk mitigation is poorly understood, underfunded, and lacks clear standards or laws.
  3. Disease reporting and data collection standards exist for biorisk, along with protocols for safely working with pathogens, while AI lacks reporting standards, a central body to handle reports, or requirements to follow safety standards.
  4. Despite the disanalogies, comparing biorisk and AI risk can still be a useful analytic tool, as long as the limitations of direct comparisons are understood.

This comment was auto-generated by the EA Forum Team.
