Linkpost for an article I wrote for Lawfare today, building on some previous Founders Pledge work on hotlines.
Here is a relevant excerpt:
... At the cutting edge of civilian AI development—so-called frontier models—the risks may also be catastrophic. As leading AI scientists participating in the International Dialogues on AI Safety recently put it in their 2024 Beijing consensus statement, “Unsafe development, deployment, or use of AI systems may pose catastrophic or even existential risks to humanity within our lifetimes.”
Many risk mitigation measures will rightfully focus on prevention. But an effective layered defense builds in multiple safeguards in case the first fails, so prevention must be coupled with mechanisms to respond to and contain incidents if they do occur. (It may be a question of when, not if, as the thousands of entries in the AI Incident Database illustrate.)
Crisis communications tools like hotlines can help leaders manage incidents quickly and at the highest levels. Consider, for example, a near-future scenario in which one of the thousands of autonomous systems the U.S. intends to deploy malfunctions, or loses communications with the rest of its swarm, while operating in the South China Sea; the U.S. military may wish to let the People’s Liberation Army (PLA) know that whatever this system is doing is unintended. Or consider a scenario in which China learns that a third state’s leading civilian AI lab has trained a highly advanced AI system that has recently begun behaving strangely, accumulating resources on the world’s stock markets and attempting to gain access to systems linked to critical infrastructure. In such a situation, China may wish to place a call on a hotline, and the two countries may have only minutes to work together to avert a catastrophe...