As part of our Governance Recommendations Research Program, Convergence Analysis has compiled a first-of-its-kind report summarizing the state of the AI regulatory landscape as of May 2024. We provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we discuss the relevant context behind each topic and conduct a short analysis.
This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. Our mission is to meaningfully contribute to the advancement of critical & foundational governance policies that will serve to mitigate future risk from AI systems.
You can read the full report on our website: 2024 State of AI Regulatory Landscape
Links to Report Sections
- Structure of AI Regulations
- AI Evaluation & Risk Assessments
- AI Model Registries
- AI Incident Reporting
- Open-Source AI Models
- Cybersecurity of Frontier AI Models
- AI Discrimination Requirements
- AI Disclosures
- AI and Chemical, Biological, Radiological, & Nuclear Hazards
Report Introduction
In the last decade, a growing expert consensus has argued that advanced AI poses numerous threats to society. These threats include widespread job loss, algorithmic bias, increasingly convincing misinformation and disinformation, social manipulation, cybersecurity attacks, and even catastrophic and existential threats from AI-engineered chemical and biological weapons.
Many are urgently calling for legislation and regulation focused on AI to reduce these threats, and governments are responding. In the last year, the US Executive Branch, the People’s Republic of China, and the European Union have enacted hundreds of pages of directives, legislation, and regulation focused on AI and the risks it currently poses and will pose in the near future. In this report, we’ve chosen to focus primarily on these three bodies for a comparative analysis of current regulations. These three aren’t the only examples of existing AI governance efforts, but they are the most prominent and globally influential, with jurisdiction over nearly all leading AI labs and AI infrastructure.
Designing and enacting future governance to tackle the challenges of AI will require a thorough understanding of existing governance: its scope, its strengths, its gaps, and its flaws. To our knowledge, there isn't currently a detailed comparative analysis of these pieces of governance, nor a topic-by-topic breakdown of their scope and content. In this report, we hope to fill those gaps and provide a solid foundation for future governance recommendations.
We start with an overview of different ways to structure AI policy and how different methods of classifying AI technologies influence the scope and shape of legislation. Then, we’ll proceed topic by topic: we’ll introduce a specific topic of AI governance, explore its context and why it warrants legislation, and then survey the existing US, EU, and Chinese governance on that topic. We’ll conclude each section with our analysis of the current policy, identifying gaps and opportunities, and discuss our policy expectations for the coming 1-5 years.
This report is primarily meant to be read on a topic-by-topic basis, so that it can serve as a resource for individuals looking to better understand specific topics in AI regulation. It is designed to be consumed in smaller portions rather than read in its entirety in a single session. As this report will gradually become outdated, we also suggest that readers view our most recently updated reports on our website.
We hope that this report provides a firm foundation and reference for policymakers, thought leaders, and other parties interested in getting up to speed on the current state of AI governance.