This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race for Machine Superintelligence. Consider subscribing to stay up to date with my work.
An influential congressional commission is calling for a militarized race to build superintelligent AI based on threadbare evidence
The US-China AI rivalry is entering a dangerous new phase.
Earlier today, the US-China Economic and Security Review Commission (USCC) released its annual report, with the following as its top recommendation:
Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task.
As someone observed on X, it’s telling that they didn’t call it an “Apollo Project.”
One of the USCC Commissioners, Jacob Helberg, tells Reuters that “China is racing towards AGI ... It's critical that we take them extremely seriously.”
But is China actually racing towards AGI? Big, if true!
The report clocks in at a cool 793 pages with 344 endnotes. Despite this length, there are only a handful of mentions of AGI, and all of them are in the sections recommending that the US race to build it.
In other words, there is no evidence in the report to support Helberg’s claim that "China is racing towards AGI.”
Nonetheless, his quote goes unchallenged into the 300-word Reuters story, which will be read far more than the 800-page document. It has the added gravitas of coming from one of the commissioners behind such a gargantuan report.
I’m not asserting that China is definitively NOT rushing to build AGI. But if there were solid evidence behind Helberg’s claim, why didn’t it make it into the report?
Helberg has not replied to a request for comment.
As the report notes, the CCP has long expressed a desire to lead the world in AI development. But that's not the same thing as deliberately trying to build AGI, which could have profoundly destabilizing effects, even if we had a surefire way of aligning such a system with its creators' interests (we don't).
I was hoping the report would marshal the strongest evidence for Helberg’s claim, but I found remarkably little analysis about China’s AI intentions, beyond summaries of the country’s desire to develop the industry to be robust to American containment efforts.
What China has said about AI
In July 2017, China’s State Council published an important document called “A Next Generation Artificial Intelligence Development Plan,” which was translated by the New America think tank. The plan serves as a blueprint for the country’s AI industrial policy and includes the often-quoted goal of leading the world in AI by 2030.
Does this mean creating a digital superintelligence that will permanently alter the global balance of power?
Not exactly.
Here’s how the goal is introduced:
Third, by 2030, China’s AI theories, technologies, and applications should achieve world-leading levels, making China the world’s primary AI innovation center, achieving visible results in intelligent economy and intelligent society applications, and laying an important foundation for becoming a leading innovation-style nation and an economic power.
The document includes targets for an AI industry valued at 1 trillion RMB by 2030 (about $190 billion in today's dollars). For context, Statista projects the 2030 global AI market to be $827 billion, with the US at $224 billion and China at $155 billion.
In other words, China's 2030 targets from 2017 are only a bit more optimistic than current market projections — and would actually fall short of global dominance. These numbers suggest normal industrial growth ambitions, not the kind of revolutionary technological breakthrough implied by AGI. (Some AI forecasters think AGI could drive annual per capita GDP growth well above 100%.)
Also in the plan is the aspiration that China “will have constructed more comprehensive AI laws and regulations, and an ethical norms and policy system.”
A few days ago, Joe Biden and Xi Jinping met in Lima, Peru, where the Chinese president reportedly called for “more dialogue and cooperation” and discussed AI as a “global challenge” in the same vein as climate change.
And in July, the CCP released a document that China AI expert Matt Sheehan said is “the clearest indication we've seen that concerns about AI safety have reached top CCP leadership, and that they intend to take some action on this.” Sheehan has previously written that “Beijing is leading the way in AI regulation,” something Anthropic’s policy chief has also acknowledged.
Obviously, we should take all of this with a grain of salt. World leaders have an incentive to exaggerate their willingness to play ball and act in the global interest, and the significance of Chinese AI regulations isn’t totally clear. But policymakers should at least be aware of the large gap between what China says and does when it comes to AI, and what hawks assert the country is doing or planning (especially when they don’t cite evidence).
Only one superpower has a government commission publicly calling for a militarized race to build superintelligent AI (with no plan for how to control it), and it’s not China.
Revealing technical errors
There are also some indications that the report authors were a bit out of their depth when it comes to AI.
The report repeatedly misidentifies basic technical concepts. It refers to “ChatGPT-3” multiple times, despite no such product existing — ChatGPT launched using GPT-3.5, an improved version of GPT-3. When comparing model performance, the authors confuse ChatGPT (an interface) with the underlying models like GPT-3.5 and GPT-4. These aren't just semantic distinctions when you're explicitly comparing the capabilities of different AI systems.
The confusion runs deeper. The report claims “OpenAI, a closed model, cut off China's access to its services” — but OpenAI, you might realize, is a company, not a model. It also states that “Generative AI models can transmit algorithms into text, images, audio, video, and code.” This appears to be a garbled paraphrase of a McKinsey definition (itself not particularly precise) about AI generating different types of content.
These may seem like nitpicks, but they reveal a concerning lack of technical literacy in a report meant to guide national AI policy. And speaking as someone who worked at McKinsey, it's not where I'd go for technical definitions of AI.
Most tellingly, the definition they offer for AGI has problems that don’t require any technical expertise to catch:
AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task.
Is AGI just something that is “as good” as humans or something that “surpasses” the smartest of us? This isn’t some obscure definition buried deep in the report. It’s literally the second sentence in their top recommendation. It’s also the goal the authors think the US should mobilize a wartime effort to meet. Taken as written, it’s not clear what it would even mean to achieve it! (Setting aside the usual difficulty of actually defining and measuring AGI.)
Conclusion
We’ve seen this all before. The most hawkish voices are amplified and skeptics are iced out. Evidence-free claims about adversary capabilities drive policy, while contrary intelligence is buried or ignored.
In the late 1950s, Defense Department officials and hawkish politicians warned of a dangerous “missile gap” with the Soviet Union. The claim that the Soviets had more nuclear missiles than the US helped Kennedy win the presidency and justified a massive military buildup. There was just one problem: it wasn’t true. New intelligence showed the Soviets had just four ICBMs when the US had dozens.
Now we're watching the birth of a similar narrative. (In some cases, the parallels are a little too on the nose: OpenAI’s new chief lobbyist, Chris Lehane, argued last week at a prestigious DC think tank that the US is facing a “compute gap.”)
The fear of a nefarious and mysterious other is the ultimate justification to cut any corner and race ahead without a real plan. We narrowly averted catastrophe in the first Cold War. We may not be so lucky if we incite a second.
If you enjoyed this post, please subscribe to The Obsolete Newsletter. You can also find my accompanying Twitter thread here.
Copied from my LW comment, since this is probably more of an EAF discussion:
This is really important pushback. This is the discussion we need to be having.
Most people who are trying to track this believe China has not been racing toward AGI up to this point. Whether they embark on that race is probably being determined now - and based in no small part on the US's perceived attitude and intentions.
Any calls for racing toward AGI should be closely accompanied with "and of course we'd use it to benefit the entire world, sharing the rapidly growing pie". If our intentions are hostile, foreign powers have little choice but to race us.
And we should not be so confident we will remain ahead if we do race. There are many routes to progress other than sheer scale of pretraining. The release of DeepSeek r1 today indicates that China is not so far behind. Let's remember that while the US "won" the race for nukes, our primary rival had nukes very soon after - by stealing our advancements. A standoff between AGI-armed US and China could be disastrous - or navigated successfully if we take the right tone and prevent further proliferation (I shudder to think of Putin controlling an AGI, or many potentially unstable actors).
This discussion is important, so it needs to be better. This pushback is itself badly flawed. In calling out the report's lack of references, it provides almost none itself. Citing a 2017 official statement from China seems utterly irrelevant to guessing their current, privately held position. Almost everyone has updated massively since 2017. (edit: It's good that this piece does note that public statements are basically meaningless in such matters.) If China is "racing toward AGI" as an internal policy, they probably would've adopted that recently. (I doubt that they are racing yet, but it seems entirely possible they'll start now in response to the US push to do so - and their perspective on the US as a dangerous aggressor on the world stage. But what do I know - we need real experts on China and international relations.)
Pointing out the technical errors in the report seems somewhere between irrelevant and harmful. You can grasp very little of the technical details and still understand that AGI would be a big, big deal if real — and the many experts predicting short timelines could be right. Nitpicking the technical expertise of people who are essentially probably correct in their assessment just sets a bad tone of fighting/arguing instead of having a sensible discussion.
And we desperately need a sensible discussion on this topic.
Pasted from LW:
Hey Seth, appreciate the detailed engagement. I don't think the 2017 report is the best way to understand what China's intentions are WRT AI, but there was nothing in the report to support Helberg's claim to Reuters. I also cite multiple other sources discussing more recent developments (with the caveat in the piece that they should be taken with a grain of salt). I think the fact that this commission was not able to find evidence for the "China is racing to AGI" claim is actually pretty convincing evidence in itself. I'm very interested in better understanding China's intentions here and plan to deep dive into it over the next few months, but I didn't want to wait until I could exhaustively search for the evidence that the report should have offered while an extremely dangerous and unsupported narrative takes off.
I also really don't get the error pushback. These really were less technical errors than basic factual errors and incoherent statements. They speak to a sloppiness that should affect how seriously the report is taken. I'm not one to gatekeep AI expertise, but I don't think it's too much to expect a congressional commission whose top recommendation is to launch a militaristic AI arms race to have SOMEONE read a draft who knows that ChatGPT-3 isn't a thing.