“China doesn’t care about AI safety, so why should we?” This flawed logic pervades American policy and technology circles, offering cover for a reckless race to the bottom as Washington rushes to outpace Beijing in AI development.
According to this logic, regulating AI would risk falling behind in the so-called “AI arms race.” And since China supposedly doesn’t prioritize safety, racing ahead, even recklessly, is the safer long-term bet. This narrative is not just wrong; it’s dangerous.
Ironically, Chinese leaders may hold a lesson for America’s AI boosters: true speed requires control. As China’s top technology official, Ding Xuexiang, put it bluntly at Davos in January 2025: “If the braking system isn’t under control, you can’t step on the accelerator with confidence.” For China’s leadership, safety isn’t a constraint; it’s a precondition.
AI safety has become a policy priority in China. In April, President Xi Jinping chaired a rare top-level study session warning of “unprecedented” risks. China’s national emergency response plan now lists AI safety alongside pandemics and cyberattacks. Regulators require pre-deployment safety assessments for generative AI and recently pulled more than 3,500 non-compliant AI products from the market. In just the first half of this year, China issued more national AI standards than in the previous three years combined. Meanwhile, the volume of technical papers focused on frontier AI safety in China has more than doubled over the past year.
Yet the last time U.S. and Chinese leaders met to discuss AI risks was May 2024. That September, officials from both nations alluded to a second round of talks “at an appropriate time.” But no meeting took place under the Biden administration, and there is even greater uncertainty over whether the Trump administration will pick up the baton. That is a missed opportunity.
China is open to collaboration. In May 2025, it launched a bilateral AI dialogue with the United Kingdom. Esteemed Chinese scientists have contributed to major international efforts, such as the International AI Safety Report backed by 33 countries and intergovernmental organizations (including the U.S. and China) and the Singapore Consensus on global AI safety research priorities.
A necessary first step is to revive the dormant U.S.-China dialogue on AI risks. Without a functioning government-to-government channel, prospects for coordination remain slim. China indicated it was ready to continue the conversation at the end of the Biden administration. The dialogue has already produced a modest but symbolically important agreement: both sides affirmed that human decision-making must retain control over nuclear weapons. This channel holds further potential for progress.
Going forward, discussions should focus on shared, high-stakes threats. Consider OpenAI’s recent classification of its latest ChatGPT Agent as having crossed the “High capability” threshold in the biological domain under the company’s Preparedness Framework. This means the agent could, at least in principle, provide users with meaningful guidance that could facilitate the creation of dangerous biological threats. Washington and Beijing have a vital interest in preventing non-state actors from weaponizing such tools. An AI-assisted biological attack would not respect national borders. Moreover, leading experts and Turing Award winners in both the West and China share concerns that advanced general-purpose AI systems may come to operate outside human control, posing catastrophic and existential risks.
Both governments have already acknowledged some of these risks. President Trump’s AI Action Plan warns that AI may “pose novel national security risks in the near future,” particularly in cybersecurity and in the chemical, biological, radiological, and nuclear (CBRN) domains. Likewise, last September, China’s primary AI safety standards body highlighted the need for AI safety standards addressing cybersecurity, CBRN, and loss-of-control risks.
From there, the two sides could take practical steps to build technical trust between leading standards organizations, such as China’s National Information Security Standardization Technical Committee (TC260) and America’s National Institute of Standards and Technology (NIST).
In addition, industry bodies, such as China’s AI Industry Alliance (AIIA) and the Frontier Model Forum in the U.S., could share best practices on risk management frameworks. The AIIA has formulated “Safety Commitments” that most leading Chinese developers have signed. A new Chinese risk management framework, focused squarely on frontier risks, including cyber and biological misuse, large-scale persuasion and manipulation, and loss-of-control scenarios, was published during the World AI Conference (WAIC) and could help the two countries align.
As trust deepens, governments and leading labs could begin sharing safety evaluation methods and results for the most advanced models. The Global AI Governance Action Plan, unveiled at WAIC, explicitly calls for creating “mutually recognized safety evaluation platforms.” As an Anthropic co-founder noted, a recent Chinese AI safety evaluation report reached findings similar to those in the West: frontier AI systems pose non-trivial CBRN risks and are beginning to show early warning signs of self-replication and autonomous deception. A shared understanding of model vulnerabilities, and of how those vulnerabilities are tested, would lay the groundwork for broader safety cooperation.
Finally, the two sides could establish incident-reporting channels and emergency response protocols. In the event of an AI-related accident or misuse, fast and transparent communication will be essential. A modern equivalent of “hotlines” between senior AI officials in both countries could ensure real-time alerts when models breach safety thresholds or behave unexpectedly. In April, President Xi Jinping explicitly stressed the need for “monitoring, early risk warning, and emergency response” in AI. After any dangerous incident, there should be a pre-agreed plan for how to respond.
Engagement will not be easy; political and technical obstacles are inevitable. But AI risks are global, and so must be the governance response. Rather than using China as a justification for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly. The risks of AI will not wait.