China Rejects International AI Weapons Development Guidelines
China declined to sign an international agreement, supported by over 60 countries including the U.S., aimed at setting ethical guidelines for the use of artificial intelligence in military applications. This decision came during the Responsible Artificial Intelligence in the Military Domain (REAIM) summit in South Korea, where approximately 90 nations participated, but roughly a third did not endorse the non-binding proposal.
AI expert Arthur Herman, senior fellow and director of the Quantum Alliance Initiative with the Hudson Institute, suggested that while China’s refusal to sign the agreement might be attributed to its general reluctance towards multilateral agreements, it’s not necessarily a cause for alarm. He explained that China often views such agreements as attempts to restrict its military capabilities, particularly its AI advancements.
“What it boils down to … is China is always wary of any kind of international agreement in which it has not been the architect or involved in creating and organizing how that agreement is going to be shaped and implemented,” he said. “I think [Beijing sees] all of these multilateral endeavors as ways in which to try and constrain and limit China’s ability to use AI to enhance its military edge.”
Herman highlighted that the summit and the blueprint, supported by numerous nations, aim to ensure human control over AI systems, especially in military and defense contexts. This is crucial given the rapid decision-making capabilities of AI.
“The algorithms that drive [AI systems] depend a lot on how fast they can go,” he said. “[They] move quickly to gather information and data that you then can speed back to command and control so they can then make the decision.
“The speed with which AI moves … that’s hugely important on the battlefield,” he added. “If the decision that the AI-driven system is making involves taking a human life, then you want it to be one in which it’s a human being that makes the final call about a decision of that sort.”
Countries leading in AI development, such as the U.S., have emphasized the importance of maintaining human oversight to prevent accidental casualties and avert machine-driven conflicts.
The summit, co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, was the second of its kind, following a similar gathering in the Netherlands last year. The reasons why China and roughly 30 other countries declined to endorse the blueprint remain unclear, especially considering China’s support for a similar “call to action” at the previous summit.
When asked about the summit during a press conference, Chinese Foreign Ministry spokesperson Mao Ning stated that China had sent a delegation to the event and outlined its AI governance principles. She referenced the “Global Initiative for AI Governance” proposed by Chinese President Xi Jinping, emphasizing China’s systemic approach to AI governance.
While Mao did not explicitly state the reasons behind China’s non-endorsement of the blueprint, she affirmed China’s commitment to collaborating with other nations on AI development for the benefit of humanity.
Herman cautioned that while nations like the U.S. and its allies strive to establish international agreements for safeguarding AI practices in military applications, they are unlikely to effectively prevent nations like China, Russia, and Iran from developing potentially harmful technologies.
“When you’re talking about nuclear proliferation or missile technology, the best restraint is deterrence,” the AI expert explained. “You force those who are determined to push ahead with the use of AI – even to the point of basically using AI as kind of [an] automatic kill mechanism, because they see it in their interest to do so – the way in which you constrain them is by making it clear, if you develop weapons like that, we can use them against you in the same way.
“You don’t count on their sense of altruism or high ethical standards to restrain them, that’s not how that works,” Herman added.
Reuters contributed to this report.