The World Is Ill-Prepared for an AI Emergency

December 29, 2025


Imagine waking up to see the internet flickering, card payments failing, ambulances going to the wrong addresses, and emergency broadcasts you’re no longer certain you can rely on. Whether triggered by a model malfunction, criminal use, or an intensifying cyber shock, an AI-fueled crisis can spread across borders rapidly.

In many cases, the first signs of an AI emergency would look like an ordinary outage or security failure. Only later, if at all, would it become clear that AI systems had played a significant role.

Some governments and companies have started to establish guardrails to handle such an emergency. The European Union AI Act, the U.S. National Institute of Standards and Technology risk framework, the G7 Hiroshima process, and international technical standards all seek to prevent harm. Cybersecurity agencies and infrastructure operators also have procedures for hacking attempts, outages, and routine system failures. What’s lacking isn’t the technical manual for fixing servers or restoring networks. It’s the strategy to prevent social panic and a breakdown in trust, diplomacy, and basic communication if AI is at the heart of a rapidly unfolding crisis. 

Stopping an AI emergency is only half the task. The other half of AI governance is preparedness and response. Who determines when an AI incident has turned into an international emergency? Who addresses the public when false messages are flooding people's feeds? Who maintains communication channels between governments if regular lines are disrupted?

Governments can and must develop AI emergency response plans before it’s too late. In forthcoming research based on disaster law and lessons from other global emergencies, I analyze how existing international regulations already include the elements of an AI playbook. Governments already have the legal tools, but now need to agree on how and when to use them. We don’t need new, complex institutions to oversee AI—we just need governments to plan ahead.

How to prepare for an AI emergency

We’ve witnessed this general governance model before. The International Health Regulations enable the World Health Organization to declare a global health emergency and coordinate action. Nuclear accident treaties mandate rapid notification when radiation could spread across borders. Telecommunications agreements remove legal obstacles to quickly activating emergency satellite equipment. Cybercrime conventions set up 24/7 contact points for police forces to cooperate promptly. The lessons demonstrate that pre-agreed triggers, designated coordinators, and swift communication channels save time in an emergency.

An AI emergency requires the same underpinnings. Start with a shared definition. An AI emergency should be an extraordinary event caused by the development, use, or malfunction of AI that poses a risk of severe cross-border harm and exceeds any single country’s ability to handle. Importantly, it must also cover scenarios where AI involvement is only suspected or is one of several possible causes so that governments can act before forensic certainty is established, if it is established at all. Most incidents will never reach that level. Agreeing on the definition in advance helps prevent paralysis in the first crucial hours.

Next, governments need a practical playbook with five components:

1. A common set of triggers and a basic severity scale, so officials know when to escalate from a routine incident to an international alert, including criteria for when AI involvement is credibly suspected rather than conclusively proven.
2. A named global coordinator who can convene quickly, supported by technical experts, law enforcement partners, and disaster specialists.
3. Interoperable incident reporting systems, so countries and companies can exchange essential information in minutes, not days.
4. Crisis communication protocols that rely on authenticated, analog channels such as radio.
5. A clear list of continuity and containment measures, such as slowing down high-risk AI services or switching critical infrastructure to manual control.
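Interoperable incident reporting is the most mechanical of these pieces, and a small sketch shows how little is needed to start. The record format below is purely illustrative: the field names, the three-level severity scale, and the "suspected" involvement flag are assumptions of mine, not any agreed standard.

```python
# Illustrative sketch of a shared incident-report format. All field names
# and enum values here are hypothetical, not an established standard.
import json
from dataclasses import dataclass, asdict
from enum import Enum

class Severity(str, Enum):
    ROUTINE = "routine"                # handled by normal operations
    NATIONAL = "national"              # national incident response engaged
    INTERNATIONAL = "international"    # cross-border alert warranted

class AIInvolvement(str, Enum):
    CONFIRMED = "confirmed"
    SUSPECTED = "suspected"            # credible suspicion, not yet proven
    UNKNOWN = "unknown"

@dataclass
class IncidentReport:
    reporting_country: str             # ISO country code of the reporter
    detected_at_utc: str               # ISO 8601 timestamp of detection
    severity: Severity
    ai_involvement: AIInvolvement
    affected_sectors: list[str]
    summary: str

    def to_json(self) -> str:
        # str-mixin enums serialize as plain strings, so any party
        # can parse the record with a standard JSON library.
        return json.dumps(asdict(self), sort_keys=True)

report = IncidentReport(
    reporting_country="NL",
    detected_at_utc="2025-12-29T06:40:00Z",
    severity=Severity.INTERNATIONAL,
    ai_involvement=AIInvolvement.SUSPECTED,
    affected_sectors=["payments", "emergency-dispatch"],
    summary="Simultaneous payment and dispatch failures; AI system suspected.",
)
print(report.to_json())
```

The point of a pre-agreed schema is exactly the point of the playbook: a report filed with `ai_involvement` set to `suspected` can trigger escalation before forensic certainty exists, rather than waiting for proof.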

Structuring AI emergency preparedness

So, who should oversee these AI emergency preparedness initiatives? My answer: the United Nations.

Placing this system within the UN framework is important for several reasons. One is that an AI emergency won’t respect alliances. A UN-based mechanism offers broader inclusion and reduces duplication among competing coalitions. It provides technical assistance to countries without advanced AI capabilities so the burden isn’t shouldered by a few major powers. It adds legitimacy and accountability. Extraordinary powers must be lawful, proportionate, and subject to review, especially when they involve digital networks used by billions of people.

This international framework needs to be complemented by domestic actions governments can take now. Every country should designate a 24/7 AI emergency contact point. Emergency powers should be reviewed to ensure they cover AI infrastructure. Sector plans should align with basic incident management and business continuity standards. Joint exercises should simulate disinformation campaigns, model failures, and cross-sector outages. Migration to post-quantum cryptography should be prioritized before a hostile attack compels such an update. Governments should also register trusted senders and alert templates so messages can still reach citizens when systems are unstable.

These precautions are necessary. Reported AI-related cyberattacks are rising, and many countries have already faced smaller-scale outages, data manipulation attempts, and disinformation surges that foreshadow what a larger event could look like. Moreover, a rapidly evolving AI failure could combine with today’s hyper-connected infrastructure to create a crisis no single country can tackle alone.

This isn’t a call for a new global super agency. It’s a call to integrate existing elements into a cohesive response. We need an AI emergency playbook that borrows these tools and practices them.

The measure of AI governance will be how we respond on our darkest day. Currently, the world has no plan for an AI emergency—but we can create one. We must build it now, test it, and enshrine it in law with safeguards, because once the next crisis hits, it will be too late.