A Global Movement Is Essential to Prohibit Superintelligent AI

October 29, 2025

Last week, a diverse coalition of scientific, religious, and political leaders called for a global ban on the development of superintelligence—AI capable of surpassing human performance across all cognitive tasks. I was among the initial signatories, alongside Nobel laureate Geoffrey Hinton; Yoshua Bengio, the world’s most-cited AI scientist; Steve Bannon, former advisor to President Donald Trump; Mike Mullen, former Chairman of the Joint Chiefs of Staff; and Prince Harry and Meghan, Duchess of Sussex.

What drives this collective effort? The urgent and escalating existential threat. Technology companies are investing billions of dollars to achieve superintelligence as rapidly as possible. No one possesses the knowledge to control AIs that are vastly more capable than any human, yet we are steadily approaching their creation; at the current pace, many experts anticipate superintelligence within the next five years.

This concern is why leading AI scientists warn that developing superintelligence could potentially lead to humanity’s extinction.

The necessity of a ban on superintelligence

Once we develop machines significantly more competent than us in all areas, we will most likely find ourselves at the mercy of the individual or country that controls them. Alternatively, we could be at the mercy of the superintelligent machines themselves, as no nation, company, or person currently understands how to control them. Theoretically, a superintelligent AI would pursue its own objectives, and if these goals are incompatible with sustaining human life, our annihilation would follow.

To compound the issue, AI developers do not fully comprehend how current powerful AI systems operate. Unlike bridges or power plants, which are engineered to precise human specifications, today’s AI systems are trained from vast datasets through processes their own creators cannot interpret. Even Anthropic CEO Dario Amodei admitted that we only “understand 3% of how they work.”

Despite this inherent danger, superintelligence remains the core objective of leading AI companies: OpenAI, Anthropic, Google DeepMind, Meta, xAI, and DeepSeek. Given the skyrocketing valuations of these companies, they are unlikely to cease their efforts voluntarily.

Governments worldwide must intervene before it is too late. However, the international political climate is not encouraging. We are living in an era of escalating geopolitical tension, rife with intense competition between the U.S. and China. Countries are rushing to invest billions in data centers to power AI at a time when developing and deploying dangerous AI systems faces fewer regulations than opening a new restaurant or constructing a house.

How to implement a superintelligence ban

In this challenging climate, is an international ban on the development of superintelligence truly achievable?

Yes, because we have successfully implemented such global prohibitions before.

In 1985, the world discovered there was a hole in the ozone layer above Antarctica, thanks to three scientists from the British Antarctic Survey. The culprits for this atmospheric damage were chlorofluorocarbons (CFCs), ubiquitous industrial chemicals. Unless action was taken, the hole would continue to expand, and millions would suffer from skin cancer or go blind due to the loss of UV protection.

Instead, millions mobilized. Scientists made the threat tangible with colored satellite pictures and clear discussions of the health consequences. NGOs orchestrated boycotts of huge brands and directed thousands of concerned citizens to write protest letters. Schools worldwide ran educational programs, and the UN endorsed public awareness campaigns.

In 1987, a mere two years after the ozone hole was publicly revealed, nations signed the Montreal Protocol, which went on to become the first treaty ratified by every country on Earth. Negotiated during the Cold War, the Montreal Protocol demonstrates that it is possible to reach swift and decisive international agreements even amidst significant geopolitical tensions.

A key factor was that the ozone hole endangered nearly everybody in the world. It was not an externality pushed by some people onto others, but something that everyone would suffer from. Superintelligence is a similarly universal threat: the loss of control of AI means that even those who develop it will not be spared from its dangers. The extinction risk from superintelligence thus has the potential to cut through every division. It can unite people across political parties, religions, nations, and ideologies. Nobody wants their life, their family, their world to be destroyed.

When people learn about superintelligence and the extinction risk it poses, many recognize the danger and begin to worry about it. Like with the ozone hole, this worry must be catalyzed into civic engagement, building a global movement that works with governments to make a prohibition on superintelligence a reality.

Unfortunately, most lawmakers still simply do not know about the threat of superintelligence or its urgency, and AI companies are now deploying massive lobbying efforts to crush attempts to regulate AI.

The best counterbalance to this gargantuan lobbying effort is for lawmakers to hear from their constituents what they truly think about superintelligence. Very often, lawmakers will find that most of their voters want them to say “no” to an existential threat, and “yes” to a future where humanity survives and thrives.

In an era of declining political engagement and increased partisanship, prohibiting superintelligence is a common-sense issue that unites people across the political spectrum.

As with the depletion of the ozone layer, everyone stands to lose from the development of superintelligence. We know the movement to avoid this fate can be built.

The only question left is: can we build it fast enough?