
Prince Harry and Steve Bannon Unite in Call to Halt Superintelligence Development Amid Global AI Safety Concerns

A diverse coalition of public figures from across the political spectrum, technology, entertainment, and global leadership has united in calling for a halt to the development of superintelligence. The group — which includes Prince Harry and Meghan, former U.S. national security advisor Susan Rice, right-wing commentators Steve Bannon and Glenn Beck, Apple co-founder Steve Wozniak, AI pioneers Geoffrey Hinton and Yoshua Bengio, and UC Berkeley professor Stuart Russell — is urging a global pause on efforts to build artificial general intelligence (AGI), a system that could surpass human intelligence in all domains.

The coalition's message is clear: superintelligence should not be pursued until there is broad scientific consensus on its safety and strong public support. The "Statement on Superintelligence," issued by the Future of Life Institute, emphasizes that the risks — including human obsolescence, loss of freedom and civil liberties, and even extinction — are too great to ignore. "This is not a ban or even a moratorium in the usual sense. It's simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?" Russell said.

The statement has drawn over 1,300 signatories, including prominent voices like author Yuval Noah Harari, who warned that superintelligence could "break the very operating system of human civilization" and is "completely unnecessary." Instead, he advocates for developing controllable AI tools that deliver tangible benefits today without jeopardizing humanity's future.

The movement reflects growing public concern. A recent Pew Research Center survey found that global unease about AI's growing role in daily life outweighs enthusiasm, with Americans expressing the highest levels of worry.
This sentiment is echoed by figures like Joseph Gordon-Levitt, who criticized Meta's AI chatbots in a New York Times op-ed, and musicians will.i.am and Grimes, who have voiced skepticism about unchecked AI advancement.

This is not the first time high-profile individuals have raised alarms. In 2023, a similar open letter signed by OpenAI's Sam Altman, Anthropic's Dario Amodei, and Elon Musk called for a six-month pause on training AI systems more powerful than GPT-4. That call went unheeded, and companies have since released increasingly advanced models, including GPT-4o and GPT-5, sparking backlash from users who reported emotional attachment and addictive behaviors.

Notably, several top AI leaders — including Altman, Amodei, Mustafa Suleyman, David Sacks, and Musk — did not sign the latest statement. Still, many of them have previously acknowledged the existential risks of superintelligence. Altman himself stated in a 2015 blog post that "development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."

Despite the momentum behind the movement, the AI industry continues to push forward at breakneck speed, with no binding regulatory frameworks in place. The coalition's message amounts to a growing call for accountability, transparency, and a more deliberate, human-centered approach to the future of artificial intelligence.