
Over 1,000 Signatories Call for a Global Pause on Superintelligence Development

A new open letter, led by the non-profit Future of Life Institute, has drawn more than 1,000 public figures to a call to pause the development of superintelligence until there is broad scientific consensus on its safety and strong public support. The letter urges leading AI labs to halt work on systems capable of outperforming humans at nearly all cognitive tasks, often referred to as artificial general intelligence (AGI), until such safeguards are in place.

The signatories represent a wide cross-section of global influence, including AI pioneers Geoffrey Hinton, Yoshua Bengio, and Turing Award winner Andrew Chi-Chih Yao, as well as tech icons like Apple co-founder Steve Wozniak and Virgin founder Richard Branson. The list also includes high-profile political and cultural figures such as Prince Harry, former Trump strategist Steve Bannon, and former Obama National Security Advisor Susan Rice.

The letter warns that the race to build superintelligent systems poses existential risks to humanity, including economic upheaval, loss of human control, erosion of civil liberties, and even the potential for human extinction. Historian Yuval Noah Harari has gone as far as to say that superintelligence could dismantle the “operating system of human civilization.”

The urgency is backed by public sentiment: a recent poll found that 73% of Americans support strong regulation of advanced AI, and 64% agree that superintelligence should not be developed before it is proven safe. “Ninety-five percent of Americans don’t want a race to superintelligence, and experts agree,” said Max Tegmark, president of the Future of Life Institute, summarizing the findings.

Yet the impact of this latest appeal remains uncertain. The same group issued a high-profile call in early 2023 to pause AI development for six months, an effort that had little effect on the industry’s momentum. Major labs like Meta, Google DeepMind, and OpenAI continue to push forward with increasingly powerful models, and some experts believe superintelligence could emerge as early as the late 2020s.

The letter has also highlighted a growing rift within the AI community. On one side are figures like Hinton and Bengio, who express deep concern about existential risks. On the other, technologists like Meta’s chief AI scientist Yann LeCun dismiss these warnings as inconsistent, accusing some critics of advocating caution while simultaneously building similar systems. LeCun argues that current large language models are overhyped and that the field is heading down a dead end, not a path to catastrophe.

Stuart Russell, author of Artificial Intelligence: A Modern Approach, cautions that inflated expectations could lead to a crash in confidence, similar to the dot-com bubble. He clarifies that the group is not calling for a full ban or pause, but rather for robust safety measures to be in place before further development proceeds. Even OpenAI board member Bret Taylor has drawn parallels between today’s AI boom and the early days of the internet, suggesting the current excitement may be overblown.

While figures like Bengio advocate for AI systems that are “inherently incapable of harming humans,” and Prince Harry insists that AI’s future should serve humanity rather than replace it, the debate remains unresolved. As top labs continue to scale up their models, the conversation about the ultimate trajectory of artificial intelligence is only just beginning.