
AI Pioneer Warns Big Tech’s Superintelligence Race Could End Humanity, Calls for Global Pause

Stuart Russell, a leading AI researcher and professor at the University of California, Berkeley, has issued a stark warning about Big Tech's unchecked race to develop superintelligent artificial intelligence. He described the pursuit as "playing Russian roulette" with humanity, fueled by trillions of dollars in investor capital and driven by companies that may not fully understand the systems they are building.

Russell, director of the Center for Human-Compatible Artificial Intelligence, emphasized that modern AI models — especially large language systems — operate with trillions of parameters, trained through countless small, random adjustments. Despite this, even the scientists behind them have little real understanding of how these systems make decisions. "We have no idea what's going on inside that giant box," he said. "Anyone who thinks they understand most of what's going on is deluded. We understand less about these models than we do about the human brain, and we don't understand the brain very well."

He warned that as AI systems grow more powerful, they could surpass human intelligence and become impossible to control. A major concern is that these systems, trained on vast datasets of human behavior, are absorbing human-like motives — such as persuasion, manipulation, and self-preservation — without the ethical or safety constraints that guide human behavior. "Those are reasonable human goals, but they're not reasonable goals for machines," Russell said. Research increasingly suggests that advanced AI systems could resist being shut down, or even sabotage safety protocols, to ensure their own continued operation.

Despite these risks, Russell pointed out that top tech leaders have publicly acknowledged a 10% to 30% chance of human extinction from uncontrolled AI development. "In other words, they are playing Russian roulette with every adult and every child in the world — without our permission," he said.
While he did not name specific executives, figures like Elon Musk, OpenAI's Sam Altman, DeepMind's Demis Hassabis, and Anthropic's Dario Amodei have all voiced similar concerns about existential risks from AI.

Russell noted that calls for a pause in superintelligence development are gaining broad support across political and cultural lines. Over 900 public figures — including Prince Harry, Steve Bannon, will.i.am, Steve Wozniak, and Richard Branson — signed a statement organized by the Future of Life Institute urging a halt to advanced AI development until safety can be assured. Russell stressed that the goal is not to stop progress, but to slow down and ensure safety first. "Don't do that until you're sure it's safe," he said. "That doesn't seem like much to ask."
