Global Coalition Urges Binding AI Red Lines by End of 2026 to Prevent Catastrophic Risks
More than 200 former heads of state, diplomats, Nobel laureates, AI experts, and scientists, together with leaders from over 70 organizations, have joined forces to issue a global call for binding “red lines” that artificial intelligence must never cross. The initiative, known as the Global Call for AI Red Lines, urges governments to reach an international political agreement on these boundaries by the end of 2026. Proposed red lines include barring AI from impersonating humans, self-replicating, or making autonomous decisions in critical domains such as warfare or governance.

Signatories include Geoffrey Hinton, the British-Canadian computer scientist known as the “godfather of deep learning”; OpenAI co-founder Wojciech Zaremba; Anthropic CISO Jason Clinton; and Google DeepMind researcher Ian Goodfellow. The effort is led by the French Center for AI Safety (CeSIA), The Future Society, and UC Berkeley’s Center for Human-Compatible Artificial Intelligence.

Charbel-Raphaël Segerie, executive director of CeSIA, emphasized the urgency of the moment during a press briefing: “The goal is not to react after a major incident occurs… but to prevent large-scale, potentially irreversible risks before they happen.” He added, “If nations cannot yet agree on what they want to do with AI, they must at least agree on what AI must never do.”

The announcement comes ahead of the 80th United Nations General Assembly high-level week in New York, where Nobel Peace Prize laureate Maria Ressa referenced the initiative in her opening remarks, calling for global accountability to end Big Tech’s unchecked power.

Some regional frameworks already exist, such as the European Union’s AI Act, which bans certain high-risk AI applications, and a bilateral US-China agreement to keep nuclear weapons under human rather than AI control. But there is still no comprehensive global consensus on limits for AI.

Niki Iliadis, director for global governance of AI at The Future Society, stressed that voluntary commitments from tech companies are insufficient. “Responsible scaling policies made within AI firms fall short for real enforcement,” she said. “We need an independent global institution with real authority—‘with teeth’—to define, monitor, and enforce these red lines.”

Stuart Russell, a professor of computer science at UC Berkeley and a leading AI safety researcher, argued that the industry must adopt a fundamentally safer approach. “They can comply by not building AGI until they know how to make it safe,” he said. “Just as nuclear power developers didn’t build reactors without understanding how to prevent meltdowns, the AI industry must choose a path that builds safety from the start—and we must have proof they’re doing it.”

Russell also dismissed the idea that red lines stifle innovation or economic growth. “You can have AI for economic development without having AGI that we don’t know how to control,” he said. “The supposed trade-off—either accept dangerous AI or give up progress—is nonsense.”
