Hunger Strike Outside Anthropic Demands Pause on AGI Race Amid Growing AI Safety Concerns
On his 17th day without food, Guido Reichstadter said he felt fine, if slightly slower. Since August 31, he has stood outside Anthropic's San Francisco headquarters each day from 11 a.m. to 5 p.m., holding a chalkboard that reads "Hunger Strike: Day 15," calling on the AI company to halt its pursuit of artificial general intelligence, or AGI: the hypothetical point at which AI matches or exceeds human cognitive abilities.

Reichstadter sees AGI as an existential threat, one that companies are rushing toward without adequate caution. "Trying to build AGI—human-level or beyond systems, superintelligence—is the goal of all these frontier companies," he told The Verge. "And I think it's insane. It's risky. Incredibly risky. And I think it should stop now."

He views his hunger strike as the most direct way to force AI leaders to confront the dangers he believes they are ignoring. He cited a 2023 interview in which Anthropic CEO Dario Amodei estimated a 10% to 25% chance of a catastrophe on the scale of human civilization. Reichstadter dismissed the idea that companies can simply be "responsible custodians" of such technology as a self-serving myth, and he believes those who understand the risks, especially those working in the industry, have a moral duty to act.

"I've got two kids," he said. "I'm trying to fulfill my responsibility as an ordinary citizen who respects the lives and wellbeing of others. I'm not asking for a miracle. I'm asking for common sense."

Anthropic did not respond to a request for comment. Reichstadter said he waves at the security guards each day and notices employees avoiding eye contact. Still, he believes at least one worker shares his fears.
He hopes to inspire AI staff to see themselves not as tools of their companies but as individuals with a deeper duty to humanity, especially since they are shaping what he calls "the most dangerous technology on Earth."

His concerns are echoed by many in the AI safety community, though its members remain divided on specifics. One thing unites them: the conviction that the current trajectory of AI development is deeply troubling.

Reichstadter first encountered the idea of human-level AI in college, decades ago, when it seemed a distant prospect. The launch of ChatGPT in 2022 changed that. He is particularly alarmed by AI's role in fueling authoritarianism in the U.S. and by its potential for misuse. "I'm concerned about my society, my family, my children's future," he said. "I'm concerned about what AI is doing to our world—and the real risk of catastrophe."

In recent months, he has escalated his activism. In February, he joined a protest that chained shut the doors of OpenAI's San Francisco offices, leading to arrests. On September 2, he delivered a handwritten letter to Anthropic's security desk and later posted it online. In it, he asked Amodei to stop pursuing AGI and to lead a global pause in frontier AI development; if Amodei refuses, Reichstadter asked that he explain why. "For the sake of my children and with the urgency and gravity of our situation in my heart, I have begun a hunger strike," he wrote.

"I hope he has the decency to answer," Reichstadter said. "It's one thing to think abstractly about killing people. It's another to face one of your potential victims and explain why."

Soon after, others followed: two people began similar hunger strikes outside Google DeepMind's office in London, and a third joined in India, live-streaming their fast. Michael Trazzi, one of the London protesters, stopped after seven days out of concern for his health but continues to support his fellow hunger striker.
Trazzi has been thinking about AI risk since 2017. He sent a letter to DeepMind CEO Demis Hassabis urging him to commit to a pause in superintelligence development if all the major AI companies agree to do the same. He believes AI's risks demand strong regulation. "If it weren't for the danger, I wouldn't be so pro-regulation," he said. "But some things in the world are moving in the wrong direction by default. AI is one of them."

Google DeepMind's communications director, Amanda Carl Pratt, said the company prioritizes safety and responsible AI development, emphasizing the technology's potential to benefit billions of people. But neither Hassabis nor Amodei has responded to the letters.

Trazzi said the hunger strike has sparked conversations with tech workers, including one at Meta who questioned why only Google was being targeted. Another employee, at DeepMind, conceded that extinction from AI is more likely than not, but said they still work there because they consider it the most safety-conscious company.

Reichstadter and Trazzi have yet to receive answers. But they remain hopeful that their actions might lead to a meeting, a response, or even a change of course. "We are in an uncontrolled, global race to disaster," Reichstadter said. "If there's a way out, it will come from people being honest and saying, 'We're not in control. We need help.'"
