AI Doomsday Fears Shield Companies from Accountability, Professor Warns
AI doomsday narratives are serving as a distraction that enables companies to avoid accountability for the tangible harms their technologies are already causing, according to Tobias Osborne, a professor of theoretical physics at Leibniz Universität Hannover and co-founder of the science communication firm Innovailia.

In a recent essay, Osborne argued that the widespread focus on hypothetical future catastrophes, such as superintelligent machines taking over or triggering civilizational collapse, has diverted attention from the real and measurable damage occurring today. While policymakers and tech leaders debate the possibility of an AI apocalypse, he wrote, the industry continues to inflict harm on workers, creators, and society at large. “The apocalypse isn’t coming,” Osborne stated. “Instead, the dystopia is already here.”

Framing AI companies as defenders against existential threats, Osborne explained, has allowed them to be treated more like national security entities than commercial product developers. That shift grants them regulatory leniency, secrecy, and public support while reducing their legal exposure. As a result, companies can externalize the costs of their technologies, such as data exploitation and environmental damage, while still reaping the benefits.

He pointed to several under-recognized harms being sidelined by the hype around futuristic risks: the exploitation of low-paid data labelers who annotate training sets, the unauthorized scraping of artists’ and writers’ work to train models, the massive energy consumption of AI data centers, and the proliferation of AI-generated content that undermines trust in digital information.
Osborne also challenged the scientific credibility of runaway-intelligence scenarios, calling them “a religious eschatology dressed up in scientific language.” Such predictions, he argued, ignore fundamental physical constraints, including energy limits and thermodynamics, that make uncontrolled AI growth impossible, not merely unlikely.

While the EU is implementing the AI Act, a comprehensive regulatory framework taking effect through 2026, Osborne noted that the U.S. is moving in the opposite direction, with federal efforts aimed at restricting state-level oversight and keeping national regulation as light as possible.

Rather than chasing speculative threats, Osborne urged regulators to apply existing legal principles, such as product liability and duty of care, to AI systems. Doing so would compel companies to answer for real-world consequences, from misinformation to labor exploitation.

He emphasized that he is not against AI itself, acknowledging the meaningful benefits of large language models, especially for people with disabilities who rely on them for communication. But without clear accountability, he warned, those benefits could be outweighed by systemic inequities and unchecked corporate power. “The real problems,” he wrote, “are the very ordinary, very human problems of power, accountability, and who gets to decide how these systems are built and deployed.”
