Dropping the AGI Fantasy: Why Silicon Valley’s Obsession with Artificial General Intelligence Hinders Practical, Ethical Engineering

The belief in Artificial General Intelligence, or AGI, has become a dominant narrative in Silicon Valley, shaping the ambitions and actions of leading AI companies like OpenAI. Yet this vision, rooted more in science fiction than engineering reality, is obstructing practical progress. Karen Hao’s book Empire of AI reveals how deeply entrenched this fantasy is—among OpenAI’s founders and leaders, it’s not just a goal but a near-religious conviction. Elon Musk saw Demis Hassabis of DeepMind as a supervillain bent on world domination through AI, even referencing Hassabis’s old video game Evil Genius as proof of his sinister intent. Ilya Sutskever, OpenAI’s chief scientist, once burned a wooden effigy of a supposedly “aligned” AGI at a company retreat, symbolically destroying it to emphasize the danger of a misaligned superintelligence. These are not metaphors; they are rituals of a culture that treats AGI as an imminent, existential force.

This belief in AGI is not just philosophical—it drives real-world decisions. The success of GPT-2, which seemed to validate the “pure language” hypothesis—that AGI can emerge from language alone—fueled a massive scaling push. The logic became: more data, more parameters, more compute. This has led to the construction of data centers that guzzle hundreds of liters of water per second, rely on polluting gas generators because the power grid can’t keep up, and consume energy on par with entire cities. The environmental cost is real and immediate: increased CO2 emissions from hardware manufacturing and operation. The human cost is real too: data workers are subjected to traumatic labor filtering out harmful content such as hate speech, self-harm prompts, and child sexual abuse material.

The justification for this massive resource drain is often an expected value (EV) argument: even if the chance of AGI is tiny—say 0.001%—the potential upside is so enormous that the expected value justifies the cost.
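The structure of that EV argument, and its weakness, can be made concrete with a short sketch. All numbers below are hypothetical, chosen only to show how the conclusion hinges entirely on inputs nobody can measure:

```python
def expected_value(p_agi: float, upside: float, cost: float) -> float:
    """Naive expected-value calculation of the kind used to justify
    massive scaling spend: probability of AGI times its payoff,
    minus the certain cost of the attempt."""
    return p_agi * upside - cost

# The cost is the only measurable quantity (hypothetical figure here,
# standing in for real spend on data centers, energy, and labor).
cost = 1e10

# With an unfalsifiable "astronomical" upside, any nonzero probability
# makes the bet look positive...
ev_optimist = expected_value(p_agi=1e-5, upside=1e17, cost=cost)  # > 0

# ...while an equally unfalsifiable but smaller upside flips the sign.
ev_skeptic = expected_value(p_agi=1e-5, upside=1e14, cost=cost)   # < 0

print(ev_optimist, ev_skeptic)
```

The cost term is the only quantity grounded in observation; the sign of the result is controlled entirely by the two speculative factors. Since no experiment can pin down `p_agi` or `upside`, the calculation can justify any conclusion one likes, which is precisely why it is not an engineering argument.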
But this reasoning is fundamentally flawed. The probabilities and values involved are entirely speculative, untestable, and unfalsifiable. Meanwhile, the negative externalities—environmental degradation, worker harm, energy waste—are real, measurable, and certain. They are not hypothetical risks; they are current harms borne by communities and ecosystems. As a technologist, my goal is to solve problems effectively, efficiently, and without harm. LLMs-as-AGI fail on all three counts: they are computationally wasteful, built on exploitative labor, and environmentally destructive.

The AGI fantasy blinds us to better alternatives. If we let go of the myth, we can treat generative models not as universal solutions but as tools within a broader engineering toolkit. We can design smaller, purpose-built models, or use discriminative models where generation isn’t needed. We can make trade-offs based on real costs and benefits, not speculative futures.

Dropping the AGI fantasy isn’t surrendering ambition—it’s reclaiming engineering. It means building systems that work, that are sustainable, and that respect people and the planet. The real progress in AI won’t come from chasing a myth, but from solving real problems with real care.
