AI Skeptics Sound the Alarm as LLMs Hit Limits, Pushing for New Paths to AGI
The AI industry is facing a growing reckoning as doubts mount over the long-term potential of large language models (LLMs) to achieve artificial general intelligence (AGI). Once hailed as the inevitable path to human-like intelligence, LLMs are now under intense scrutiny. Critics, once dismissed as pessimists, are gaining traction after the underwhelming release of OpenAI’s GPT-5, which failed to deliver the transformative leap many had anticipated.

At the center of this shift is Gary Marcus, a prominent AI researcher and author, who has long argued that simply scaling up data and computing power won’t lead to true AGI. In a recent blog post, he declared that “pure scaling” is no longer a viable strategy, calling the idea of AGI by 2027 a marketing myth rather than a scientific reality. His views are echoed by others who see LLMs as fundamentally limited by their reliance on pattern recognition rather than genuine understanding.

The financial stakes are high. OpenAI, now the world’s most valuable startup, has raised $60 billion and could soon surpass a $500 billion valuation. Yet it remains unprofitable, and its mission to develop safe, beneficial AGI appears increasingly distant. Other tech giants, including Google, Meta, xAI, and Anthropic, are pouring billions into scaling their own LLMs, fueling a wave of optimism that some now fear is unsustainable. A recent $1 trillion tech sell-off underscored growing investor anxiety, even as Federal Reserve Chair Jerome Powell signaled possible rate cuts, helping markets rebound.

A key concern is the mismatch between hype and performance. Apple researchers published a paper in June titled “The Illusion of Thinking,” showing that advanced reasoning models struggle with complex tasks and often fail to maintain logical consistency; the authors concluded that current scaling approaches are unlikely to produce true general intelligence. Though the paper was mocked online, given Apple’s perceived lag in AI, its findings resonated with skeptics. Andrew Gelman, a Columbia University professor, compared the fluency of LLMs to jogging: effortless-looking, but not the same as running. Geoffrey Hinton, a pioneer of deep learning, still believes language models can achieve understanding through prediction, though even he acknowledges their limitations.

Other problems include hallucinations, misinformation, and poor reasoning. A German study found that LLMs hallucinate between 7% and 12% of the time across languages. While companies like OpenAI maintain that more data can fix these issues, researchers increasingly question whether LLMs have hit a plateau. Yann LeCun, Meta’s chief AI scientist, argues that “most interesting problems scale extremely badly” and that more compute does not guarantee smarter AI.

The data bottleneck is another hurdle. High-quality training data is scarce, leading companies to push legal and ethical boundaries: Meta reportedly considered buying the publisher Simon & Schuster, and Anthropic faced a court ruling that its use of pirated books for training was not fair use.

Some researchers, like Stanford’s Fei-Fei Li, argue that language is not the foundation of intelligence. “Humans build civilization beyond language,” she said. LeCun agrees, emphasizing the need for AI that understands the physical world and has common sense, memory, and the ability to plan.

In response, a new wave of research is exploring alternatives to LLMs. World models simulate the real world and learn from experience, mimicking how humans understand their environment.
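To make that recipe concrete, here is a minimal, hypothetical sketch in Python: an agent gathers experience in a toy simulated environment, fits a predictive model of the dynamics, and then plans by imagining rollouts inside its learned model rather than acting in the world. The corridor environment, count-based model, and random-shooting planner are illustrative assumptions for this sketch, not any lab’s actual system.

```python
import random
from collections import defaultdict

class ToyGridEnv:
    """A toy 1-D corridor: states 0..9, goal at state 9, actions -1/+1."""
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state = max(0, min(9, self.state + action))
        return self.state, self.state == 9  # (next state, done)

# 1. Gather experience and fit a world model (here, transition counts).
env = ToyGridEnv()
transitions = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': count}
for _ in range(200):
    s = env.reset()
    for _ in range(200):                  # cap episode length
        a = random.choice([-1, 1])
        s2, done = env.step(a)
        transitions[(s, a)][s2] += 1
        s = s2
        if done:
            break

def model_predict(s, a):
    """Most likely next state under the learned model."""
    outcomes = transitions[(s, a)]
    return max(outcomes, key=outcomes.get) if outcomes else s

# 2. Plan by imagining rollouts inside the learned model.
def plan(state, horizon=10, samples=50):
    """Random-shooting planner: score imagined action sequences and
    return the first action of the best one."""
    best_first, best_score = random.choice([-1, 1]), float("-inf")
    for _ in range(samples):
        seq = [random.choice([-1, 1]) for _ in range(horizon)]
        s = state
        for a in seq:
            s = model_predict(s, a)
            if s == 9:                    # imagined rollout reaches the goal
                break
        score = -abs(9 - s)               # prefer rollouts ending near the goal
        if score > best_score:
            best_first, best_score = seq[0], score
    return best_first

# 3. Deploy: act in the "real" environment, guided by imagined rollouts.
s, steps, done = env.reset(), 0, False
while not done and steps < 100:
    s, done = env.step(plan(s))
    steps += 1
print(f"Reached the goal in {steps} steps by planning in imagination.")
```

Production systems learn far richer simulators from video and sensor data, but the loop has the same shape: learn a model of the world, plan in imagination, then act.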
Google DeepMind released Genie 3, a model capable of simulating complex physical scenarios such as volcanic landscapes and underwater terrain. Models like these can train AI agents in virtual environments before they are deployed in the real world. Other approaches include neuroscience-inspired models, multi-agent systems in which AIs interact the way humans do, and embodied AI, in which robots learn through physical interaction. Marcus now sees world models, which he calls “cognitive models,” as the real path forward. “LLMs far exceed humans in some ways, but they’re no match for an ant without robust world understanding,” he wrote. As the AI boom faces its first major headwinds, the debate is no longer just about technology; it is about the future of intelligence itself.