DeepMind's Hassabis Champions Maximum AI Scaling for AGI, Sparking Debate on Limits and Alternatives
Google DeepMind CEO Demis Hassabis has declared that the current trajectory of AI scaling must be pushed to its absolute limit, arguing it is essential for achieving artificial general intelligence (AGI): a form of AI capable of reasoning and learning across a wide range of tasks like a human. Speaking at the Axios AI+ Summit in San Francisco, Hassabis emphasized that scaling, which involves increasing the size of models, data, and computational power, is not just helpful but likely fundamental to building AGI. "The scaling of the current systems, we must push that to the maximum, because at the minimum, it will be a key component of the final AGI system," he said. "It could be the entirety of the AGI system."

Hassabis' comments come on the heels of the release of Gemini 3, DeepMind's new model, which has drawn widespread attention for its performance. His stance reflects a core belief in the AI community, known as AI scaling laws: the more data and compute a model has, the more capable it becomes. While he acknowledges that scaling alone may not be sufficient, he believes it will likely be a major part of the path to AGI, with perhaps one or two additional breakthroughs needed to complete the picture.

The idea of unbounded scaling, however, faces growing challenges. Publicly available training data is finite, and the cost of building and running the massive data centers that ever-larger models require keeps rising. The environmental impact of that infrastructure is also a growing concern. Some experts warn that the industry is approaching diminishing returns, where each new investment in compute and data yields smaller improvements in performance.

Yann LeCun, Meta's chief AI scientist and a leading figure in the field, has voiced skepticism about the long-term viability of scaling as the primary path to AGI. At a recent event in Singapore, he noted that "most interesting problems scale extremely badly," meaning that simply adding more data and compute does not guarantee smarter AI. LeCun is now leaving Meta to launch his own startup focused on developing "world models": an approach to AI that aims to understand the physical world through spatial and causal reasoning, rather than relying solely on language data. His vision includes systems with persistent memory, the ability to reason, and the capacity to plan complex actions.

This growing debate highlights a pivotal moment in AI development. While companies like Google DeepMind continue to bet heavily on scaling, others are exploring alternative paradigms. The race to AGI is no longer just about size and speed; it is about rethinking the very foundations of how machines learn and understand.
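The diminishing-returns argument can be sketched numerically. Empirical scaling-law studies often model loss as a power law in training compute, roughly L(C) = a * C^(-alpha); the constants `a` and `alpha` below are illustrative assumptions, not values from any published fit. Under such a curve, each tenfold increase in compute buys a smaller absolute improvement than the last:

```python
# Hypothetical power-law scaling curve: loss falls as compute grows,
# but at a decelerating rate. Constants are illustrative only.

def loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Toy power-law loss as a function of training compute."""
    return a * compute ** (-alpha)

# Absolute improvement from each successive 10x jump in compute.
gains = []
for exp in range(1, 5):
    before = loss(10 ** exp)
    after = loss(10 ** (exp + 1))
    gains.append(before - after)

print(gains)  # each entry is smaller than the last: diminishing returns
```

The point of the sketch is only that a power law never flattens to zero but delivers progressively less per dollar of compute, which is the tension between Hassabis's "push to the maximum" position and the diminishing-returns warnings.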
