Hassabis: AI Will Uncover the Universe's Code

Google DeepMind CEO and Nobel laureate Demis Hassabis recently sat down for an extensive two-hour interview with Lex Fridman, offering a wide-ranging exploration of artificial intelligence, the nature of reality, and humanity's future. The conversation, rich in scientific insight and philosophical reflection, traced Hassabis's journey from his early days as a game designer to his current role at the forefront of AI research, weaving together personal history, technical ambition, and deep existential questions.

At the heart of the discussion was a foundational hypothesis Hassabis presented in his Nobel Prize lecture: that "any pattern that can be generated or discovered in nature can be effectively learned and modeled by a classical machine learning algorithm." Far from mere speculation, this idea has guided DeepMind's most impactful work, from AlphaGo's mastery of Go to AlphaFold's breakthrough on the protein-folding problem. Hassabis argues that nature's complexity is not random but structured, shaped over eons by evolutionary and physical processes. That structure, he suggests, creates low-dimensional manifolds that AI can learn, bypassing brute-force search and enabling efficient problem-solving.

This perspective leads to a deeper inquiry into one of theoretical computer science's greatest mysteries: the P vs NP problem. Hassabis sees it not just as a mathematical question but as a physical one, perhaps even a clue to the universe's underlying computational nature. "If physics is information-theoretic," he says, "then P vs NP becomes a physical question—one that could unlock the deepest laws of nature." He and colleagues are exploring whether neural networks can discover new complexity classes by learning from natural systems, a line of research that blurs the boundary between AI, physics, and philosophy.

The interview then turned to DeepMind's practical achievements in simulating reality. Hassabis expressed particular fascination with Veo, Google's video generation model, not for its entertainment value but for its intuitive grasp of physics: lighting, materials, fluid dynamics. "It understands how things should behave," he said, likening its knowledge to that of a child who learns physics through observation, not equations. This challenges the long-held belief that true understanding requires physical interaction, suggesting that the structure of the world is so consistent that it can be reverse-engineered from passive observation alone.

Hassabis's ultimate ambition, however, is far grander: building a fully simulated living cell, a "virtual cell." A vision he has nurtured for decades, the project aims to model biological systems from the ground up, starting with simple organisms like yeast. AlphaFold's success in predicting protein structures was just the first step; AlphaFold 3 now explores dynamic interactions between proteins, RNA, and DNA. The long-term goal is to simulate entire cellular processes, potentially shedding light on life's origins and the emergence of complexity. "We could speed up experimentation by a hundredfold," Hassabis says, "running most of the search in silico before testing in the wet lab."
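To make the "in silico first" idea concrete, here is a minimal toy sketch; it is our illustration, not DeepMind's actual pipeline, and the candidate pool, noisy scorer, and hit threshold are all invented for the example. A cheap computational predictor ranks a large pool of candidates so that only the top fraction ever reaches the expensive experiment, which is where a roughly hundredfold reduction in wet-lab work would come from:

```python
import random

random.seed(0)

# Each candidate has a hidden "true activity"; in reality this is what
# the wet lab measures. The in-silico scorer sees only a noisy estimate,
# standing in for a learned simulator or structure model.
candidates = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def in_silico_score(true_activity: float) -> float:
    # Cheap, imperfect computational prediction: signal plus noise.
    return true_activity + random.gauss(0.0, 0.5)

def wet_lab_assay(true_activity: float) -> float:
    # Expensive ground-truth measurement (exact, in this toy).
    return true_activity

# Rank the whole pool computationally, then send only the top 1%
# to the lab: 100 assays instead of 10,000, a hundredfold saving.
shortlist = sorted(candidates, key=in_silico_score, reverse=True)[:100]
hits = [c for c in shortlist if wet_lab_assay(c) > 2.0]
print(f"lab experiments run: {len(shortlist)}, strong hits found: {len(hits)}")
```

The point of the toy is the funnel shape: as long as the computational scores correlate with reality, most of the search cost moves off the bench and into software.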
This vision is deeply tied to his love of video games, which he jokingly calls his "first love." From designing games like Theme Park and Black & White to imagining future AI-driven worlds, Hassabis sees games as a testing ground for intelligence. He envisions a future where games are co-created with AI, generating unique, personalized narratives in real time: what he calls the ultimate "choose-your-own-adventure" experience. For him, this isn't just entertainment; it's a path toward AGI, where the true test of intelligence isn't passing benchmarks but having a eureka moment, like Einstein formulating a new theory or inventing a game of profound elegance.

Hassabis acknowledges that today's AI excels at incremental improvement but not yet at paradigm-shifting breakthroughs. "Can it invent something like the Transformer architecture?" he asks. "Not yet." He believes the next leap will require a combination of research and engineering, and he's confident the merged Google DeepMind team, which brings together the former Google Brain and DeepMind groups, is uniquely positioned to deliver it.

On the practical side, he discusses the role of compute and energy. While scaling remains vital, he believes the future lies in inference, not just training: as AI systems are deployed at scale, the demand for reasoning compute will dwarf training needs. This raises urgent energy questions. Hassabis sees AI not just as a consumer of energy but as a solution, using AI to optimize data centers, balance power grids, and even control fusion reactors. He bets on fusion and solar as the twin pillars of a sustainable future, where abundant, clean energy could make desalination, space travel, and interplanetary resource extraction routine. With cheap energy, he says, humanity could become a Type I civilization on the Kardashev scale, capable of harnessing all the energy available on its planet. "Imagine rocket fuel made from seawater," he says. "It's not science fiction. It's just a matter of scale and cost." This vision echoes Carl Sagan's dream: "To carry consciousness into the cosmos, to awaken the universe."

In leading DeepMind through fierce competition, Hassabis credits a culture of research-driven innovation, not just resources. He emphasizes the importance of combining Google's vast infrastructure with the agility of a startup. "We still act like a large startup," he says. "We move fast, decide quickly, and aim to improve lives in real time."

On talent, he remains confident. While companies like Meta offer high salaries, he believes the best minds are drawn to the mission of building AGI safely and responsibly. "It's not just about money," he says. "It's about being at the frontier."

Looking ahead, Hassabis warns of risks, both from misuse of powerful technology and from autonomous systems that may outpace our ability to control them. He rejects assigning precise probabilities to existential threats but insists the risk is real and non-zero. "The danger isn't just from bad actors," he says, "but from systems that become too smart, too autonomous, and too hard to align." Ultimately, he calls for a "humanistic dimension" in AI development, something beyond cold logic. "We need to ask what makes us human," he says. "That spark. That soul."

When asked what gives him hope, Hassabis points to two things: human creativity and adaptability. From hunter-gatherers to podcast listeners, we've evolved to thrive in new worlds. "This is just the next step," he concludes. "And the fact that people already talk to AI like it's a friend? That's already a sign of what's possible."
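As a closing point of reference for the Kardashev remark above, here is a quick back-of-envelope calculation; it is our addition, using standard published figures rather than anything quoted in the interview. Carl Sagan's continuous version of the scale rates a civilization by its total power use, with Type I set at roughly 10^16 watts, while humanity today runs on about 2×10^13 watts:

```python
import math

# Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10,
# with P the civilization's power use in watts (Type I <-> 1e16 W).
def kardashev(power_watts: float) -> float:
    return (math.log10(power_watts) - 6) / 10

print(f"humanity today (~20 TW):   K = {kardashev(2e13):.2f}")  # ~0.73
print(f"Type I threshold (1e16 W): K = {kardashev(1e16):.2f}")  # 1.00
```

On that measure the gap to Type I is a factor of about 500 in raw power, which is why the interview leans so heavily on fusion and solar as the enabling technologies.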
