
OpenAI Co-Founder Ilya Sutskever Says AI Progress Needs More Research, Not Just More Compute

Ilya Sutskever, co-founder of OpenAI and a leading figure in the development of modern artificial intelligence, has declared that the era of relying solely on scaling compute and data is coming to an end. Speaking on an episode of the "Dwarkesh Podcast" published Tuesday, Sutskever argued that the AI industry must return to a period of deep, fundamental research, or as he put it, "back to the age of research again." Sutskever made these remarks shortly after providing testimony in a deposition as part of Elon Musk's lawsuit against OpenAI and CEO Sam Altman.

In the interview, he challenged the prevailing belief that simply increasing computational power and training data will continue to drive breakthroughs in AI. For the past several years, tech companies have invested hundreds of billions of dollars into acquiring GPUs and expanding data center capacity, betting that larger models trained on vast datasets would inevitably produce smarter, more capable AI systems. This approach, scaling up, has delivered impressive results and offered companies a low-risk path to progress, as it follows a clear, measurable formula.

However, Sutskever contends that this strategy is nearing its limits. He pointed out that data is finite and that organizations already possess access to enormous amounts of compute. "Is the belief really: 'Oh, it's so big, but if you had 100x more, everything would be so different?' It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don't think that's true," he said. Instead, he emphasized that innovation must now come from smarter, more thoughtful research, especially in areas where current AI models fall far short of human capabilities. One of the most pressing challenges, he noted, is generalization: the ability of models to learn from very few examples, much like humans do.

"The thing, which I think is the most fundamental, is that these models somehow just generalize dramatically worse than people," Sutskever said. "It's super obvious. That seems like a very fundamental thing."

While he acknowledged that compute remains essential, particularly as a tool for testing new ideas, Sutskever stressed that the real breakthroughs will come not from throwing more resources at existing methods, but from rethinking how models learn, reason, and understand the world. He now leads Safe Superintelligence Inc., a company focused on long-term AI safety and research, reflecting his shift toward foundational work.

In Sutskever's view, the current moment marks a turning point: the age of scaling is over, and the future of AI depends on returning to the lab, asking deeper questions, and building models that can truly understand and adapt, just like humans.