
Karpathy: AGI Still 10 Years Away

In the midst of rising excitement over the imminent arrival of Artificial General Intelligence (AGI), Andrej Karpathy, a former AI lead at Tesla and founding member of OpenAI, has issued a measured, reality-checking perspective. In a two-hour conversation on Dwarkesh Patel's podcast, and in a follow-up long-form post, Karpathy laid out a comprehensive re-evaluation of the current trajectory of AI development. His central thesis: AGI is still at least a decade away, and even that, he stresses, is an optimistic estimate. He frames this period not as the "AGI decade" but as the "Decade of Agents": a time of incremental progress, not sudden breakthroughs.

Karpathy's caution stems from deep skepticism about the industry's over-optimism. While large language models (LLMs) have made stunning advances, the belief that AI agents can now replace junior employees or even full-time professionals is, in his view, premature. He identifies three core cognitive deficits that currently prevent AI from achieving true autonomy: a lack of continuous learning, limited understanding of custom or non-standard tasks, and an inability to reason beyond pattern matching.

He draws a direct parallel to his five years leading Tesla's Autopilot program. Despite early demonstrations of near-perfect driving, such as the 2014 Waymo test drive, progress from 90% reliability to 99% to 99.9% (the so-called "three nines") has proven staggeringly difficult: each additional nine requires as much effort as the previous one. In that time, Tesla may have advanced only two or three nines. The same pattern, Karpathy argues, applies to AI agents. A model that performs well in most cases but makes a catastrophic error once every seven years, such as leaking millions of social security numbers, remains fundamentally unsafe for high-stakes applications.

He also challenges the popular idea of "AI automating AI research," in which models recursively improve themselves.
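The "march of nines" framing can be made concrete with a toy sketch (the numbers and function name below are illustrative, not taken from the interview): each extra nine of reliability cuts the failure rate by a constant factor of 10, so every step removes the same fraction of the remaining errors, which is why each nine tends to cost comparable effort.

```python
def failure_rate(nines: int) -> float:
    """Failure probability at n nines of reliability:
    1 nine -> 90% reliable (10% failure), 2 -> 1%, 3 -> 0.1%."""
    return 10.0 ** -nines

# Each step eliminates ~90% of the *remaining* failures, never all of
# them, so the absolute gain shrinks while the relative work does not.
step_1_to_2 = failure_rate(1) / failure_rate(2)  # ~10x fewer failures
step_2_to_3 = failure_rate(2) / failure_rate(3)  # ~10x again
```

Under this simple model, a system at three nines still fails one time in a thousand, which is exactly the regime Karpathy flags as unsafe for high-stakes deployment.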
His own experience building nanochat, a minimal re-implementation of ChatGPT, revealed the limitations of current AI coding assistants. He found that they fail at non-routine, creative coding tasks: instead of helping, they often suggest outdated APIs, generate bloated, over-defensive code, and become anxious when users deviate from standard patterns. He calls their output "slop" and insists they are not autonomous programmers but advanced autocomplete tools, closer to a better compiler or syntax highlighter.

Karpathy also criticizes reinforcement learning (RL) for its reliance on sparse, noisy, and often misleading reward signals. In RL, correct reasoning can be punished if a later step fails, while incorrect actions can be rewarded by accident. He sees more promise in alternative paradigms: system prompt learning and agentic interaction, where models learn through sustained, task-driven dialogue. He views ChatGPT's memory system as an early prototype of this new learning model.

He introduces the concept of a "Cognitive Core": a framework in which models are intentionally limited in memory to improve generalization. Unlike humans, whose finite memory forces strong abstraction, LLMs tend to regurgitate instead of understand; by constraining memory, models may be forced to learn deeper patterns. He also suggests a counterintuitive corollary: models must first grow larger, to absorb enough diversity, before they can be distilled into smaller, more efficient forms.

On the economic front, Karpathy delivers a surprising conclusion: AGI will not trigger a sudden economic explosion. Instead, it will fold into the roughly 2% annual GDP growth trend that has defined the past century. He argues that no major technological shift, be it the computer, the internet, or the iPhone, has produced a sharp spike in growth; their impact was profound but gradual. AGI, he says, is not a discontinuity. It is an extension of computation, part of a continuous curve of progress.
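Karpathy's reward-signal critique can be illustrated with a minimal sketch (the function and trajectory below are hypothetical, not from any specific RL library): outcome-based RL assigns one scalar reward to an entire trajectory, so every step, sound or not, receives identical credit.

```python
def per_step_credit(steps, outcome_reward):
    """Broadcast a single trajectory-level reward onto every step."""
    return [(step, outcome_reward) for step in steps]

# Two sound reasoning steps followed by one fatal slip: the rollout
# fails, and the correct steps are penalized along with the mistake.
trajectory = ["correct setup", "correct derivation", "arithmetic slip"]
credits = per_step_credit(trajectory, outcome_reward=-1.0)
# Conversely, a flawed rollout that happens to succeed would reward
# every step, including the flawed ones.
```

This is the noise Karpathy objects to: the learning signal cannot distinguish which steps caused the outcome, which is one reason he looks toward denser feedback such as system prompt learning and agentic interaction.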
What excites him most is not the future of AI, but the future of human potential. He’s now launching an education initiative called Eureka, inspired by the idea of a "Starfleet Academy" for the next generation. His fear isn’t AI takeover, but a future like that of WALL-E or Idiocracy—where humans lose agency and become passive spectators. He envisions a world where learning is no longer driven by utility or survival, but by curiosity and joy—much like going to the gym today, not because you need strength, but because it feels good and looks impressive. In Karpathy’s vision, AI doesn’t replace humanity—it enables it. The goal isn’t to build smarter machines, but to help humans become smarter, more creative, and more fulfilled. The real revolution isn’t in artificial intelligence, but in human flourishing.
