AI Hype vs. Science: Why Language Models Won’t Create True Human-Like Intelligence
The claim that we are on the verge of creating superintelligence, as proclaimed by tech leaders like Mark Zuckerberg, Dario Amodei, and Sam Altman, rests on a fundamental misunderstanding of how human intelligence works. These executives suggest that with enough data, computing power, and model scaling, artificial general intelligence (AGI) and even superintelligence are imminent—possibly by 2026. But this vision is built on a flawed premise: that language is the core of thought, and that increasingly sophisticated language models are the path to true intelligence.

In reality, the AI systems we have today—OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Meta’s latest offerings—are all large language models (LLMs). At their core, they are statistical engines that predict the next word or token in a sequence based on patterns learned from vast amounts of text. They do not think, reason, or understand. They simulate language, not cognition.

The scientific consensus, supported by decades of research in neuroscience and cognitive science, is clear: human thought is not dependent on language. Studies using fMRI show that different brain networks activate during reasoning, problem-solving, and social understanding—processes that are distinct from language centers. People with severe language impairments due to brain injury often retain full capacity for logic, math, planning, and understanding others’ intentions. This proves that language is a tool for communication, not the foundation of thought.

Consider infants and young children. Before they learn to speak, babies actively explore their environment, experiment with objects, imitate behaviors, and form intuitive theories about physics, biology, and psychology. They think long before they can talk. As cognitive scientist Alison Gopnik has shown, children learn like scientists—through hypothesis testing and pattern recognition—well before acquiring language.
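To make the "statistical engine" point concrete, here is a deliberately toy sketch of next-token prediction: a bigram model that counts which word follows which in a tiny corpus and always emits the most frequent continuation. Real LLMs use vastly larger neural networks over vastly more text, but the underlying task is the same shape: predict the next token from observed patterns. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny stand-in for "vast amounts of text" (illustrative only).
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the corpus, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The model "knows" nothing about cats or mats; it reproduces frequencies. That is the sense in which the article argues LLMs simulate language rather than perform cognition, however much scale separates this toy from a production model.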
Language evolved not as a mechanism for thinking, but as a cultural tool for sharing thoughts efficiently across individuals and generations. As researchers Evelina Fedorenko, Steven Piantadosi, and Edward Gibson argue in a 2023 Nature commentary, language is an “efficient communication code” that allows humans to transmit knowledge with high fidelity. It amplifies our intelligence, but it does not create it.

This distinction is critical. Remove language from a human, and thought persists. Remove language from a large language model, and nothing remains. LLMs have no internal understanding, no memory of experiences, no ability to reason beyond statistical associations. They cannot form new concepts or question existing assumptions because they lack the capacity for dissatisfaction with the status quo—something essential to scientific breakthroughs.

Historically, scientific revolutions—like Einstein’s theory of relativity—did not emerge from data accumulation alone. They arose from bold new ideas that challenged existing frameworks. These leaps come not from prediction, but from imagination, dissatisfaction, and metaphor-making. As philosopher Richard Rorty noted, common sense is made up of “dead metaphors”—once revolutionary ideas that became accepted truths. AI systems trained on existing data can only remix and recombine what already exists. They cannot generate truly novel paradigms because they have no reason to reject the current framework. They are trapped in the vocabulary of the past, unable to step outside it.

Even if an AI could mimic human-level performance across many cognitive tasks, it would still be a sophisticated mimic, not a creator. Some experts, including Yann LeCun and Yoshua Bengio, recognize these limits and are exploring alternatives—such as world models that simulate physical reality, plan actions, and learn from experience. But even these efforts lack a clear roadmap.
Defining general intelligence as a sum of distinct abilities is a step forward, but it doesn’t solve the deeper problem: how to build systems capable of genuine insight, curiosity, and creative disruption. Until we move beyond the illusion that language equals intelligence, we will continue to mistake statistical fluency for understanding. The promise of superintelligence remains speculative—not because we lack data or chips, but because we misunderstand the nature of thought itself.

True intelligence is not just about predicting words. It’s about questioning the world, imagining what isn’t yet known, and daring to be wrong. No current AI can do that. And until it can, the idea of superintelligence remains a myth built on a linguistic mistake.
