
Reasoning Models Mimic Human Brain Information Processing

A new generation of large language models (LLMs), known as reasoning models, has become dramatically better at solving complex problems such as math and coding, marking a significant leap in AI capabilities. Unlike earlier LLMs, which relied on pattern recognition and often failed at reasoning tasks, these models break problems down step by step, mimicking human-like thought. Scientists at MIT’s McGovern Institute for Brain Research, led by Professor Evelina Fedorenko, have found that the computational effort these models expend on difficult problems closely mirrors the cognitive effort humans expend, suggesting a surprising convergence between artificial and human intelligence that no one deliberately engineered.

The researchers tested both reasoning models and human volunteers on seven types of problems, including arithmetic, logic puzzles, and the challenging ARC (Abstraction and Reasoning Corpus) task, which requires inferring visual transformations from examples of colored grids. They measured human response times in milliseconds and used “tokens”, the intermediate steps a model generates while reasoning, as a proxy for the model’s mental effort.

The results showed a strong correlation: the harder a problem was for humans, the more tokens the models used, and the easier it was, the fewer they needed. Arithmetic was the least demanding for both, while the ARC challenge was the most costly, indicating that humans and models face similar cognitive bottlenecks when tackling abstract reasoning.

This parallel suggests that reasoning models, despite being built for performance rather than human-like thinking, have developed a process that resembles how people think. “People who build these models don’t care if they do it like humans,” Fedorenko notes. “The fact that there’s some convergence is really quite striking.” The models acquire this behavior through reinforcement learning: they are rewarded for correct answers and penalized for errors, which lets them explore problem-solving paths and reinforce the strategies that work.

Importantly, the models’ internal “thoughts” are not necessarily linguistic. Although they generate text while reasoning, sometimes with errors or nonsensical phrases, the actual computation likely occurs in an abstract, non-linguistic representation space, much as human thought does. This insight challenges the assumption that AI reasoning depends on language and suggests deeper cognitive parallels.

The models still face limitations, however. They struggle with problems that require world knowledge not explicitly present in their training data, and their internal processes remain largely opaque. Researchers are now investigating whether these models use brain-like representations and how they transform information into solutions.

The findings mark a milestone in AI research: advanced reasoning models not only perform better, they also process information in ways that echo human cognition. While they do not replicate human intelligence, they show a remarkable, unintended similarity in the “cost of thinking.” This convergence opens new avenues for understanding both artificial and biological intelligence, and may guide future AI development toward more human-like, efficient, and robust reasoning systems.
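To make the ARC format concrete: each puzzle provides a few input/output pairs of colored grids, and the solver must infer the underlying transformation and apply it to a new input. The Python toy below is not an actual ARC task; it uses a deliberately trivial rule (a horizontal mirror) purely to illustrate the structure of the problem.

```python
# A toy illustration of the ARC problem format, not a real ARC task:
# grids are small matrices of color indices (0-9), and the goal is to
# find a transformation that explains the training pair(s).
from typing import List

Grid = List[List[int]]

def mirror_horizontal(grid: Grid) -> Grid:
    """Candidate rule: flip each row left-to-right."""
    return [list(reversed(row)) for row in grid]

# One training pair: input grid -> output grid.
train_in: Grid = [[1, 0, 0],
                  [2, 2, 0]]
train_out: Grid = [[0, 0, 1],
                   [0, 2, 2]]

# Check that the hypothesized rule explains the training example...
assert mirror_horizontal(train_in) == train_out

# ...then apply it to an unseen test grid.
test_in: Grid = [[3, 0],
                 [0, 4]]
print(mirror_horizontal(test_in))  # [[0, 3], [4, 0]]
```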
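The study’s core comparison amounts to correlating per-problem human response times with per-problem token counts. The sketch below shows what such an analysis could look like; all numbers are invented placeholders (the article names only arithmetic, logic puzzles, and ARC among the seven problem types), and Spearman rank correlation is an assumed choice of statistic, not necessarily the one the researchers used.

```python
# A minimal sketch of the difficulty comparison described above.
# All values are hypothetical; they are NOT data from the MIT study.
from scipy.stats import spearmanr

# Seven problem types; only the first, second, and last are named in the
# article, the rest are placeholder labels.
tasks = ["arithmetic", "logic", "task_3", "task_4", "task_5", "task_6", "ARC"]
human_rt_ms = [1200, 2500, 3800, 4100, 4700, 5300, 15600]  # mean response time
model_tokens = [90, 240, 520, 460, 610, 700, 2900]         # mean tokens used

# Rank correlation: do problems order by difficulty the same way for both?
rho, p = spearmanr(human_rt_ms, model_tokens)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```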
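Finally, the reinforcement-learning recipe described above, rewarding correct answers and penalizing errors so that effective strategies get reinforced, can be shown with a bandit-style toy. This is a REINFORCE-style update over two hypothetical “strategies”; real reasoning models apply reward signals to entire token sequences with far more machinery, so the sketch only demonstrates the principle.

```python
# A heavily simplified sketch of learning from correctness rewards:
# the policy drifts toward whichever strategy earns reward more often.
import math
import random

strategies = ["guess", "step_by_step"]
weights = [0.0, 0.0]  # one policy parameter per strategy
# Assumed success rates for the toy environment (invented numbers).
success_rate = {"guess": 0.2, "step_by_step": 0.9}

def softmax(ws):
    exps = [math.exp(w) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

lr = 0.1
for _ in range(2000):
    probs = softmax(weights)
    i = random.choices(range(len(strategies)), weights=probs)[0]
    correct = random.random() < success_rate[strategies[i]]
    reward = 1.0 if correct else -1.0  # reward correct answers, penalize errors
    # REINFORCE-style update: raise the log-probability of rewarded choices.
    for j in range(len(weights)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        weights[j] += lr * reward * grad

print(dict(zip(strategies, softmax(weights))))  # mass shifts to "step_by_step"
```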
