Brain-like AI architecture rivals training-heavy models, Johns Hopkins study finds
Artificial intelligence systems built with architectures inspired by the human brain can mimic neural activity in humans and primates even before being trained on any data, according to new research from Johns Hopkins University.

The study, published in Nature Machine Intelligence, challenges the prevailing AI paradigm of massive datasets, extensive training, and enormous computational power. Instead, it highlights the critical role of architectural design in shaping how AI systems process information.

Lead author Mick Bonner, assistant professor of cognitive science at Johns Hopkins, emphasized the contrast between current AI development and human learning. “The way that the AI field is moving right now is to throw a bunch of data at the models and build compute resources the size of small cities. That requires spending hundreds of billions of dollars. Meanwhile, humans learn to see using very little data,” Bonner said. “Evolution may have converged on this design for a good reason. Our work suggests that architectural designs that are more brain-like put the AI systems in a very advantageous starting point.”

To test how different AI architectures affect brain-like behavior, Bonner and his team examined three widely used network designs: transformers, fully connected networks, and convolutional neural networks. They systematically modified each architecture to create dozens of unique AI models. These untrained models were then shown images of objects, people, and animals, and their responses were compared with brain activity recorded from humans and primates viewing the same images (a simplified sketch of this kind of comparison appears below).

The results showed that increasing the number of neurons in transformers and fully connected networks had minimal impact on their ability to simulate biological brain patterns. In contrast, modifying convolutional neural networks, especially by adjusting their structural layout, produced activity patterns that closely resembled those seen in real brains. Remarkably, these untrained convolutional networks matched the brain-modeling performance of conventional AI systems that have undergone months of training on millions or billions of images.

“This suggests that the architecture itself plays a far more significant role than previously thought,” Bonner explained. “If training on massive data were truly the key, we wouldn’t see such strong brain-like responses in untrained models. Instead, starting with a biologically inspired blueprint could drastically reduce the need for data and energy-intensive training.”

The findings point to a promising new direction in AI development: designing systems that are not only more efficient but also more closely aligned with how the human brain naturally processes information. The researchers are now exploring simple, biologically inspired learning algorithms that could form the foundation of a new deep learning framework, one that learns faster, uses less data, and operates with greater energy efficiency.
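The article does not spell out how model responses are compared with brain recordings; a standard technique in this literature is representational similarity analysis (RSA). The sketch below is a minimal illustration of that idea, not the study's actual pipeline: it assumes PyTorch and torchvision, uses an off-the-shelf untrained AlexNet as a stand-in convolutional network, and substitutes random arrays for the real images and neural recordings.

```python
# Hypothetical sketch: scoring how brain-like an UNTRAINED CNN is via
# representational similarity analysis (RSA). Model choice, layer choice,
# and all data here are illustrative stand-ins, not the study's materials.
import numpy as np
import torch
import torchvision.models as models

torch.manual_seed(0)

# An untrained convolutional network: weights are randomly initialized
# and no gradient updates are ever applied.
net = models.alexnet(weights=None).eval()

# Stand-in stimuli: a batch of 50 images (3x224x224), e.g. objects,
# people, and animals in the actual experiment.
images = torch.rand(50, 3, 224, 224)

# Capture activations from an intermediate convolutional layer.
feats = []
def hook(module, inp, out):
    feats.append(out.flatten(start_dim=1).detach().numpy())

handle = net.features[8].register_forward_hook(hook)
with torch.no_grad():
    net(images)
handle.remove()
model_acts = feats[0]  # shape: (50 images, n_units)

# Stand-in neural data: recorded responses (e.g. voxels or neurons) to
# the same 50 images; random here purely for illustration.
brain_acts = np.random.randn(50, 300)

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson r per image pair."""
    return 1.0 - np.corrcoef(acts)

# Brain-likeness score: correlate the model's and the brain's RDMs,
# using the upper triangle so each image pair is counted once.
iu = np.triu_indices(50, k=1)
score = np.corrcoef(rdm(model_acts)[iu], rdm(brain_acts)[iu])[0, 1]
print(f"Untrained-model vs. brain RDM correlation: {score:.3f}")
```

Under this scheme, a higher correlation between the two dissimilarity matrices means the untrained network organizes the images more like the brain does; comparing architectures, as the study did, amounts to repeating this measurement across many model variants.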
