CGformer: AI Breakthrough Speeds Up Material Discovery


Traditional materials development has long relied on time-consuming and costly trial-and-error experimentation, often taking years or even decades to yield new functional materials. Now, artificial intelligence is transforming this landscape by dramatically accelerating discovery, shifting timelines from years to days. However, most existing AI models in this domain suffer from a fundamental limitation: they are “near-sighted.”

To address this critical challenge, a team led by Professor Jinjin Li from the Artificial Intelligence and Microstructure Laboratory (AIMS-Lab) at Shanghai Jiao Tong University has developed a novel AI algorithm called CGformer. This breakthrough redefines how AI models process crystal structure information, significantly improving the accuracy of material property predictions. The research, titled “CGformer: Transformer-enhanced crystal graph network with global attention for material property prediction,” was recently published in Matter.

“Our goal was to solve a core flaw in current AI-driven materials design: its inability to perceive long-range interactions,” Li explained to DeepTech. She likened traditional models, such as the widely used Crystal Graph Convolutional Neural Network (CGCNN), to someone peering at a massive painting with their face pressed against the canvas: able to see only tiny fragments, not the full picture.

In practice, CGCNN-style models operate by allowing each atom to exchange information only with its immediate neighbors. While effective for local interactions, this approach fails to capture the long-range, global dependencies that govern many critical material properties, such as ion transport efficiency in batteries. If an AI model lacks a “global view,” its predictions become inherently limited, potentially leading to flawed design directions.

CGformer emerged from a clear need and a powerful tool. The team recognized that the “near-sightedness” of crystal graph networks was a key bottleneck. At the same time, they turned to the Transformer architecture, a revolutionary framework that has driven progress in natural language processing through its “global attention” mechanism, capable of modeling long-range dependencies. “We creatively integrated this global attention paradigm into crystal structure modeling, merging it with the physical interpretability of crystal graph representations,” Li said. The result is a “holistic communication network” within the crystal structure, where every atom can directly interact with any other atom in a single step, regardless of distance. This shifts the model from localized “whispers” to a full-scale “broadcast,” enabling it to perceive the entire structural landscape at once.

However, integrating Transformer mechanics with crystal physics posed a major challenge: standard graph representations lack inherent spatial or chemical meaning. To solve this, the team developed a suite of physical encodings: “spatial encoding” to convey the real 3D positions of atoms and the distances between them, “centrality encoding” to highlight the importance of each atom in the overall topology, and “edge encoding” to incorporate chemical bond types and lengths. This hybrid design preserves the physical intuition of crystal graphs while equipping the model with unprecedented global awareness.
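The contrast between neighbor-only message passing and global attention can be made concrete in a few lines. The sketch below is purely illustrative and is not the published CGCNN or CGformer code; the toy chain structure, feature dimensions, and update rules are all assumptions chosen for clarity.

```python
# Toy contrast: repeated neighbor-only message passing vs. one step of
# global attention. All structures and features here are made up.
import torch

torch.manual_seed(0)
n_atoms, dim = 6, 8
x = torch.randn(n_atoms, dim)            # per-atom feature vectors
# Toy bonded-neighbor adjacency (a chain): atom i bonds only to i-1 and i+1.
adj = torch.zeros(n_atoms, n_atoms)
for i in range(n_atoms - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

def local_round(x, adj):
    """CGCNN-style update: each atom averages only its bonded neighbors,
    so information travels one bond per round."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return x + (adj @ x) / deg

def global_attention(x):
    """Transformer-style step: every atom attends to every other atom
    in a single hop, regardless of distance."""
    scores = (x @ x.T) / dim ** 0.5      # all-pairs similarity
    return torch.softmax(scores, dim=-1) @ x

# Reaching an atom five bonds away needs five stacked local rounds...
x_local = x
for _ in range(5):
    x_local = local_round(x_local, adj)
# ...while a single attention step already couples all atom pairs.
x_global = global_attention(x)
print(x_local.shape, x_global.shape)     # torch.Size([6, 8]) twice
```

In the local scheme, an interaction k bonds away requires k stacked layers to even register; the attention step sees the whole structure at once, which is the “broadcast” behavior described above.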
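How might such encodings enter an attention layer? One common recipe, used by Graphormer-style models, adds a centrality embedding to the atom features and injects spatial and edge information as additive biases on the attention scores. The sketch below follows that recipe under stated assumptions; the class name GlobalCrystalAttention, the tensor shapes, and the bias construction are hypothetical, not CGformer’s actual implementation.

```python
import torch
import torch.nn as nn

class GlobalCrystalAttention(nn.Module):
    """Hypothetical layer: global attention over atoms, biased by the three
    physical encodings described in the article (a sketch, not CGformer)."""

    def __init__(self, dim: int, n_heads: int = 4, max_degree: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Centrality encoding: learned embedding of each atom's coordination
        # number, added to the atom features before attention.
        self.centrality = nn.Embedding(max_degree, dim)

    def forward(self, x, degree, spatial_bias, edge_bias):
        # x:            (B, N, dim) atom features
        # degree:       (B, N)      coordination number per atom
        # spatial_bias: (B, N, N)   derived from interatomic distances
        # edge_bias:    (B, N, N)   derived from bond types and lengths
        x = x + self.centrality(degree)
        # Spatial + edge encodings enter as additive attention-score biases,
        # so physically close or strongly bonded pairs attend more readily.
        mask = (spatial_bias + edge_bias).repeat_interleave(
            self.attn.num_heads, dim=0)  # (B*heads, N, N) float mask
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out

# Illustrative usage with random "crystal" data.
torch.manual_seed(0)
B, N, dim = 2, 5, 32
pts = torch.randn(B, N, 3)               # fake 3D atomic positions
dist = torch.cdist(pts, pts)             # pairwise distances
layer = GlobalCrystalAttention(dim)
out = layer(
    x=torch.randn(B, N, dim),
    degree=torch.randint(0, 12, (B, N)),
    spatial_bias=-dist,                  # closer pairs get a larger bias
    edge_bias=torch.zeros(B, N, N),      # no bond features in this toy
)
print(out.shape)                         # torch.Size([2, 5, 32])
```

The design point is that attention stays global (every atom still attends to every other atom), while the biases restore the geometry and chemistry that a bare graph representation lacks.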
To test CGformer’s capabilities, the team focused on high-entropy materials: complex systems in which four or more elements are mixed at the same crystal site, creating high configurational disorder. These materials are challenging due to their structural complexity and limited experimental data, making them ideal testbeds for evaluating AI models under realistic, data-scarce conditions. High-entropy materials also hold great promise for applications like solid-state electrolytes in next-generation batteries.

In the case of high-entropy sodium-ion solid-state electrolytes (HE-NSEs), CGformer outperformed CGCNN, reducing the mean absolute error (MAE) of property predictions by 25%. Using CGformer, the team screened more than 148,000 potential materials, narrowed the pool to roughly 1,000 stable candidates via filtering and clustering, and then identified the 18 most promising compositions, those with the lowest predicted ion migration barriers (a schematic sketch of this screening funnel appears at the end of this article).

The team synthesized six of the top-ranked materials predicted by CGformer and validated them experimentally using X-ray diffraction, scanning electron microscopy with energy-dispersive X-ray spectroscopy, and impedance spectroscopy. All six successfully formed the expected single-phase NASICON structure, with room-temperature sodium-ion conductivity ranging from 0.093 to 0.256 mS/cm, significantly higher than control samples without high-entropy design.

“The moment we confirmed the predicted materials worked in the lab, it was incredibly rewarding,” Li said. “Seeing the seamless transition from digital prediction to physical reality is the ultimate validation of our approach.”

Beyond discovering new materials, the real impact lies in establishing a scalable, transferable framework for systematic, rapid materials discovery. The platform can serve as a powerful accelerator for next-generation solid-state electrolytes, battery electrodes, and other functional materials. Its adaptability extends to thermoelectrics, photocatalysts, and beyond.

Globally, “AI + materials” is emerging as a central driver of technological transformation and industrial innovation. China is rapidly advancing in this field, shifting from application-focused research to foundational algorithmic innovation. Li emphasized that breakthroughs like CGformer reflect China’s growing capacity to tackle core scientific bottlenecks and contribute original solutions to global challenges. While challenges remain, particularly in building comprehensive databases and mature software ecosystems, these gaps are closing fast. With continued investment in fundamental research, China is poised to play an increasingly pivotal role in shaping the future of materials science.
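For readers who want the HE-NSE screening funnel in schematic form, the following sketch mirrors its shape. The counts come from the article; the stability filter and migration-barrier scores are random placeholders standing in for CGformer’s actual predictions, and the composition names are invented.

```python
# Schematic of the screening funnel: ~148,000 candidates -> ~1,000 stable
# -> 18 most promising -> 6 synthesized. Scores here are random stand-ins.
import random

random.seed(0)
candidates = [f"HE-NSE-{i:06d}" for i in range(148_000)]  # candidate pool

# Step 1: filtering and clustering down to ~1,000 stable candidates
# (here, a random stand-in for the real stability screening).
stable = [c for c in candidates if random.random() < 1_000 / 148_000]

# Step 2: rank the stable pool by predicted Na-ion migration barrier,
# lower is better (here, random placeholder scores).
barrier = {c: random.random() for c in stable}
ranked = sorted(stable, key=barrier.get)

shortlist = ranked[:18]        # the 18 most promising compositions
to_synthesize = shortlist[:6]  # six sent to the lab for XRD/SEM-EDS/EIS
print(len(stable), to_synthesize[:3])
```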
