
Peking University researchers develop high-precision RRAM-based analog computer that rapidly solves matrix equations with 24-bit accuracy, enabling scalable, energy-efficient computing for AI and communications.

Researchers at Peking University and the Beijing Advanced Innovation Center for Integrated Circuits have developed a high-precision analog computing system capable of rapidly solving matrix equations using resistive random-access memory (RRAM) technology. The breakthrough, detailed in a paper published in Nature Electronics, marks a significant advance in analog computing, overcoming the long-standing noise and precision challenges that have historically limited the practical use of such systems.

Unlike digital computers, which process information as discrete binary values (0s and 1s), analog computers manipulate continuous physical quantities, such as electrical current, to represent and compute mathematical variables. While this approach can offer speed and energy efficiency, traditional analog systems have struggled with accuracy because of their sensitivity to noise and signal drift.

The new system, led by Zhong Sun, an assistant professor at Peking University, represents what Sun calls "modern analog computing." Instead of focusing on differential equations, as classical analog computers did, this approach targets matrix equations, the core operations of machine learning, signal processing, and scientific computing, using arrays of non-volatile RRAM devices.

Sun and his team have been exploring analog computing since 2017. Earlier versions of their systems, while fast, lacked the precision needed for real-world applications. Starting around 2022, they focused on closing the gap with digital systems. In their latest work, they achieved 24-bit fixed-point precision, comparable to the widely used FP32 floating-point standard, by combining two key components. The first is a low-precision matrix inversion circuit originally developed in 2019 during Sun's postdoctoral research at Politecnico di Milano. This circuit can solve equations of the form Ax = b in a single step, but with limited accuracy.
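The behavior of such a one-step, low-precision solver can be imitated in software by quantizing the matrix before inverting it. This is a purely illustrative sketch: the quantization routine, bit width, and test matrix below are assumptions, not the hardware's actual transfer function.

```python
import numpy as np

def quantize(M, bits=4):
    """Round entries to a few discrete levels, loosely mimicking the
    limited conductance precision of an RRAM array (illustrative)."""
    scale = np.abs(M).max() / 2 ** (bits - 1)
    return np.round(M / scale) * scale

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 5 * np.eye(4)  # well-conditioned test matrix
b = rng.standard_normal(4)

x_exact = np.linalg.solve(A, b)          # digital reference solution
x_lp = np.linalg.solve(quantize(A), b)   # "one-step" low-precision solve

# The low-precision answer lands near the true solution,
# but falls short of digital precision.
print(np.linalg.norm(x_lp - x_exact))
```

The point of the sketch is that the low-precision result is a useful approximation rather than a final answer, which is exactly the role the inversion circuit plays in the hybrid scheme described next.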
To improve precision, the researchers integrated it with a high-precision matrix-vector multiplication technique that uses bit slicing across multiple RRAM arrays. This hybrid method enables iterative refinement: the low-precision solver generates an initial approximation, and the high-precision component calculates correction values, both direction and magnitude, leading to rapid convergence. The process outperforms conventional gradient-descent algorithms in speed and efficiency.

The team demonstrated scalability by building an 8x8 RRAM array and successfully solving 16x16 matrix equations. They then extended the system to handle larger problems, including 32x32 matrices, showing the potential for further scaling.

Sun emphasized that the most significant achievement is proving that fully analog computing can match the precision of digital systems while retaining its advantages in speed and energy efficiency. The next steps involve scaling up the circuit size, integrating all components onto a single chip, and creating a unified platform that combines matrix inversion and multiplication capabilities.

The technology holds promise for applications in wireless communications, artificial intelligence, and scientific computing, where fast and efficient matrix operations are essential. This work paves the way for a new generation of high-performance, low-power computing systems that could complement, or even challenge, traditional digital architectures.
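The refinement loop can be sketched in software, with bit-sliced matrix-vector products standing in for the multi-array RRAM multiplication and a quantized inverse standing in for the analog inversion circuit. All function names, bit widths, and the test matrix are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bit_sliced_matvec(A, x, bits_per_slice=4, n_slices=8):
    """Compute A @ x by summing contributions from low-bit 'slices' of A,
    loosely mimicking bit slicing across multiple RRAM arrays."""
    base = 2 ** bits_per_slice
    scale = np.abs(A).max()
    remainder = np.abs(A) / scale           # entry magnitudes in [0, 1]
    sign = np.sign(A)
    y = np.zeros_like(x, dtype=float)
    for s in range(1, n_slices + 1):
        digit = np.floor(remainder * base)  # next low-bit slice of A
        remainder = remainder * base - digit
        y += (sign * digit) @ x / base ** s
    return scale * y

def lp_solve(A, r, bits=6):
    """One-shot low-precision solve: invert a quantized copy of A,
    a software stand-in for the analog matrix inversion circuit."""
    scale = np.abs(A).max() / 2 ** (bits - 1)
    Aq = np.round(A / scale) * scale
    return np.linalg.solve(Aq, r)

def refine_solve(A, b, iters=30):
    """Iterative refinement: the low-precision solver supplies the
    correction direction, the high-precision residual its magnitude."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        r = b - bit_sliced_matvec(A, x)  # high-precision residual
        x = x + lp_solve(A, r)           # low-precision correction
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 6 * np.eye(4)
b = rng.standard_normal(4)
x = refine_solve(A, b)
print(np.linalg.norm(A @ x - b))  # residual shrinks far below lp accuracy
```

Each pass contracts the error by a fixed factor set by how closely the low-precision inverse approximates the true one, so accuracy grows geometrically with iteration count rather than being capped by the analog circuit's own precision, which is the core idea behind reaching 24-bit results from low-precision hardware.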
