Neural Cellular Automata Enable Self-Assembling Digital Life with Regeneration and Problem-Solving Powers
Alexander Mordvintsev, a research scientist at Google Research in Zurich, has pioneered a groundbreaking approach to self-assembly in computational systems by reversing the logic of the classic Game of Life. Instead of starting with simple rules and observing complex outcomes, Mordvintsev's neural cellular automata (NCAs) begin with a desired pattern and automatically discover the rules that generate it. This innovation enables what he calls "complexity engineering": designing basic units so they can self-organize into intricate forms without centralized control.

Mordvintsev's system uses a neural network to determine how each cell in a grid evolves based on its own state and those of its neighbors. Unlike traditional cellular automata, where the rules are predefined, NCAs learn their rules through training. The process starts with a single live cell and iteratively evolves the pattern over many steps, adjusting the network's parameters until the output matches the target. Training uses either backpropagation or genetic algorithms; backpropagation is faster, but it requires modifying the automaton to use smooth, continuous cell states rather than binary ones.

To make the system work, Mordvintsev introduced several key changes: cells hold continuous values between 0 and 1, hidden variables guide development, and updates occur at random intervals rather than all at once. These adjustments produce more organic, lifelike behavior. He also used a relatively large neural network of about 8,000 parameters to give the system enough capacity to learn complex patterns, even though simpler rule sets could theoretically do the job.

One of the most striking features of NCAs is their ability to regenerate. When a pattern is damaged, the system often spontaneously repairs itself; in one case, a butterfly with a missing wing regrew it after a kind of backflip. Mordvintsev found that regeneration could emerge naturally or be explicitly trained.
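The mechanics described above can be sketched in a few lines of NumPy. Everything concrete here is an illustrative assumption in the spirit of Mordvintsev's design rather than his exact architecture: the 32x32 grid, the 16 channels per cell (visible values plus hidden variables), the Sobel-filter perception, and the tiny two-layer update network, whose weights are left random where the real system would learn them by backpropagation against a target image.

```python
import numpy as np

H, W = 32, 32          # grid size (illustrative)
CH = 16                # channels per cell: visible values plus hidden variables
rng = np.random.default_rng(0)

# A tiny shared update network. In the real system these weights are trained
# by backpropagation so that the grown pattern matches a target; here they
# are random, so the sketch only demonstrates the update machinery.
W1 = rng.normal(0.0, 0.1, (3 * CH, 64))    # perception vector -> hidden layer
W2 = rng.normal(0.0, 0.01, (64, CH))       # hidden layer -> per-channel change

def perceive(grid):
    """Each cell senses its own state plus horizontal and vertical gradients
    of its 3x3 neighborhood, so all information flow is strictly local."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
    sobel_y = sobel_x.T
    def conv(g, k):
        out = np.zeros_like(g)
        for dy in range(3):
            for dx in range(3):
                out += k[dy, dx] * np.roll(g, (1 - dy, 1 - dx), axis=(0, 1))
        return out
    gx = conv(grid, sobel_x)
    gy = conv(grid, sobel_y)
    return np.concatenate([grid, gx, gy], axis=-1)   # shape (H, W, 3*CH)

def step(grid):
    """One asynchronous update: every cell computes a proposed change from its
    local perception, but only a random subset of cells actually fires."""
    p = perceive(grid)
    dgrid = np.maximum(p @ W1, 0.0) @ W2             # same network at every cell
    fire = rng.random((H, W, 1)) < 0.5               # stochastic update mask
    return np.clip(grid + dgrid * fire, 0.0, 1.0)    # continuous states in [0, 1]

# Growth starts from a single live "seed" cell in the center of the grid.
grid = np.zeros((H, W, CH))
grid[H // 2, W // 2, 3:] = 1.0
for _ in range(64):
    grid = step(grid)
```

The stochastic `fire` mask implements the random update intervals described above, and the clip to [0, 1] reflects the move from binary to continuous cell states that makes backpropagation possible.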
Systems designed to withstand damage sometimes developed redundancy, such as multiple eyes, to ensure survival. Researchers like Sebastian Risi and Ben Hartl suggest this robustness arises from the system's exposure to noise and unpredictability during training, which forces it to adapt.

This self-repairing capability has inspired biologists and engineers alike. Michael Levin of Tufts University sees NCAs as a powerful model for morphogenesis, the process by which cells organize into complex organisms. Unlike traditional AI models, NCAs don't rely on a central blueprint; instead, they mimic how real biological systems self-organize through local interactions.

Beyond biology, NCAs offer a new paradigm for computation. Because they operate with only local, neighbor-to-neighbor communication, they could lead to vastly more energy-efficient computers: unlike von Neumann-style machines or deep neural networks, they lack long-range connections, reducing power consumption. Mordvintsev and others have demonstrated that NCAs can perform tasks such as recognizing handwritten digits, carrying out matrix multiplication, and tackling abstract reasoning problems from the Abstraction and Reasoning Corpus, often by learning processes rather than memorizing patterns.

In robotics, NCAs are being explored as a way to program swarms of robots that behave like unified organisms. Experiments have shown that virtual robot chains can learn to swim like tadpoles by adapting their shape through self-organization. Researchers like Josh Bongard envision future robots that constantly reconfigure themselves, much like multicellular life.

Mordvintsev sees this work as a return to the roots of computing, where ideas from biology, self-organization, and computation were once deeply intertwined. Today, his NCAs are helping to reunite those fields, offering a new way to think about design, intelligence, and life itself.
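A toy example, not drawn from the article, shows how a global answer can emerge from purely local communication: a ring of cells that each repeatedly averages with its two immediate neighbors converges to the global mean of the initial states, even though no cell ever sees beyond its 3-cell neighborhood. The cell values and step count below are arbitrary illustrative choices.

```python
import numpy as np

def local_consensus(values, steps=500):
    """Repeated neighbor-only averaging on a ring of cells.

    Each cell replaces its value with the average of itself and its two
    neighbors (np.roll wraps the ends into a ring). Averaging preserves
    the total, so every cell converges toward the global mean without any
    cell ever receiving non-local information."""
    v = np.asarray(values, dtype=float)
    for _ in range(steps):
        v = (np.roll(v, 1) + v + np.roll(v, -1)) / 3.0
    return v

cells = [0.0, 10.0, 2.0, 8.0, 5.0]   # arbitrary initial cell states
result = local_consensus(cells)
# All cells converge toward the mean of the initial states (here, 5.0).
```

The same principle, local rules producing a coherent global outcome, is what lets NCA cells collectively grow a pattern or settle on a classification without any central controller.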
