AI for Science startup raises $300M to build lab robots

In March, Liam Fedus announced his departure from OpenAI on Twitter, and the seemingly simple post sent shockwaves through Silicon Valley. It triggered an unprecedented wave of interest from venture capitalists eager to back a core member of ChatGPT's early team who had led OpenAI's critical post-training research group. The reaction was so intense it resembled a bidding war, with some investors sending what were described as "love letters" expressing their commitment.

Within months, Fedus and his co-founder Dogus Cubuk launched Periodic Labs, securing a $300 million seed round led by Felicis, with additional participation from Andreessen Horowitz, DST, NVIDIA's NVentures, and Accel. High-profile figures such as Jeff Bezos, Elad Gil, Eric Schmidt, and Jeff Dean joined as angel investors.

Beyond capital, top-tier talent began flowing in: more than twenty researchers from Meta, OpenAI, and Google DeepMind left lucrative positions, forfeiting millions in equity, to join the new venture. Among them were key contributors to OpenAI's o1 and o3 models; Eric Toberer, a materials scientist who had already made breakthrough discoveries in superconductivity; and Matt Horton, the lead developer of Microsoft's generative AI tools for materials science. The list continues to grow, positioning Periodic Labs as one of the biggest beneficiaries of the recent talent war sparked by Meta's internal shifts.

The journey began with a conversation between Fedus and Cubuk seven months earlier. Cubuk, once one of Google Brain's most accomplished machine learning and materials science researchers, had published a landmark paper in 2023 demonstrating a fully automated robotic lab that used a language model to propose chemical recipes and successfully synthesized 41 new compounds. While the tech world buzzed about generative AI's potential to revolutionize science, Fedus and Cubuk realized the puzzle was finally complete: robotic arms for material synthesis were reliable, machine learning simulations could model complex physical systems with high accuracy, and large language models, particularly those refined by Fedus's work at OpenAI, had reached unprecedented reasoning capabilities.

More importantly, they recognized that even failed experiments are valuable. In the world of AI for science, data is the lifeblood, and real-world experiments provide a new, high-quality source of training and fine-tuning data. That could fundamentally disrupt traditional scientific incentives, which reward publication and grant acquisition over the act of exploration itself. "Let AI interact with the real world and bring experiments into the loop; we believe this is the next frontier," Fedus told reporters.

Peter Deng, a former OpenAI executive who had recently joined Felicis, was the first to respond. After hearing of Fedus's departure, Deng sent a quick message, and they met at a coffee shop in San Francisco's Noe Valley. Fedus, eager to share his vision, invited Deng to walk uphill through the neighborhood despite the cold. As they climbed, Deng, wearing a sweater, began to sweat profusely. Then Fedus said something that stopped him in his tracks: "Everyone talks about doing science, but to do science, you have to actually do science."

That moment crystallized a core truth about today's AI landscape. The internet has been largely exhausted; top models have already been trained on roughly 10 trillion text tokens. But training on text alone isn't enough.
You can read textbooks endlessly, but eventually you must run experiments. You need a feedback loop between hypothesis and reality, which is the essence of science. Deng recalled: "The truth about these models is that they only know what's in the normal distribution. We feed them data, and they just regurgitate it." True discovery requires testing. On that San Francisco hill, Deng committed to investing. But when he returned to the office, he discovered a problem: the company didn't yet have a name or a legal entity, so no contract could be signed. "We were that early," Deng said.

Science is inherently iterative, and the existing literature is insufficient: data such as formation enthalpies are noisy, and negative results are rarely published. Crucial uncertainties in scientific understanding can only be resolved through experimentation.

This is exactly what Periodic Labs aims to solve. It isn't building models trained on scientific papers or simulated environments. Instead, it is creating actual AI scientists: autonomous agents operating within physical robotic laboratories. The goal is a massive, automated lab in Menlo Park, California, where robots conduct large-scale experiments based on AI-generated hypotheses: mixing chemical precursors, heating materials, and searching for new superconductors, magnets, or thermal insulators.

This creates a powerful feedback loop: the AI analyzes literature and simulations and proposes experiments; robots execute them; and the resulting data, successes and failures alike, becomes high-quality, proprietary training data that further improves the AI. This "hypothesize-experiment-learn" cycle aims to compress scientific discovery from years down to months, or even weeks. In this model, nature itself becomes the ultimate reward function. Every experiment provides direct, real-world feedback, something no dataset scraped from the internet can replicate. This exclusive, physically grounded data forms a critical moat, essential for training AI with genuine scientific intuition.

During a recent visit to Stanford's Applied Physics Department, Fedus ran tests on state-of-the-art AI models analyzing condensed matter physics data. The results were underwhelming: the models performed poorly compared to human researchers. This contradicted the optimistic narratives from AI giants like OpenAI and Meta, which have claimed their models will accelerate breakthroughs in drug discovery, mathematics, and theoretical physics. In August, OpenAI's Kevin Weil announced the launch of "OpenAI for Science," a new internal initiative to build "the next great scientific instrument: an AI-driven platform to accelerate discovery." An OpenAI spokesperson, Laurance Fauconnet, stated: "We believe advanced AI can accelerate scientific discovery, and OpenAI is uniquely positioned to lead this effort." But Fedus is blunt: "Silicon Valley is being lazy in how it imagines the future of large language models."

In a way, Fedus and Cubuk are reviving a fading tradition, one exemplified by Bell Labs and IBM Research in their prime, when technology companies treated fundamental physical science as a core mission. It was in those labs that the transistor, the laser, and information theory were born. Over the past decades, however, tech research has increasingly focused on software and internet applications, pushing physical science to the margins. Periodic Labs, by contrast, is already building its lab, processing experimental data, running simulations, and testing predictions.
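In pseudocode terms, the hypothesize-experiment-learn cycle described above might look something like the minimal Python sketch below. Periodic Labs has not published its software stack, so every class, method, and recipe here is hypothetical, meant only to illustrate the shape of the loop.

    import random

    # Illustrative sketch only: Periodic Labs has not described its
    # software, so every name below is hypothetical.

    class ProposalModel:
        """Stand-in for a language model that proposes synthesis recipes."""

        def propose(self, history):
            # A real system would condition on literature, simulations,
            # and every previous result; here we just draw a random recipe.
            return {
                "precursors": random.sample(["BaCO3", "CuO", "Y2O3", "La2O3"], 2),
                "temperature_c": random.choice([700, 850, 950]),
            }

    class RoboticLab:
        """Stand-in for the automated synthesis and characterization line."""

        def run(self, recipe):
            # A real lab would mix precursors, fire a furnace, and measure
            # the product; we fake a noisy outcome.
            return {"recipe": recipe, "success": random.random() < 0.2}

    def discovery_loop(model, lab, budget):
        """Closed loop: hypothesize, experiment, learn."""
        history = []
        for _ in range(budget):
            recipe = model.propose(history)
            result = lab.run(recipe)
            history.append(result)  # failed runs are still training data
        return history

    results = discovery_loop(ProposalModel(), RoboticLab(), budget=10)
    print(sum(r["success"] for r in results), "of", len(results), "runs succeeded")

The point of the sketch is the append step: in this framing, every run, successful or not, feeds the next round of proposals. That is exactly the flywheel the company is betting on.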
Their initial focus is on discovering new superconducting materials—potentially enabling energy-efficient technologies of the future. The robotic systems are still being trained; full autonomy isn’t yet achieved. “They need some time to learn,” Cubuk said. Of course, the path remains uncertain. Scientific discovery is inherently unpredictable, regardless of AI assistance. While Periodic Labs now boasts a world-class team and substantial funding, no one can guarantee they’ll find a breakthrough or achieve their goals on schedule. But the 20+ researchers who left their high-paying roles at tech giants are confident. They’ve voted with their feet, betting that the true revolution in AI for science won’t come from larger language models, but from the beakers, furnaces, and data streams of a real laboratory.
