Reflection AI Raises $2B to Build Open Frontier AI Model, Challenging DeepSeek and Closed Labs
Reflection AI, a startup founded in March 2024 by two former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation — a nearly 15x jump from the $545 million valuation it held just seven months earlier. The company is positioning itself as a bold new force in American open-source AI, aiming to become a Western counterpart to China's rising AI labs such as DeepSeek, Qwen, and Kimi.

The founders bring elite credentials in building cutting-edge AI systems: Misha Laskin led reward modeling for DeepMind's Gemini, and Ioannis Antonoglou co-created AlphaGo. Their vision is to prove that frontier AI can be developed outside the confines of tech giants by leveraging top-tier talent and scalable infrastructure. The new round has enabled Reflection AI to assemble a team of researchers and engineers from DeepMind, OpenAI, and other leading AI labs, and to build a training stack capable of handling massive Mixture-of-Experts (MoE) models — architectures previously limited to a few large, closed labs. Laskin said the company has already demonstrated success in autonomous coding and is now expanding toward general agentic reasoning.

A key part of Reflection AI's strategy is openness. It will release model weights — allowing developers to use, modify, and deploy the models freely — while keeping its training datasets and full infrastructure pipelines proprietary. This approach mirrors Meta's Llama and Mistral's open models, where the core models are openly accessible but the full training process remains controlled. Laskin emphasized that model weights are the most valuable asset for developers, since they enable customization, on-premise deployment, and cost control — critical for enterprises and governments building sovereign AI systems. The company's target market is large organizations that want full ownership of and flexibility over their AI tools, especially as AI costs continue to rise.
The startup's mission is driven by a sense of urgency. Laskin warned that if the U.S. does not act, the global standard for AI intelligence could be shaped by non-Western labs, raising geopolitical and security concerns. Many nations and enterprises avoid Chinese models due to legal and data sovereignty risks, making a strong American open-source alternative essential.

The move has been widely welcomed by the American tech community. David Sacks, the White House AI and Crypto Czar, praised the initiative, noting the growing demand for open, customizable AI. Clem Delangue, CEO of Hugging Face, called it a positive step for open-source AI, though he stressed the importance of rapidly sharing models and data.

With around 60 employees focused on AI research, infrastructure, and training, Reflection AI is now securing the compute needed to train its first frontier language model, targeted for release early next year. The model will be text-only at launch, with multimodal capabilities planned for future versions.

Investors in the round include Nvidia, Sequoia, CRV, Lightspeed, GIC, Eric Schmidt, Eric Yuan, Citi, B Capital, 1789, DST, and Disruptive. The funding will support model development, infrastructure scaling, and team growth to meet rising demand for open, high-performance AI systems.
