AI Labs on a Scale: Measuring Ambition Beyond Profit – From Research-First SSI to Product-Driven World Labs
We’re in a pivotal moment for AI labs building foundation models. A new generation of founders, industry veterans and elite researchers alike, is launching independent ventures with varying degrees of focus on commercial success. While some aim to become the next OpenAI or Anthropic, others are more interested in advancing science than turning a profit. This ambiguity has created a growing challenge: it’s increasingly difficult to tell which labs are actually trying to make money. To clarify this, I propose a five-level scale that measures ambition, not actual profitability. The goal is not to judge outcomes but to assess intent.

Level 5: Full commercial ambition. These are companies like OpenAI, Anthropic, and Google’s Gemini team, clearly built to scale into billion-dollar enterprises with real products, revenue, and market dominance.

Level 4: Strong commercial intent with a clear path to monetization. These labs are building products with real market demand and are actively working toward profitability, even if they aren’t there yet.

Level 3: Exploring commercialization but not committed. The founders have ideas about products and markets, but the focus remains on research and innovation. They’re open to monetization but not yet laser-focused on it.

Level 2: Research-first with minimal commercial pressure. These labs prioritize scientific discovery over profit, often backed by generous funding and long-term visions. They may eventually build products, but that isn’t the main goal.

Level 1: Pure research, no commercial intent. These are mission-driven efforts focused entirely on advancing AI knowledge, with no interest in scaling or revenue.

Take Humans&, the latest AI startup making headlines. The founders have a bold vision for AI that emphasizes communication and coordination, aiming to redefine workplace tools. But while they talk about building a “post-software” workplace, they haven’t pinned down specific products or revenue models.
Their vague but intriguing pitch suggests they’re trying to innovate beyond current tools, but they’re not yet committed to a clear commercial path. That puts them at Level 3.

Thinking Machines Lab (TML), led by former OpenAI CTO Mira Murati, started with strong signals of Level 4 ambition: $2 billion raised, a top-tier team, and a clear roadmap. But recent turmoil, including the departure of co-founder Barret Zoph and several other key staff, raises questions. The exodus suggests internal doubts about the company’s direction. It’s possible TML thought it was at Level 4 but realized it was actually at Level 2 or 3. For now, it remains at Level 4, but the situation is unstable.

World Labs, founded by Fei-Fei Li, is a standout. A legendary figure in AI, Li could have spent the rest of her career in academia collecting honors. Instead, she raised $230 million and launched a spatial AI company. She has since shipped a full world-generating model and a commercial product. Demand is emerging from the gaming and visual effects industries, and no major lab has matched this capability. This level of execution and market traction strongly suggests Level 4, and possibly a move to Level 5 soon.

Safe Superintelligence (SSI), founded by Ilya Sutskever, is the clearest Level 1 lab. Sutskever has explicitly avoided commercial pressures, even turning down a Meta acquisition. The company has no products, no revenue cycles, and is focused solely on research into safe superintelligence. But Sutskever himself has hinted at possible pivots: if AI timelines are longer than expected, or if the world benefits from widespread access to powerful AI, SSI might shift gears. So while it’s currently at Level 1, it could jump levels fast.

The real tension in the AI world isn’t about who’s winning; it’s about who’s trying. Confusion over where a lab sits on this scale fuels much of the industry’s drama. When a lab moves from Level 1 to Level 5 overnight, as OpenAI did, it shakes trust. When a company like Meta appears to be at Level 2 while aiming for Level 4, it creates misalignment. The scale helps us see not just what labs are doing, but what they’re trying to become.
