HyperAI


OpenAI Invests $10 Billion in AI Chips, Partners with Broadcom on Titan XPU

OpenAI is taking a major step toward reducing its reliance on Nvidia by developing its own custom AI chips in partnership with Broadcom, according to reports from the Financial Times and The Information. The collaboration is expected to produce OpenAI's in-house "Titan" inference chip by the second half of next year, a pivotal move in the company's strategy to gain greater control over its AI infrastructure. The deal is reportedly worth over $10 billion in revenue for Broadcom, with orders beginning to flow in fiscal 2026 rather than immediately, as some headlines suggested.

OpenAI's push for in-house chips comes amid soaring demand for AI compute and supply constraints tied to Nvidia's dominance in the market. Nvidia remains the industry standard for AI training and inference, and companies like Microsoft, Google, Amazon, and Oracle continue to invest heavily in its hardware, including Oracle's $40 billion order for Blackwell chips. OpenAI, however, is now seeking to build its own dedicated silicon. The shift is driven by the need to reduce costs, improve efficiency, and avoid bottlenecks in scaling its ambitious Stargate Project, a four-year, $500 billion initiative aimed at advancing artificial general intelligence.

The Titan chip will be used exclusively within OpenAI's internal systems to train and run its models, including ChatGPT, rather than being sold to third parties. This mirrors strategies adopted by other tech giants: Google deploys its TPU chips, Amazon has built its Inferentia chips, and Microsoft has invested in custom silicon for its Azure cloud.

OpenAI's ability to fund such a project is bolstered by its massive valuation, reportedly $500 billion after a $10.3 billion secondary stock sale in September, and by a $40 billion Series F funding round in March. Despite that valuation, OpenAI reported a $5 billion loss in 2024 against $4 billion in revenue.
With projected sales of $11.6 billion in 2025, the company remains heavily dependent on outside capital to sustain its infrastructure costs. Relying on public cloud providers or leasing capacity from hyperscalers is becoming increasingly expensive and unsustainable at scale. Designing custom hardware lets OpenAI align its software and hardware stack, optimize performance, and lower long-term expenses.

Broadcom's Q3 results reflect the growing demand for AI-focused chips. The company reported $15.95 billion in revenue, up 22% year on year, with AI-related chip sales rising 63.4% to $5.18 billion. Of that, AI compute accounted for $3.37 billion, up 56.6%, while AI networking brought in $1.81 billion. Broadcom CEO Hock Tan confirmed that the company now has four customers for its custom XPU designs: Google, Meta, ByteDance, and an unnamed fourth client widely believed to be OpenAI.

However, the $10 billion figure refers not to a chip design fee alone but to total orders for complete AI racks built around Broadcom's XPUs, which include networking and packaging. Broadcom's direct revenue from the Titan chip itself will therefore be smaller than the headline figure suggests. Still, the deal underscores Broadcom's growing role as a key enabler of the AI hardware boom.

While Nvidia remains dominant, OpenAI's move signals a broader industry trend: a shift from reliance on third-party chips to in-house solutions. As AI scales, control over infrastructure becomes critical. OpenAI's Titan chip could be a game-changer, not just for the company but for the future of AI development, where hardware-software co-design may become the norm.
