Meta Unveils Meta Compute, a New AI Infrastructure Initiative
Meta has taken a major step toward solidifying its position in the generative AI race by launching Meta Compute, a new initiative aimed at dramatically expanding the company's AI infrastructure. The announcement, made by CEO Mark Zuckerberg on Threads, underscores Meta's commitment to building a massive, energy-intensive computing foundation to support its AI ambitions.

Zuckerberg revealed that Meta plans to construct tens of gigawatts of computing capacity this decade, with the potential to scale to hundreds of gigawatts or more in the long term. A gigawatt equals a billion watts, a scale that reflects the enormous power demands of training and running large language models and other AI systems.

The move follows Meta's earlier capital expenditure projections, in which CFO Susan Li emphasized that investing in AI infrastructure would be a core strategic advantage for developing superior AI models and user experiences. The new initiative signals a shift from incremental upgrades to a full-scale infrastructure buildout, positioning Meta to compete directly with rivals like Microsoft, Google, and Amazon in the race for AI dominance.

Three key executives will lead the effort. Santosh Janardhan, who joined Meta in 2009 and now serves as its head of global infrastructure, will oversee technical architecture, software development, silicon programs, developer productivity, and the operation of Meta's global data center fleet and network. His deep institutional knowledge and long tenure make him central to executing the technical backbone of the initiative.

Daniel Gross, who joined Meta last year, brings a high-profile AI pedigree. A co-founder of Safe Superintelligence alongside former OpenAI chief scientist Ilya Sutskever, Gross will lead a new strategic group focused on long-term capacity planning, supplier partnerships, industry analysis, and business modeling. His role reflects Meta's emphasis on forward-looking planning and on securing reliable supply chains for AI hardware and energy.
Dina Powell McCormick, Meta's newly appointed president and vice chairman, will focus on external engagement. Her responsibilities include working with governments and policymakers to facilitate the permitting, financing, and deployment of Meta's infrastructure projects. Given the regulatory and logistical hurdles involved in building large-scale data centers and power grids, Powell McCormick's role is critical to navigating the complex landscape of public-private partnerships and infrastructure investment.

The push for AI-ready infrastructure is part of a broader industry trend. Microsoft has been aggressively partnering with chipmakers and infrastructure providers, while Alphabet acquired Intersect, a major data center firm, in December to bolster its AI capabilities. As AI models grow more complex and data-intensive, demand for computing power is expected to surge: some estimates suggest the electricity drawn by AI data centers in the U.S. could grow roughly tenfold, from about 5 gigawatts to 50 gigawatts, within the next decade.

Meta Compute represents not just a technological ambition but a strategic bet on infrastructure as a competitive moat. By controlling its own computing stack, from silicon and software to energy sourcing and policy engagement, Meta aims to accelerate innovation while maintaining control over costs and scalability.

While Meta has not yet provided detailed timelines or locations for its infrastructure projects, the initiative marks a pivotal moment in the company's evolution from a social media platform to a foundational AI infrastructure provider. With the right execution, Meta Compute could become a cornerstone of the next generation of AI development, reshaping how AI systems are built, deployed, and powered globally.
