Meta Expands AI Chip Partnership With Broadcom Through 2029
Meta and Broadcom announced Tuesday that they will extend their existing partnership through 2029 to continue jointly designing Meta's custom AI accelerators. The collaboration covers chip design, packaging, and networking, and is intended to help Meta build large-scale AI computing infrastructure. According to the statement, Meta has committed to an initial deployment of training and inference accelerators at a one-gigawatt scale, with plans to expand gradually to multiple gigawatts built on Broadcom technologies. Broadcom said Meta's in-house MTIA chips will be among the first AI chips manufactured on a two-nanometer process.

Meta CEO Mark Zuckerberg said the partnership will help establish the underlying compute foundation required "to deliver personal superintelligence to billions of users." Separately, Broadcom CEO Hock Tan has decided not to seek re-election to Meta's board of directors. Tan joined the board in 2024, and the change was disclosed in company filings.

Meta released four updated versions of its MTIA chips this March. The series originally launched in 2023, primarily for AI workloads inside the company. With demand for AI data centers surging, cloud giants including Meta are accelerating development of application-specific integrated circuits (ASICs) to reduce their reliance on NVIDIA and AMD GPUs. The agreement follows Broadcom's long-term TPU deal with Google, underscoring the growing shift toward custom AI silicon.

Meta previously announced that it could invest up to $135 billion in AI infrastructure by 2026, planning to build 31 data centers, 27 of them in the United States, as it races to catch up with industry competitors such as Anthropic and OpenAI.
