HyperAI

AMD Predicts Zettascale Supercomputers Will Require Half a Gigawatt of Power by 2035

11 days ago

At the International Supercomputing Conference (ISC) 2025, AMD highlighted the challenges facing AI accelerator development, particularly the escalating power these advanced chips require. According to a report by ComputerBase, AMD anticipates that zettaFLOP-capable supercomputers, expected to emerge by 2035, will require roughly half a gigawatt of power, enough to supply about 375,000 homes.

AMD presented a graph projecting supercomputer power consumption from 2010 to 2035. Early in that window, around 2010 to 2015, supercomputers delivered about 3.2 gigaFLOPS per watt of efficiency. By 2035, the company forecasts that zettascale systems will reach roughly 2,140 gigaFLOPS per watt while drawing about 500 megawatts. The graph assumes AI processor efficiency doubles every 2.2 years, yet it still projects a substantial increase in absolute power demand.

The primary drivers of this surge are memory bandwidth and cooling capacity. As AI hardware grows more powerful, both must scale to support the increased computational load, compounding overall energy usage. The demand for multiple compute precisions, including FP128, FP64, FP16, and FP8, adds further complexity: higher-precision formats such as FP64 and FP128 offer greater accuracy, but many workloads benefit more from lower-precision formats such as FP16 and FP8. Future AI accelerators will therefore need to handle a wide range of precision operations efficiently.

Current power figures are already striking. Nvidia's B200 AI accelerator has a thermal design power (TDP) of 1,000 watts, and AMD's new MI355X reaches 1,400 watts. By contrast, Nvidia's A100, the flagship GPU of five years ago, drew only 400 watts, less than the power needed by a high-end gaming graphics card today.
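The projection figures above can be cross-checked with a little arithmetic. A minimal sketch, using only the numbers AMD's graph cites (the 2,140 gigaFLOPS-per-watt efficiency and the 2.2-year doubling cadence are AMD's assumptions, not measured values):

```python
# Back-of-envelope check of AMD's zettascale power projection.
# All figures come from the article; nothing here is an official AMD model.

ZETTAFLOP = 1e21            # 1 zettaFLOP in FLOPS
eff_2035 = 2140e9           # projected efficiency for 2035: 2,140 gigaFLOPS per watt

power_w = ZETTAFLOP / eff_2035
power_mw = power_w / 1e6
print(f"Implied zettascale power draw: {power_mw:.0f} MW")   # ~467 MW, i.e. roughly half a gigawatt

# The 375,000-homes comparison implies about 1.33 kW of draw per home.
kw_per_home = 500_000 / 375_000

def projected_efficiency(eff_now, years, doubling_period=2.2):
    """Extrapolate FLOPS/W under the assumed fixed doubling cadence."""
    return eff_now * 2 ** (years / doubling_period)
```

Running the numbers this way shows why "half a gigawatt" is the headline: a zettaFLOP machine at 2,140 gigaFLOPS per watt lands at roughly 467 megawatts.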
To address the looming energy crisis, the U.S. government is considering using nuclear power plants to supply the necessary power, and some leading technology companies, such as Microsoft, are investing heavily in nuclear fusion research to meet their data centers' energy needs sustainably.

While today's supercomputers operate in the exaFLOP range, with El Capitan, based on AMD's MI300A, currently holding the title of the world's fastest supercomputer, the landscape is evolving rapidly. Full-scale AI data center farms are already reaching zettaFLOP performance: Oracle, for example, has launched the first zettascale cloud computing cluster, equipped with 131,072 Blackwell GPUs and delivering a combined 2.4 zettaFLOPS.

These projections and developments underscore the critical need for innovative solutions to manage power consumption and ensure the sustainability of future AI infrastructure. The challenge is not only technological but also environmental, as the energy demands of these supercomputers could have significant implications for global energy resources and climate change. Collaboration between government, industry, and academic researchers will be essential to navigate this complex issue and pave the way for the next generation of AI technology.
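The Oracle figure can likewise be sanity-checked by dividing cluster throughput across its GPUs. A back-of-envelope sketch; note the article does not state which precision (or sparsity setting) the 2.4 zettaFLOPS refers to, so the per-GPU number is only an implied approximation:

```python
# Implied per-GPU throughput of Oracle's zettascale cluster.
# CAVEAT: the precision behind the 2.4 zettaFLOPS figure is not given
# in the article, so this is an approximate cross-check, not a spec.

total_flops = 2.4e21        # 2.4 zettaFLOPS, per the article
num_gpus = 131_072          # Blackwell GPUs in the cluster

per_gpu_pflops = total_flops / num_gpus / 1e15
print(f"Implied per-GPU throughput: {per_gpu_pflops:.1f} petaFLOPS")  # ~18.3 petaFLOPS
```

The result, roughly 18 petaFLOPS per GPU, is only reachable at low precision, which is why zettaFLOP-class AI clusters are not directly comparable to exaFLOP-class scientific supercomputers ranked on double-precision benchmarks.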
