NVIDIA Unveils Spectrum-XGS Ethernet to Connect Distributed Data Centers into Giga-Scale AI Super-Factories
NVIDIA has unveiled NVIDIA Spectrum-XGS Ethernet, a groundbreaking networking technology designed to connect distributed data centers into unified, giga-scale AI super-factories. The announcement was made at the Hot Chips conference, highlighting a pivotal advancement in infrastructure for the rapidly expanding AI industry.

As demand for AI computing grows, single data centers are hitting physical and power limits, making it increasingly difficult to scale within one facility. Traditional Ethernet solutions struggle with high latency, jitter, and inconsistent performance when connecting distant sites—barriers that hinder the seamless operation of large-scale AI workloads.

Spectrum-XGS Ethernet addresses these challenges by introducing a new paradigm: scale-across. This capability complements NVIDIA’s existing scale-up and scale-out strategies, enabling the creation of massive, geographically dispersed AI super-factories that span cities, countries, and continents. These unified systems can deliver giga-scale intelligence by leveraging the combined power of multiple data centers as a single, optimized compute environment.

Jensen Huang, founder and CEO of NVIDIA, emphasized the significance of the development: “The AI industrial revolution is here, and giant-scale AI factories are the essential infrastructure. With NVIDIA Spectrum-XGS Ethernet, we add scale-across to scale-up and scale-out, linking data centers across the globe into vast, giga-scale AI super-factories.”

Built as an extension of the NVIDIA Spectrum-X Ethernet platform, Spectrum-XGS integrates intelligent algorithms that dynamically adapt network performance based on the distance between data centers. Features like auto-adjusted distance congestion control, precision latency management, and comprehensive end-to-end telemetry ensure consistent, high-performance communication across long distances.
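To see why congestion control must adapt to distance, consider the bandwidth-delay product (BDP): the amount of data that must be in flight to keep a link fully utilized grows with round-trip time. The sketch below is purely illustrative (the function names and figures are our own, not NVIDIA's implementation) and shows how the required in-flight window balloons when the same link spans data centers rather than a single rack.

```python
# Illustrative sketch only -- NOT NVIDIA's algorithm. It shows why a
# congestion window tuned for intra-data-center round-trip times (RTTs)
# collapses throughput on long-haul inter-site links: the window must
# track the bandwidth-delay product (BDP), which scales with distance.

def bandwidth_delay_product(link_gbps: float, rtt_ms: float) -> float:
    """Bytes that must be in flight to keep the link full: bandwidth * RTT."""
    bits_in_flight = link_gbps * 1e9 * (rtt_ms / 1e3)
    return bits_in_flight / 8  # bits -> bytes

# A 400 Gb/s link inside one facility (~1 microsecond RTT) needs only
# ~50 KB in flight; the same link stretched across a metro region
# (~10 ms RTT) needs ~500 MB -- a 10,000x larger window.
intra_dc = bandwidth_delay_product(400, 0.001)  # ~5.0e4 bytes
inter_dc = bandwidth_delay_product(400, 10.0)   # ~5.0e8 bytes
```

A sender that keeps a fixed, small window over the long-haul link would leave the pipe mostly idle, which is the throughput cliff that distance-aware congestion control is meant to avoid.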
The result, according to NVIDIA, is nearly double the performance of the NVIDIA Collective Communications Library (NCCL), significantly accelerating multi-GPU and multi-node training and inference tasks. The technology enables multiple data centers to function as one cohesive AI supercomputer, delivering predictable, low-latency performance regardless of geographic separation. This is critical for enterprises building the next generation of AI systems.

CoreWeave, a leading hyperscaler in AI infrastructure, is among the first to adopt Spectrum-XGS Ethernet. Peter Salanki, co-founder and CTO of CoreWeave, said, “Our mission is to deliver the most powerful AI infrastructure to innovators everywhere. With NVIDIA Spectrum-XGS, we can connect our data centers into a single, unified supercomputer, giving our customers access to giga-scale AI that will accelerate breakthroughs across every industry.”

The Spectrum-X platform, which includes NVIDIA Spectrum-X switches and NVIDIA ConnectX-8 SuperNICs, offers 1.6 times greater bandwidth density than standard Ethernet solutions, making it well suited to multi-tenant, hyperscale AI environments. It powers some of the world’s largest AI supercomputers, combining ultra-low latency, seamless scalability, and high efficiency.

This announcement follows a series of recent networking innovations from NVIDIA, including the Spectrum-X and Quantum-X silicon photonics switches, which enable millions of GPUs to be interconnected across sites while reducing energy use and operational costs.

Spectrum-XGS Ethernet is now available as part of the NVIDIA Spectrum-X Ethernet platform. For more information, visit the Hot Chips event page.
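Why collective-communication performance hinges on predictable latency can be seen with a standard back-of-the-envelope cost model for a ring all-reduce, the pattern NCCL commonly uses for gradient synchronization. This is the textbook alpha-beta model, not an NCCL measurement, and the parameter values below are illustrative: each of the 2(N-1) ring steps pays the per-hop latency, so long or jittery inter-site links inflate the latency term even when bandwidth is plentiful.

```python
# Standard alpha-beta cost model for a ring all-reduce over N ranks
# (illustrative, not an NCCL benchmark). Each rank sends its data in
# 2*(N-1) steps; every step pays one network latency, so inter-site
# latency and jitter dominate at long distances.

def ring_allreduce_seconds(n_ranks: int, msg_bytes: float,
                           link_gbps: float, latency_s: float) -> float:
    steps = 2 * (n_ranks - 1)
    bytes_per_step = msg_bytes / n_ranks       # each step moves 1/N of the data
    transfer_bits = steps * bytes_per_step * 8
    return steps * latency_s + transfer_bits / (link_gbps * 1e9)

# Same 1 GB all-reduce over 8 ranks on a 400 Gb/s link:
lan = ring_allreduce_seconds(8, 1e9, 400, 1e-6)   # ~35 ms, latency negligible
wan = ring_allreduce_seconds(8, 1e9, 400, 10e-3)  # +140 ms of pure latency
```

In this model the 10 ms inter-site round trip adds 14 latency payments on top of the unchanged transfer time, which is why keeping cross-site latency low and consistent directly translates into collective throughput.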