Nvidia’s Jensen Huang Champions U.S. Tech Revival with AI, 6G, and Quantum Breakthroughs at GTC Washington
At Nvidia’s GTC Washington DC event, CEO Jensen Huang delivered a powerful vision for American technological leadership, dressed in his signature black ensemble but speaking with a distinctly red, white, and blue message. He framed Nvidia’s latest innovations as part of a national mission to reclaim U.S. dominance in the global AI and advanced computing race, emphasizing the need to bring technology development and manufacturing back to the United States, particularly in critical infrastructure like telecommunications.

He highlighted a strategic partnership with Nokia, the Finnish telecom-equipment maker, in which Nvidia is investing $1 billion. The collaboration will integrate Nvidia’s AI-RAN (Radio Access Network) products into Nokia’s portfolio, enabling telecom providers to build AI-native 5G and future 6G networks on American-designed technology. Huang stressed that decades of reliance on foreign-built wireless infrastructure must end, and that the U.S. now has the opportunity to lead the next generation of global communication standards.

To support this, Nvidia unveiled the Aerial RAN Computer Pro (ARC-Pro), a new accelerated computing platform designed for 6G. Built on the Grace CPU, Blackwell GPU, and Mellanox ConnectX networking, ARC-Pro runs the CUDA-X Aerial library, making it essentially a software-defined, programmable computer capable of both wireless communication and real-time AI processing. This lets telecom operators upgrade their networks through software, creating a seamless path from 5G-Advanced to 6G.

Huang also turned his attention to quantum computing, introducing NVQLink, a new interconnect that directly links quantum processors with Nvidia’s GPUs. Quantum error correction—essential for stable, scalable quantum systems—requires rapid data movement between classical and quantum hardware. NVQLink enables terabytes of data to be transferred thousands of times per second, making real-time error detection and correction feasible.
This is powered by CUDA-Q, Nvidia’s quantum-classical computing platform, which lets researchers orchestrate quantum devices and supercomputers together in a unified, accelerated quantum computing environment. Seventeen quantum companies and eight U.S. national laboratories—including Brookhaven, Fermilab, and Oak Ridge—will use NVQLink, underscoring its role in advancing U.S. scientific leadership.

Nvidia is also building seven AI supercomputers for the U.S. Department of Energy, including Solstice, the largest AI system ever built for the DOE, powered by 100,000 Blackwell GPUs. Equinox, another system with 10,000 Blackwells, will join it at Argonne National Laboratory; together the two will deliver a combined 2,200 exaflops of performance. Huang credited the Trump administration and Energy Secretary Chris Wright for enabling these massive projects through supportive energy policies.

The company is also working with HPE on next-generation systems. The GX5000 architecture will power “Discovery,” the successor to the Frontier supercomputer at Oak Ridge. Meanwhile, two new systems—“Mission” and “Vision”—will be deployed at Los Alamos National Laboratory, featuring Nvidia’s upcoming Vera Arm CPUs and Rubin GPUs in highly optimized superchips. Mission will manage the U.S. nuclear stockpile and go live in 2027; Vision will advance AI research for national security. Huang described the Rubin GPU as a product of extreme co-design, in which every layer of hardware and software is engineered together for peak performance, and said Rubin is on track for production next year, delivering 100 petaflops of computing power.

Demand for Nvidia’s AI chips continues to surge. Where Hopper sold four million units, Blackwell and Rubin orders now total 20 million GPUs—and each Blackwell chip contains two GPUs in one package.
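As a rough sanity check, the Argonne figures quoted above pin down an implied per-GPU throughput. The split of the 2,200 exaflops between Solstice and Equinox is not given, so the sketch below simply averages over all 110,000 GPUs; the result is an illustration derived from the quoted totals, not an Nvidia specification, and presumably reflects low-precision AI math.

```python
# Per-GPU throughput implied by the DOE figures above.
# Assumption (for illustration only): the combined 2,200 exaflops is
# averaged evenly across both systems' GPUs.
solstice_gpus = 100_000
equinox_gpus = 10_000
combined_exaflops = 2_200

total_gpus = solstice_gpus + equinox_gpus
petaflops_per_gpu = combined_exaflops * 1_000 / total_gpus  # 1 exaflop = 1,000 petaflops

print(f"{total_gpus:,} GPUs -> {petaflops_per_gpu:.0f} petaflops per GPU")
# -> 110,000 GPUs -> 20 petaflops per GPU
```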
Huang revealed that Nvidia already has $500 billion in projected revenue from Blackwell and early Rubin sales through 2026, with 6 million Blackwell GPUs shipped in the first half of 2025. That represents a fivefold increase over Hopper, driven largely by hyperscalers and cloud providers. Despite the massive scale, pricing per chiplet remains stable, reflecting volume-driven discounts; as enterprises increasingly adopt generative AI, however, revenue per chiplet—and profitability—are expected to rise. Finally, Huang noted that the demand chart excludes eight-way NVL8 servers and the custom architectures used in U.S. and European HPC centers, suggesting actual demand may be higher still.
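The unit figures above can be checked directly. A minimal sketch, using only numbers quoted in the article; the package estimate assumes, purely for illustration, that the entire order book were Blackwell parts:

```python
# Demand figures quoted above: Hopper's four million units versus the
# 20 million GPUs on order across Blackwell and Rubin.
hopper_units = 4_000_000
blackwell_rubin_gpus = 20_000_000

growth_multiple = blackwell_rubin_gpus / hopper_units
print(f"Order book is {growth_multiple:.0f}x Hopper's unit total")
# -> Order book is 5x Hopper's unit total

# Each Blackwell package carries two GPUs, so if the whole order were
# Blackwell, 20 million GPUs would mean about 10 million packages.
approx_packages = blackwell_rubin_gpus // 2
print(f"~{approx_packages:,} packages under that assumption")
# -> ~10,000,000 packages under that assumption
```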
