HyperAI

Nvidia's Open-Source Initiative Amid Retreat by AI Giants

While major U.S. AI companies are pulling back from open source, Nvidia is doing the opposite, inviting more developers into its ecosystem, provided they build on Nvidia hardware.

On Monday, Nvidia announced the acquisition of SchedMD, the lead developer of Slurm, the dominant open-source workload scheduler for high-performance computing (HPC) and AI training clusters. Alongside the acquisition, the company unveiled Nemotron 3, a new family of open-source large language models that it claims is the most efficient open model suite to date.

The Nemotron 3 lineup spans three variants: Nano (3 billion parameters), Super (100 billion), and Ultra (500 billion), all built on a Mixture of Experts (MoE) architecture. Nvidia reports that Nano delivers four times the throughput of its predecessor, Nemotron 2, and can reduce token generation during inference by up to 60%. Its context window has also grown to 1 million tokens, seven times larger than before. Only the Nano model is available today; Super and Ultra are expected in early 2026.

The release is unusually open. Nvidia has published the model weights, nearly 10 trillion tokens of synthetic pretraining data, and full training recipes under the NVIDIA Open Model License, which permits commercial use, derivative models, and redistribution. Nvidia claims no ownership over model outputs. Developers can access the models on GitHub and Hugging Face and use tools such as NeMo Gym and NeMo RL for reinforcement learning and safety evaluation.

The SchedMD acquisition strengthens Nvidia's software strategy. Slurm is the backbone of more than half of the world's TOP500 supercomputers and is critical for managing complex AI workloads. Nvidia has collaborated with SchedMD for over a decade and confirmed that Slurm will continue to be developed as open-source and vendor-neutral.
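The efficiency claims flow from the Mixture of Experts design: a router activates only a few "expert" sub-networks per token, so compute per forward pass scales with the active experts, not the total parameter count. The toy sketch below (plain Python with made-up sizes; it is not Nvidia's implementation, and the expert counts are illustrative only) shows top-k expert routing:

```python
import math
import random

random.seed(0)

EXPERTS = 8  # total experts in the layer (toy number)
TOP_K = 2    # experts activated per token (sparse routing)
DIM = 4      # toy hidden dimension

# Each expert is a tiny linear map (DIM x DIM weight matrix).
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(EXPERTS)]
# Router: one score vector per expert.
router = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(EXPERTS)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token):
    """Route a token to its top-k experts and mix their outputs."""
    scores = softmax([sum(w * x for w, x in zip(r, token)) for r in router])
    top = sorted(range(EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    # Renormalize gate weights over the selected experts only.
    gate_sum = sum(scores[i] for i in top)
    out = [0.0] * DIM
    for i in top:
        gate = scores[i] / gate_sum
        y = [sum(w * x for w, x in zip(row, token)) for row in experts[i]]
        out = [o + gate * v for o, v in zip(out, y)]
    return out, top

token = [0.5, -1.0, 0.25, 2.0]
output, used = moe_forward(token)
# Only TOP_K of the EXPERTS weight matrices were touched for this token,
# which is why an MoE model's inference cost tracks its active (not total)
# parameter count.
print(f"experts used: {sorted(used)} of {EXPERTS}")
```

In a production MoE the experts are feed-forward blocks inside each transformer layer and routing is batched on the GPU, but the principle is the same: a 100-billion-parameter model may only run a fraction of those weights per token.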
Financial terms were not disclosed, but SchedMD CEO Danny Auble called the acquisition "the ultimate recognition of Slurm's pivotal role in the most demanding HPC and AI environments."

Meanwhile, other U.S. AI leaders are retreating from open source. Just last week, Bloomberg and other outlets reported that Meta is developing a new model codenamed "Avocado," expected in spring 2026, and that it may not be open-sourced. That would mark a sharp reversal for a company once vocal about open AI as the path forward. Last year, Mark Zuckerberg declared open source "the way forward" and criticized OpenAI for becoming increasingly closed. But after Llama 4's flagship Behemoth model underperformed in benchmarks, Meta's Superintelligence Labs began shifting toward closed models; its new chief AI officer, Alexandr Wang, is a known advocate of closed-source approaches.

OpenAI's open-source efforts have also slowed. In August, it released the gpt-oss series, gpt-oss-120b (117 billion parameters) and gpt-oss-20b (21 billion), under the Apache 2.0 license, but that was its first open-weight release since GPT-2 five years earlier. With intense competition from Google and pressure to maintain its edge, OpenAI is unlikely to prioritize open-source initiatives.

In contrast, China's open-source AI movement is accelerating rapidly. According to a joint report by OpenRouter and a16z, China's share of global open-source LLM usage jumped from 1.2% at the end of 2024 to nearly 30% today, driven by models such as DeepSeek-V3, Alibaba's Qwen series, and Moonshot AI's Kimi K2. Chinese companies ship updates frequently, creating a dense, fast-moving ecosystem that U.S. firms are struggling to match.

Nvidia's CEO, Jensen Huang, has been candid about this shift. At the GTC conference in October, he said China is "far ahead" in open source and warned that if U.S. companies fully retreat, they may be unprepared for a future in which Chinese software dominates global AI infrastructure.

So why is Nvidia going all-in on open source while others close their doors? The answer lies in its core business: selling chips. Nvidia's true moat is not just its GPUs but the entire software stack built around them, above all CUDA. Since its 2006 launch, CUDA has become the de facto standard for AI, machine learning, and HPC; more than 4 million developers use it, and frameworks like PyTorch and TensorFlow are deeply integrated with it.

Nvidia has long understood that open source is a powerful tool for locking in developers. While CUDA itself remains closed-source, a point of criticism from rivals, Nvidia has invested heavily in open ecosystems: contributing to Linux, PyTorch, TensorFlow, and Kubernetes, and releasing open-source tools such as CV-CUDA and TensorRT. In 2022 it even open-sourced its Linux GPU kernel modules under dual GPL/MIT licenses.

The Nemotron 3 launch is a natural extension of this strategy. As Kari Briski, Nvidia's VP of Generative AI Software, put it: "When we're the best development platform, people naturally choose us—our platform, our GPUs, not just for today's projects, but for tomorrow's products." Developers who build AI applications and train agents with Nemotron, NeMo, and Triton become deeply embedded in the Nvidia ecosystem; over time, switching to AMD, Intel, or other hardware grows increasingly costly.

Nvidia is not competing with OpenAI or Anthropic for model market share; those companies make money from API subscriptions, while Nvidia's business is selling hardware. Its real target is any alternative ecosystem that could pull developers away, whether China's open models, AMD's ROCm, Intel's oneAPI, or efforts to train AI on non-Nvidia platforms.

Moreover, Nemotron appeals to a niche but crucial audience: enterprises and governments that demand transparency and control.
Briski noted that many enterprise clients cannot deploy opaque models or build businesses on code they cannot audit. Nvidia aims to provide a reliable, continuously updated open-source roadmap, something its competitors lack.

That positions Nvidia at the heart of the global push for "sovereign AI": nations such as South Korea, India, and countries in the Middle East want AI systems they can audit, regulate, and control. Closed models will not suffice, and Chinese models raise geopolitical concerns. Nvidia fills the gap.

Ultimately, open source is not Nvidia's goal; it is a strategy. By offering powerful, transparent, well-supported models, Nvidia ensures that the next generation of AI development is built on its hardware. The more developers use Nemotron, the more they depend on CUDA, and the more they are locked into the Nvidia ecosystem. In the end, open source is not about giving away the future; it is about making sure that future runs on Nvidia's chips.
