
How a small research lab sparked Nvidia’s rise to a $4 trillion empire


When Bill Dally joined Nvidia's research lab in 2009, it was a small team of about a dozen people focused primarily on ray tracing, a rendering technique used in computer graphics. Dally, who had been consulting for Nvidia since 2003 while teaching at Stanford, was recruited by David Kirk and CEO Jensen Huang in what Dally described as a "full-court press." They ultimately convinced him to leave academia and help build Nvidia's research vision.

Dally took the helm of the lab and immediately began expanding its scope beyond graphics. Researchers started exploring circuit design and VLSI (very large-scale integration), laying the groundwork for more advanced chip architectures. The real turning point, though, came when the team turned to artificial intelligence. Long before AI became a global phenomenon, Nvidia's research lab was investigating the potential of GPUs for AI workloads. In 2010, Dally and his team concluded that AI would transform computing and pushed for a strategic shift: specializing GPUs for AI and developing the software to support them. Huang embraced the vision, and Nvidia began investing heavily in AI infrastructure long before market demand exploded.

Today, Nvidia's dominance in AI chips is undeniable, but the company isn't resting. It is now turning its attention to physical AI and robotics, aiming to become the "brain" behind future robots.

Sanja Fidler, who joined Nvidia's research team in 2018, is at the forefront of this effort. She was already working on AI-driven robot simulation when she shared her research with Jensen Huang at a conference at MIT. He invited her to join, not as an employee but as a collaborator: "Come work with me, not with us," he told her. The invitation was compelling, and she accepted.

Fidler launched the Omniverse research lab in Toronto, focused on building realistic simulations for physical AI. One of the biggest challenges was creating high-quality 3D data from images and videos. To solve it, Nvidia invested in differentiable rendering, a technique that makes the rendering pipeline itself differentiable so that AI can effectively run it in reverse, recovering 3D models from 2D images (a minimal sketch of the idea appears at the end of this article).

In 2021, Nvidia released GANverse3D, an early model capable of turning 2D images into 3D models. Building on that work, the team developed the Neural Reconstruction Engine in 2022 to process video data from robots and self-driving cars. These tools became the foundation for Nvidia's Cosmos family of world models, unveiled at CES in January.

Now the focus is on speed. Simulations must run in real time, or faster, for robots to react quickly. Fidler points out that robots don't need to experience time at the same pace as the physical world; a simulation can run 100 times faster. By making these models significantly faster, Nvidia aims to accelerate the development of practical robotics.

At SIGGRAPH, Nvidia announced new world AI models for generating synthetic data to train robots, along with new software libraries and tools for robotics developers.

Despite the excitement around humanoid robots, both Dally and Fidler remain cautious: they believe it will take several more years before robots are common in homes. Still, they see steady progress, driven by advances in visual AI, generative AI for planning, and growing datasets. "As we solve each problem and scale the data, these robots will get smarter," Dally said. "The foundation is being built."
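For readers curious about the mechanics, differentiable rendering means the renderer is built entirely from operations that gradients can flow through, so an optimizer can nudge 3D scene parameters until the rendered output matches an observed photograph. The sketch below illustrates the core idea in PyTorch with a deliberately tiny "scene," a soft-edged disk defined by a position and radius; the renderer, the parameter choices, and the loss are illustrative assumptions for this article, not Nvidia's actual implementation.

```python
# Toy sketch of differentiable rendering (illustrative only, not Nvidia's
# pipeline): a soft rasterizer renders a disk from three scene parameters,
# and gradient descent recovers those parameters from a target image.
import torch

def render_disk(cx, cy, radius, size=64, sharpness=25.0):
    """Differentiable "renderer": rasterize a soft-edged disk onto a grid."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, size), torch.linspace(0, 1, size), indexing="ij"
    )
    # Small epsilon keeps sqrt differentiable at the disk center.
    dist = torch.sqrt((xs - cx) ** 2 + (ys - cy) ** 2 + 1e-8)
    # A sigmoid edge instead of a hard threshold lets gradients flow.
    return torch.sigmoid(sharpness * (radius - dist))

# "Observed" 2D image produced by unknown scene parameters.
target = render_disk(torch.tensor(0.65), torch.tensor(0.40), torch.tensor(0.20))

# Start from a wrong guess and optimize (cx, cy, radius) against the image.
params = torch.tensor([0.40, 0.60, 0.12], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.02)
for step in range(400):
    opt.zero_grad()
    rendered = render_disk(params[0], params[1], params[2])
    loss = ((rendered - target) ** 2).mean()  # photometric loss
    loss.backward()  # gradients flow backward through the renderer
    opt.step()

print("recovered (cx, cy, r):", [round(float(p), 3) for p in params])
```

The same loop scales up: swap the toy rasterizer for a differentiable renderer of textured meshes or neural fields, and the optimizer recovers full 3D geometry and appearance from ordinary 2D images, which is the inverse problem described above.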
