NVIDIA Unveils Open Physical AI Models and Frameworks to Accelerate Robotics and Autonomous Systems Development
At CES 2026, NVIDIA unveiled a suite of open physical AI models and frameworks designed to accelerate the development of robots and autonomous systems. The announcement is part of the broader Into the Omniverse series, which highlights how open standards such as OpenUSD and NVIDIA Omniverse are transforming workflows for developers, 3D creators, and enterprises.

Open source is proving vital in robotics, giving developers shared access to simulation tools, AI models, and infrastructure. NVIDIA’s ecosystem helps developers build safer, more intelligent physical AI systems through a modular, end-to-end toolkit spanning simulation, synthetic data generation, cloud orchestration, and edge deployment. Central to this stack is OpenUSD, which standardizes 3D data exchange and lets digital twins be reused seamlessly across development stages. NVIDIA Omniverse libraries provide the foundation for high-fidelity simulation, supplying ground-truth environments that feed every layer of the AI development pipeline.

On the show floor, companies demonstrated real-world applications of the NVIDIA physical AI stack. Caterpillar introduced the Cat AI Assistant, powered by NVIDIA Nemotron open models and running on the Jetson Thor edge module. The system lets operators interact with heavy machinery in natural language, receiving step-by-step guidance and adjusting safety settings via voice commands. Behind the scenes, Caterpillar uses Omniverse to build digital twins of factories and job sites, simulating workflows and traffic patterns before deploying changes in the real world.

LEM Surgical showcased its FDA-cleared Dynamis Robotic Surgical System, a dual-arm humanoid robot designed for spinal surgery. It relies on Jetson AGX Thor for processing, Holoscan for real-time sensor handling, and Isaac for Healthcare to train its autonomous arms.
The system uses NVIDIA Cosmos Transfer to generate physically accurate synthetic training data and Isaac Sim for digital twin simulation, enabling precise, repeatable surgical procedures that reduce strain on medical teams.

NEURA Robotics is building cognitive robots on the full NVIDIA stack. Its 4NE1 humanoid and MiPA service robots are trained in OpenUSD-based digital twins via Isaac Sim and Isaac Lab. The company uses Isaac GR00T-Mimic to post-train foundation models and is collaborating with SAP and NVIDIA to integrate SAP’s Joule agents into its Neuraverse ecosystem, simulating complex behaviors before real-world deployment.

AgiBot uses NVIDIA Cosmos Predict 2 as the core world model for its Genie Envisioner platform, generating action-conditioned videos grounded in realistic physics and visual priors. Combined with Isaac Sim and Isaac Lab, and fine-tuned with proprietary data, policies trained in Genie Envisioner transfer more reliably to physical robots such as Genie2 humanoids and compact tabletop units powered by Jetson Thor.

Intbot is using NVIDIA Cosmos Reason 2 to equip its social robots with advanced reasoning capabilities. The model helps robots interpret social cues and safety context, enabling more natural, context-aware interactions. Intbot’s Cosmos Cookbook demonstrates how reasoning vision-language models can guide decisions on when and how to speak, improving human-robot collaboration.

NVIDIA also launched Agile, an Isaac Lab-based engine for humanoid locomotion and manipulation. It offers a complete, sim-to-real-verified workflow for training reinforcement learning policies on platforms such as the Unitree G1 and LimX Dynamics TRON. With built-in task configurations, decision-making models, training tools, and evaluation frameworks, Agile streamlines policy development and improves real-world transfer.
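Workflows like Agile’s follow a common sim-to-real pattern: train a reinforcement learning policy entirely against a simulator, then freeze it and deploy it to hardware. As a rough, self-contained illustration of that pattern only (this is not Isaac Lab’s actual API; the toy track environment and all hyperparameters below are invented), here is a minimal tabular Q-learning loop:

```python
import random

# Illustrative only: a toy tabular Q-learning loop standing in for the
# sim-to-real pattern -- train a policy in a simulator, then deploy the
# frozen policy. Real workflows such as Agile use Isaac Lab with
# GPU-parallel physics and deep RL; everything named here is hypothetical.

N_STATES = 5          # linear track: states 0..4, goal at state 4
ACTIONS = (-1, +1)    # step left or step right


def step(state, action):
    """Toy 'simulator': deterministic 1-D walk with a goal reward."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done


def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """'Simulation phase': learn a Q-table, then freeze a greedy policy."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < eps:                       # explore
                a = rng.randrange(len(ACTIONS))
            else:                                        # exploit
                a = max(range(len(ACTIONS)), key=lambda i: q[state][i])
            next_state, reward, done = step(state, ACTIONS[a])
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][a] += alpha * (target - q[state][a])
            state = next_state
    # "Deployment phase": the learned table becomes a frozen greedy policy.
    return lambda s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]


if __name__ == "__main__":
    policy = train()
    state, steps = 0, 0
    while state != N_STATES - 1 and steps < 20:  # cap the rollout defensively
        state, _, _ = step(state, policy(state))
        steps += 1
    print(f"reached goal in {steps} steps")  # optimal is 4
```

Real pipelines replace the toy `step` function with GPU-parallel physics simulation and the Q-table with a neural policy, but the train-in-sim, deploy-frozen structure is the same.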
Hugging Face and NVIDIA are integrating Isaac GR00T N models and simulation tools into the LeRobot ecosystem, allowing developers to train and evaluate policies directly within LeRobot. The open-source Reachy 2 humanoid is now fully compatible with Jetson Thor, enabling direct deployment of advanced vision-language-action models.

ROBOTIS has created a full sim-to-real pipeline using Isaac technologies. It begins with high-fidelity data generation in Isaac Sim, scales training with GR00T-Mimic for augmentation, and fine-tunes a VLA-based Isaac GR00T N model for direct hardware deployment, significantly speeding up the journey from simulation to real-world performance.

These advancements highlight how open frameworks, shared digital twins, and integrated toolchains are accelerating the evolution of physical AI, bringing intelligent robots closer to real-world impact.
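The staged pipelines described above (simulated data generation, augmentation, fine-tuning, hardware deployment) share a composable shape. The sketch below shows only that structure; every type and function in it is a hypothetical stand-in, not a real Isaac Sim or GR00T interface:

```python
from dataclasses import dataclass

# Illustrative only: the staged shape of a sim-to-real pipeline like the one
# ROBOTIS describes (sim data -> augmentation -> fine-tuning -> deployment).
# Every function and type below is a hypothetical stand-in.


@dataclass
class Episode:
    observations: list
    actions: list
    source: str  # "sim" or "augmented"


def generate_sim_data(n: int) -> list:
    """Stage 1: collect demonstration episodes in simulation (stub)."""
    return [Episode(observations=[float(i)], actions=[0.0], source="sim")
            for i in range(n)]


def augment(episodes: list, factor: int) -> list:
    """Stage 2: multiply demonstrations, as trajectory augmentation would."""
    out = list(episodes)
    for ep in episodes:
        out.extend(Episode(ep.observations, ep.actions, "augmented")
                   for _ in range(factor - 1))
    return out


def finetune(episodes: list):
    """Stage 3: fine-tune a policy on the mixed dataset (stub policy)."""
    mean_action = sum(ep.actions[0] for ep in episodes) / len(episodes)
    return lambda obs: mean_action


def deploy(policy, observation):
    """Stage 4: run the frozen policy on hardware (stub)."""
    return policy(observation)


if __name__ == "__main__":
    dataset = augment(generate_sim_data(10), factor=5)
    policy = finetune(dataset)
    print(len(dataset), deploy(policy, [0.0]))  # 50 0.0
```

The design point the sketch captures is that each stage consumes the previous stage’s artifact (episodes, then a larger dataset, then a policy), which is what lets tools such as Isaac Sim, GR00T-Mimic, and a fine-tuned model be swapped in stage by stage.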
