Physical AI is the next major trend in artificial intelligence. Physical AI models control autonomous machines such as robots and self-driving cars, which perform real-world tasks by understanding instructions, perceiving their surroundings, interacting with their environment and carrying out complex actions.
Physical AI borrows techniques from large language models but applies them to tasks that go beyond today's text-based AI:
Similar to how large language models can process and generate text, physical AI models can understand the world and generate actions. To do this, these models must be trained in simulation environments to comprehend physical dynamics, like gravity, friction or inertia — and understand geometric and spatial relationships, as well as the principles of cause and effect.
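As a rough illustration of that perceive-and-act loop, the minimal sketch below runs a placeholder policy in a standard simulation environment using the open-source Gymnasium toolkit. It is a conceptual stand-in rather than an NVIDIA workflow: the random action is where a trained physical AI policy would go, and CartPole stands in for a far richer physics simulation.

```python
# Minimal sketch of the perceive -> act loop a physical AI model is trained on.
# Gymnasium's CartPole environment stands in for a physics simulation; the
# random action stands in for a learned policy.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for _ in range(1000):
    action = env.action_space.sample()        # placeholder for a trained policy
    obs, reward, terminated, truncated, info = env.step(action)  # simulated physics step
    if terminated or truncated:               # episode over: reset the simulation
        obs, info = env.reset()

env.close()
```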
At CES, NVIDIA recently announced generative AI models and blueprints that advance Omniverse integration into physical AI applications for robotics, autonomous vehicles and vision AI. NVIDIA Omniverse, built on OpenUSD, helps developers create AI-driven, controllable simulations: true-to-reality virtual worlds, known as digital twins, that can ultimately be used to train physical AI.
NVIDIA also recently introduced NVIDIA Cosmos, a platform of state-of-the-art generative world foundation models, advanced tokenizers, guardrails and an accelerated video processing pipeline, purpose-built to speed up physical AI development.
Normally, producing physical AI models is a time-consuming and expensive ordeal, requiring huge amounts of real-world data and testing:
Cosmos’ world foundation models (WFM), which predict future world states as videos based on multimodal inputs, provide an easy way for developers to generate massive amounts of photoreal, physics-based synthetic data to train and evaluate AI for robotics, autonomous vehicles and machines. Developers can also fine-tune Cosmos WFMs to build downstream world models or improve quality and efficiency for specific physical AI use cases.

Together, Omniverse and Cosmos form a data multiplication engine: Omniverse is used to compose 3D scenarios, which are then fed to Cosmos to generate photoreal videos and controlled variations. By producing far more training data that represents real-life situations, physical AI systems such as autonomous vehicles and robots can be developed more quickly, as sketched below.
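To make the data multiplication idea concrete, here is a minimal, hypothetical sketch: a few seed scenarios (standing in for Omniverse renders) are expanded into many labeled variations by a stub world model (standing in for Cosmos). The class and method names are illustrative only and are not the real Cosmos or Omniverse APIs.

```python
# Hypothetical sketch of the Omniverse -> Cosmos data multiplication idea:
# a few seed scenario clips are expanded into many training variations.
from dataclasses import dataclass
from typing import List

@dataclass
class Clip:
    scenario_id: str
    description: str  # weather, lighting, traffic conditions, etc.

class StubWorldModel:
    """Stand-in for a world foundation model that would generate photoreal
    video variations from a seed clip plus a text prompt (not the Cosmos API)."""
    def generate_variations(self, seed: Clip, prompts: List[str]) -> List[Clip]:
        return [Clip(seed.scenario_id, f"{seed.description}, {p}") for p in prompts]

# A handful of scenarios authored in a 3D digital twin (the "Omniverse" side).
seed_clips = [
    Clip("warehouse_01", "forklift crossing, daytime"),
    Clip("intersection_07", "unprotected left turn, clear weather"),
]
prompts = ["heavy rain", "night, low visibility", "sensor glare", "crowded scene"]

wfm = StubWorldModel()
dataset = [v for clip in seed_clips for v in wfm.generate_variations(clip, prompts)]
print(f"{len(seed_clips)} seed scenarios -> {len(dataset)} training variations")
```

The same pattern, applied to thousands of seed scenarios and prompts, is where the multiplication effect comes from.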
Robotics and automotive companies including 1X, Agile Robots, Agility Robotics, Figure AI, Foretellix, Fourier, Galbot, Hillbot, IntBot, Neura Robotics, Skild AI, Virtual Incision, Waabi and XPENG, along with Uber, are already using Cosmos.
Cosmos WFMs provide a unified framework for working with these large-scale AI models. The automotive, industrial and robotics sectors can embrace generative physical AI and simulation to boost technological innovation and streamline operations, helping businesses stay productive and competitive.
Cosmos/Physical AI Applications
Humanoid Robots:
The NVIDIA Isaac GR00T Blueprint for synthetic motion generation helps developers create massive synthetic motion datasets to train humanoid robots through imitation learning. With GR00T workflows, users can capture human actions and use Cosmos to exponentially increase the size and variety of the dataset, yielding more robust training for physical AI systems; a minimal imitation-learning sketch follows below.
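For readers unfamiliar with imitation learning, this short sketch shows the core idea: fit a policy network to recorded state-action pairs. The random tensors stand in for the motion datasets described above, and none of this reflects the actual GR00T or Cosmos interfaces.

```python
# Minimal behavior-cloning (imitation learning) sketch in PyTorch: fit a policy
# to demonstration state-action pairs. Random tensors stand in for real motion data.
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, action_dim = 32, 12                 # illustrative joint-state / command sizes
demo_states = torch.randn(4096, state_dim)     # placeholder demonstration states
demo_actions = torch.randn(4096, action_dim)   # placeholder demonstrated actions

policy = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                       nn.Linear(256, action_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    pred = policy(demo_states)                 # policy's predicted actions
    loss = F.mse_loss(pred, demo_actions)      # imitate the demonstrated actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```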
Autonomous Vehicles:
Autonomous vehicle (AV) simulation powered by Omniverse Sensor RTX application programming interfaces lets AV developers replay driving data, generate new ground-truth data and perform closed-loop testing to accelerate their pipelines. With Cosmos, developers can generate synthetic driving scenarios to amplify training data by orders of magnitude, accelerating physical AI model development for autonomous vehicles.
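As a conceptual illustration of closed-loop testing (and only that; it does not use the Omniverse Sensor RTX APIs), the sketch below runs a simple speed controller against a stub vehicle model, with the controller's output feeding back into the simulated state each step, and checks a pass/fail criterion at the end.

```python
# Conceptual closed-loop test: a controller under test drives a stub vehicle
# model, and the result is checked against a pass/fail criterion.
def vehicle_model(speed: float, accel_cmd: float, dt: float = 0.1) -> float:
    """Stub longitudinal dynamics: integrate the commanded acceleration."""
    return speed + accel_cmd * dt

def controller(speed: float, target: float) -> float:
    """Proportional speed controller under test."""
    return 0.5 * (target - speed)

speed, target = 0.0, 15.0           # m/s
for _ in range(200):                # 20 seconds of simulated time at dt = 0.1 s
    accel = controller(speed, target)
    speed = vehicle_model(speed, accel)   # controller output feeds back into the sim

assert abs(speed - target) < 0.5, "closed-loop test failed: target speed not reached"
print(f"closed-loop test passed: final speed {speed:.2f} m/s")
```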
Industrial Settings:
Mega is an Omniverse Blueprint for developing, testing and optimizing physical AI and robot fleets at scale in a USD-based digital twin before deployment in factories and warehouses. The blueprint uses Omniverse Cloud Sensor RTX APIs to simultaneously render multisensory data from any type of intelligent machine, enabling high-fidelity sensor simulation at scale. Cosmos can enhance Mega by generating synthetic edge case scenarios to amplify training data, improving the robustness and efficiency of training robots in simulation.