Omniverse and Digital Twins Focus of NVIDIA Keynote Announcements

NVIDIA’s GTC 2021 takes place from November 8 to 11 and brings together more than 200,000 developers, innovators, researchers and designers worldwide. On the agenda: deep learning, data science, high-performance computing, robotics, networking, data centers and graphics. During the opening keynote, CEO and founder Jensen Huang made several major announcements, particularly around the Omniverse platform and digital twins.

Augmented reality, virtual reality and Multi-GPU at the heart of Omniverse

Since its open beta launch in December, this multi-software 3D collaboration solution has been downloaded more than 70,000 times. It is used by professionals at more than 700 companies, including BMW Group, CannonDesign, Epigraph, Ericsson, architectural firms HKS and KPF, Lockheed Martin and Sony Pictures Animation.

New features launched this week by NVIDIA for Omniverse include:

  • Omniverse Avatar, a technology platform that connects NVIDIA’s speech AI, computer vision, natural language understanding, recommendation engine and simulation technologies to generate interactive AI avatars. Avatars created on the platform are interactive characters with ray-traced 3D graphics that can see, speak, converse on a wide range of topics, and understand naturally spoken intent. Jensen Huang stated:

    “The dawn of intelligent virtual assistants has arrived. Omniverse Avatar combines NVIDIA’s core graphics, simulation and AI technologies to create some of the most complex real-time applications ever created. The use cases for collaborative robots and virtual assistants are incredible and far-reaching.”

  • Omniverse Replicator, a powerful synthetic data generation engine that produces physically simulated synthetic data for training deep neural networks. For the engine’s first implementations, the company showcased two applications for generating synthetic data: one for NVIDIA DRIVE Sim™, a virtual world hosting the digital twin of autonomous vehicles, and another for NVIDIA Isaac Sim™, a virtual world for the digital twin of manipulation robots. DRIVE Sim is a simulation tool built on Omniverse that takes advantage of many of the platform’s features. The data generated by DRIVE Sim is used to train the deep neural networks that make up the perception systems of autonomous vehicles. For the NVIDIA DRIVE team, synthetic data has become an essential and effective part of its AV development workflow. The deep neural networks that power an autonomous vehicle’s perception are composed of two parts: an algorithmic model and the data used to train that model. With synthetic data generation, developers have more control over the data, tailoring it to the specific needs of the model (see the sketch below).
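
The Replicator API itself is not reproduced here. Purely to make the idea concrete, the hypothetical Python sketch below builds each training sample itself, so every randomized scene comes with its ground-truth labels for free, and the sampling ranges are the knobs a developer would tune to tailor the data distribution to a model’s needs; all names and parameters are illustrative assumptions, not Omniverse calls.

```python
# Minimal sketch of synthetic data generation with automatic ground truth.
# Hypothetical stand-in for a rendered scene; NOT the Omniverse Replicator API.
import numpy as np


def render_scene(rng, width=64, height=64, max_objects=4):
    """Create one synthetic 'image' plus its labels.

    Because the generator places every object itself, the class ids and
    bounding boxes (the ground truth) are known exactly, with no annotation.
    """
    image = np.zeros((height, width), dtype=np.float32)
    labels = []
    for _ in range(rng.integers(1, max_objects + 1)):
        cls = int(rng.integers(1, 4))                      # object class id
        w, h = int(rng.integers(5, 21)), int(rng.integers(5, 21))
        x = int(rng.integers(0, width - w))
        y = int(rng.integers(0, height - h))
        image[y:y + h, x:x + w] = cls / 3.0                # "render" the object
        labels.append({"class": cls, "bbox": (x, y, w, h)})
    # Appearance randomization: brightness jitter plus per-pixel sensor noise.
    image = image * rng.uniform(0.7, 1.3) + rng.normal(0.0, 0.02, image.shape)
    return np.clip(image, 0.0, 1.0), labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dataset = [render_scene(rng) for _ in range(1000)]     # labeled samples
    print(len(dataset), "samples,",
          sum(len(lbls) for _, lbls in dataset), "ground-truth boxes")
```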

Omniverse will also offer several new features including:

  • NVIDIA CloudXR, an enterprise-class immersive streaming framework, has been integrated into Omniverse Kit (a toolkit for building native Omniverse applications and microservices), enabling users to interactively stream Omniverse experiences to their AR and VR mobile devices.
  • Omniverse VR, which lets developers build their own VR-enabled tools on the platform and lets end users benefit directly from virtual reality capabilities.
  • Omniverse XR Remote, which provides augmented reality capabilities and virtual cameras, allowing designers to view their fully ray-traced assets on iOS and Android devices.
  • Omniverse Farm, which lets teams use multiple workstations or servers together for tasks such as rendering, synthetic data generation or file conversion.
  • Omniverse Showroom, available as an app in the Omniverse open beta, which lets non-technical users play with Omniverse technology demos showcasing the platform’s real-time physics and rendering technologies.

AR, VR and Multi-GPU rendering are among the new features being developed for Omniverse. Also notable are announced integrations with Bentley Systems and Esri software for infrastructure and industrial digital twin applications, which allow engineers and designers to create physically accurate digital twins of buildings and products, or to build massive, true-to-life simulation environments for training robots or autonomous vehicles before deployment in the physical world.

  • With Bentley iTwin for NVIDIA Omniverse, iTwin users can virtually explore massive, physically accurate industrial facilities and offshore structures as if they were walking through them in real time.
  • Esri, a leading provider of urban design mapping software, brings the Esri ArcGIS CityEngine application to Omniverse, connecting the millions of users in the ArcGIS ecosystem to the Omniverse platform.

Another important point of these announcements is that Omniverse Enterprise is now available. It allows global 3D design teams working across multiple software suites to collaborate in real time, in a shared virtual space, from any device.

Use Case: Ericsson creates digital twins for 5G networks in NVIDIA Omniverse

The true-to-life simulation platform is being used to develop 5G networks and to improve their functionality and services.

Ericsson, the Stockholm-based telecommunications equipment manufacturer, combines decades of expertise in radio network simulation with NVIDIA Omniverse Enterprise, a real-time virtual world simulation and collaboration platform for 3D workflows. In NVIDIA Omniverse, Ericsson builds city-scale digital twins to help accurately simulate the interaction between 5G cells and the environment for maximum performance and coverage.

5G enables a multitude of new use cases from IoT and manufacturing to autonomous cars and telehealth. Without a digital twin approach, the interaction between radio transmitters, the environment, humans and moving devices could not be truly understood due to lack of detail.

German Ceballos, a researcher at Ericsson, said:

“Before Omniverse, network coverage and capacity were analysed by simplifying many aspects of complex interactions, such as physical phenomena and mobility aspects. Now we will be able to simulate network deployments and functionality at a very detailed scale using Omniverse.”

Ericsson’s city-scale digital twin, built in NVIDIA Omniverse, simulates the proper placement of 5G microcells and towers to optimize performance and coverage.
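
Ericsson’s radio models are of course far more detailed than anything shown here. Purely as an illustration of what a coverage simulation computes, the sketch below evaluates free-space path loss from a few assumed candidate cell sites over a flat grid and reports how much of the area meets an assumed service threshold; a city-scale digital twin replaces this idealized model with ray tracing against real building geometry, materials and movement.

```python
# Toy coverage map: best received signal over a grid from candidate 5G cells.
# Free-space propagation only; cell sites, power and threshold are assumptions.
import numpy as np

FREQ_MHZ = 3500.0        # assumed mid-band 5G carrier
TX_POWER_DBM = 40.0      # assumed transmit power per cell


def fspl_db(distance_km, freq_mhz=FREQ_MHZ):
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    d = np.maximum(distance_km, 1e-4)        # avoid log(0) at the cell itself
    return 20 * np.log10(d) + 20 * np.log10(freq_mhz) + 32.44


def coverage_map(cells_km, grid_km=2.0, step_km=0.05):
    """Best received power (dBm) at each grid point from any candidate cell."""
    xs = np.arange(0.0, grid_km, step_km)
    gx, gy = np.meshgrid(xs, xs)
    best = np.full(gx.shape, -np.inf)
    for cx, cy in cells_km:
        d = np.hypot(gx - cx, gy - cy)
        best = np.maximum(best, TX_POWER_DBM - fspl_db(d))
    return best


if __name__ == "__main__":
    candidate_cells = [(0.5, 0.5), (1.5, 0.5), (1.0, 1.5)]  # assumed sites (km)
    rx = coverage_map(candidate_cells)
    print(f"area above -65 dBm: {(rx > -65.0).mean():.1%}")
```
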
In addition, with emerging capabilities of the Omniverse platform such as Omniverse VR, network engineers could soon put on a virtual reality headset, explore any part of a model at 1:1 scale, adjust settings and antennas, and literally “see” the effects of things that are not visible in real life.

For Ericsson, the Omniverse digital twin offers a universal view of network information, faster development cycles and the ability to build a state-of-the-art network at lower cost.

Focus on the sim-to-real domain gap problem

Training on data sets that do not translate to the physical world can actually degrade the performance of a network.

This sim-to-real gap mainly takes two forms: an appearance gap, i.e. pixel-level differences between the simulated image and the real image, caused by the way the simulator generates the data; and a content gap, which can stem from a lack of diversity in the simulated content and from differences between the sim and real-world contexts.

Omniverse Replicator is designed to reduce the discrepancies in appearance and content.
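
As a rough, hypothetical illustration of those two gaps, the sketch below scores an appearance gap as the mean pixel-level difference between paired simulated and real frames, and a content gap as the distance between the distributions of objects per image in the two datasets. The data and metrics are placeholders chosen only to make the distinction concrete; real pipelines use far richer scene statistics.

```python
# Toy metrics for the two components of the sim-to-real gap.
# Placeholder data and metrics, for illustration only.
import numpy as np


def appearance_gap(sim_imgs, real_imgs):
    """Mean per-pixel absolute difference between paired sim/real images."""
    return float(np.mean(np.abs(np.asarray(sim_imgs) - np.asarray(real_imgs))))


def content_gap(sim_counts, real_counts, bins=range(0, 11)):
    """L1 distance between the distributions of objects per image."""
    p, _ = np.histogram(sim_counts, bins=bins, density=True)
    q, _ = np.histogram(real_counts, bins=bins, density=True)
    return float(np.abs(p - q).sum())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sim = rng.uniform(0.0, 1.0, (8, 64, 64))                     # stand-in renders
    real = np.clip(sim + rng.normal(0.0, 0.1, sim.shape), 0, 1)  # "camera" frames
    print("appearance gap:", round(appearance_gap(sim, real), 3))
    print("content gap   :", round(content_gap(rng.poisson(2, 500),
                                                rng.poisson(4, 500)), 3))
```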

Translated from Omniverse et les jumeaux numériques au centre des annonces de la keynote NVIDIA