Nvidia shows ACE for Games technology with generative AI for NPCs


Nvidia will demonstrate its Avatar Cloud Engine for Games technology with generative AI during Computex. This should allow players to have realistic conversations with NPCs. The technology can run locally and in the cloud, the company reports.

Nvidia’s ACE for Games allows users to talk to NPCs in games through their microphone. The technology uses a generative AI language model to generate answers. Nvidia offers its NeMo framework, which allows developers to create, modify, and use language models in their games. Developers can customize the language models with character lore and backstories, and prevent inappropriate responses with NeMo Guardrails.
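The customization described above can be sketched in a few lines. This is purely illustrative and does not use the actual NeMo or NeMo Guardrails APIs: `generate` stands in for any language-model call, and the backstory, blocked topics, and fallback line are invented for the example.

```python
# Hypothetical sketch of lore injection and output guardrails; all names
# and strings are illustrative, not NeMo's real interface.

BACKSTORY = (
    "You are Jin, the owner of a small noodle shop. "
    "Answer in character and never discuss topics outside the game world."
)

BLOCKED_TOPICS = {"politics", "real-world violence"}


def build_prompt(player_line: str) -> str:
    """Prepend the character's lore so every answer stays in character."""
    return f"{BACKSTORY}\nPlayer: {player_line}\nJin:"


def guardrail(response: str) -> str:
    """Simple output rail: replace answers that touch blocked topics."""
    lowered = response.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I only talk about noodles and the neighborhood."
    return response


def answer(player_line: str, generate) -> str:
    """One guarded NPC answer: lore-augmented prompt in, screened reply out."""
    return guardrail(generate(build_prompt(player_line)))
```

A real guardrails layer would classify topics with a model rather than keyword matching, but the shape is the same: the backstory conditions every prompt, and a rail checks every response before it reaches the player.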

ACE for Games also includes Nvidia Riva, the company’s speech-to-text and text-to-speech technology. Riva converts the questions that players ask into text, that text is fed into the language model to generate an answer, and Riva is then used again to convert that answer back into speech.
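The three-stage loop described above can be outlined as follows. The stage functions are stubs standing in for Riva ASR, the NeMo language model, and Riva TTS; their bodies are placeholders, not real API calls.

```python
# Minimal sketch of the ACE-style conversational turn: speech-to-text,
# language model, text-to-speech. All three stages are illustrative stubs.

def speech_to_text(audio: bytes) -> str:
    """Stub for Riva ASR: turn microphone audio into a transcript."""
    return audio.decode("utf-8")  # placeholder: pretend the audio is text


def generate_reply(transcript: str) -> str:
    """Stub for the language model: produce the NPC's answer."""
    return f"You asked: '{transcript}'. Welcome to my noodle shop!"


def text_to_speech(reply: str) -> bytes:
    """Stub for Riva TTS: turn the answer back into audio."""
    return reply.encode("utf-8")  # placeholder waveform


def npc_turn(audio_in: bytes) -> bytes:
    """One conversational turn: ASR -> language model -> TTS."""
    transcript = speech_to_text(audio_in)
    reply = generate_reply(transcript)
    return text_to_speech(reply)
```

In a real deployment each stage would be a network or local inference call, and the latency of all three, plus the animation step below, determines how responsive the NPC feels.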

An Nvidia Omniverse Audio2Face model can in turn generate realistic facial expressions and animations, ensuring, among other things, that the mouth movements match the answers spoken by the NPC. That AI technology already existed and will be used, among other things, for facial animation in the upcoming game S.T.A.L.K.E.R. 2: Heart of Chornobyl, although without generative AI.
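One common way audio-driven facial animation is organized, shown here only as an illustration, is mapping speech sounds (phonemes) to mouth shapes (visemes). Audio2Face infers facial motion from the waveform with a neural network; this table-based stand-in, with an invented viseme set, just shows the underlying idea.

```python
# Illustrative phoneme-to-viseme lookup; the viseme names are invented
# for this sketch and are not Audio2Face's actual output.

VISEMES = {
    "AA": "open_wide",     # as in "father"
    "M": "lips_closed",    # as in "mother"
    "F": "teeth_on_lip",   # as in "food"
    "OW": "rounded",       # as in "bowl"
}


def phonemes_to_visemes(phonemes: list[str]) -> list[str]:
    """Map each phoneme to a mouth shape, defaulting to a neutral pose."""
    return [VISEMES.get(p, "neutral") for p in phonemes]
```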

During Computex, the company showed a demo of ACE for Games, created in collaboration with AI company Convai and rendered in Unreal Engine 5. In the demo, a player talks to an NPC named Jin, who owns a noodle shop. The player asks questions through their microphone and Jin gives appropriate answers. Ultimately, Jin gives the player a quest to stop a powerful crime boss.

According to Nvidia, ACE for Games’ neural networks can be optimized for different systems, allowing developers to trade off size, performance, and quality. The models can run in the cloud or locally on PCs and, the company says, have been optimized for minimal latency.
