[16:08 Mon, 14. March 2022 by Thomas Richter]
Nvidia has unveiled Omniverse Audio2Face, a free program that animates 3D facial models from voice input alone, even in real time. The Omniverse Audio2Face beta can animate a 3D character from any voice recording, whether the character is destined for a game, a movie, a real-time digital assistant, or just for fun. Its special feature is that it not only animates the face in lip-sync with the voice recording, but also expresses the corresponding emotions through facial expressions matching the tone and content of the spoken words. The tool understands a whole range of languages and can be used live via microphone for real-time interactive applications, or as a traditional tool for creating facial animations from voice recordings.

For an easy start, Audio2Face comes pre-installed with "Digital Mark", a 3D character model that can easily be animated with your own audio; you can of course also import and use your own models (for example from Epic Games' Unreal Engine MetaHuman). The audio input is fed into a pre-trained deep neural network, whose output drives the 3D vertices of the character mesh to create the facial animation in real time. Various post-processing parameters can be used to fine-tune the character's performance.

Animation of multiple faces. With Audio2Face, both human and human-like 3D faces, whether realistic or stylized, can be animated. Using multiple instances of Audio2Face, it is also possible to animate several different characters in a scene, all driven by the same or by different audio tracks. A dialogue between a duo, a sing-off between a trio, a synchronized quartet, and much more can be brought to life.
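To make the described pipeline more concrete, here is a minimal, purely illustrative sketch of the idea "audio features in, per-vertex mesh offsets out". Audio2Face's actual network, feature extraction, and API are not public; the vertex count, feature size, and the stand-in linear "network" below are all hypothetical assumptions for illustration only.

```python
# Illustrative sketch only -- NOT Nvidia's implementation.
# Mimics the described pipeline: per-frame audio features go into a
# pretrained network, whose output displaces the vertices of a neutral
# face mesh, one animation frame at a time.
import numpy as np

N_VERTICES = 5000          # hypothetical vertex count of the face mesh
N_AUDIO_FEATURES = 32      # hypothetical per-frame audio feature size

rng = np.random.default_rng(0)

# Stand-in for the pretrained deep neural network: a fixed linear map
# from audio features to per-vertex 3D offsets.
weights = rng.standard_normal((N_AUDIO_FEATURES, N_VERTICES * 3)) * 0.001

def animate_frame(neutral_mesh, audio_features):
    """Return the deformed mesh for one animation frame."""
    offsets = (audio_features @ weights).reshape(N_VERTICES, 3)
    return neutral_mesh + offsets

neutral = rng.standard_normal((N_VERTICES, 3))    # neutral face vertices
features = rng.standard_normal(N_AUDIO_FEATURES)  # one frame of audio features
deformed = animate_frame(neutral, features)
print(deformed.shape)  # (5000, 3): one displaced position per vertex
```

In the real tool this inference step runs per frame on the GPU, which is what makes live microphone-driven animation possible.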
The facial expressions of individual faces can also be selectively enhanced or attenuated. Targeted control of emotions. Audio2Face lets you select specific emotions (such as anger, fear, sadness, or excitement) for a face and control their expression. The AI then automatically animates the face, eyes, mouth, tongue, and head movement to achieve the selected emotional range and level of intensity. System requirements. The system requirements are Windows 10 64-bit, at least an Intel i7 or AMD Ryzen CPU with 4 cores and 2.5 GHz, 16 GB of RAM, a 500 GB SSD, and (of course) an Nvidia RTX GPU with at least 6 GB of VRAM. The Omniverse Audio2Face beta, which is based on a Nvidia study published back in 2017, can be downloaded for free here. More information at www.nvidia.com