
Free Plug-Ins For Seamless Integration Between Audio2Face And iClone

Leveraging NVIDIA's AI-powered technology, Reallusion announces the seamless integration of Character Creator, iClone, and Audio2Face with two free plug-ins, revolutionizing multi-lingual facial lip-sync animation production.

Released by NVIDIA in 2021, Audio2Face is an experimental AI-powered application for generating facial animation from audio sources, available in NVIDIA's Omniverse. Its main feature is lip-sync animation generation, and unlike many comparable tools, it isn't limited to English and also includes a Chinese-trained AI model. Now, with Reallusion's latest announcement, it will be possible to create animation from any language, including "songs and gibberish".

Reallusion has released two free plug-ins linking iClone to Audio2Face, allowing artists to use NVIDIA's AI animation system to create lip-sync animations from audio tracks.

With just a single click, you can set up a Character Creator character in NVIDIA Audio2Face, animate it in real-time alongside an audio track, and seamlessly transfer the finished animation back to iClone for additional refinement before exporting it to 3D tools and game engines such as Unreal Engine, Unity, and more. 

The first plug-in, CC Character Auto Setup, was made in collaboration between NVIDIA and Reallusion and "condenses the manual 18-step process into a single step".

By importing a CC character and choosing a training model (Mark, or the newly added Claire), you can instantly get a talking animation synchronized with your audio files. You're free to experiment with motion sliders and automatic expressions, and even set keyframes. The finished animation can then be sent to iClone for further production.

Image Credit: NVIDIA, Reallusion

The second plug-in, Audio2Face for iClone, is designed to receive animation data from Audio2Face and includes adjustment controls for full-spectrum animation refinement.

Animations can be tweaked via a dynamic interface. Regional facial adjustments cover parameters such as expression strength and head movement. You can also widen the jaw-open range to heighten emotional tension, or control the position of the tongue to mimic precise pronunciation.


Since generative AI animation is highly sensitive to noise, particularly when audio is captured on low-quality devices or in unfavorable environments, this plug-in introduces a refined noise filter that eliminates jitter and delivers better results even from poor-quality audio.


Once you have a satisfactory animation from Audio2Face, you can bring it into iClone for facial editing, which allows for refined lip sync, natural expression adjustments, and even the incorporation of movement sourced from mocap equipment.


This integration of Character Creator, iClone, and Audio2Face marks a significant milestone in AI animation technology, offering artists unprecedented flexibility and efficiency in their workflows.

Learn more about the plug-ins and download them here for free. Don't forget to join our 80 Level Talent platform and our Telegram channel, and follow us on Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.
