
Automated Animation Pipeline

In this project, I developed an AI-driven animation pipeline that automates facial motion capture, rendering, and video generation using Audio2Face, Unreal Engine, and Python.

Role: Pipeline Developer | Technical Artist | Team Lead

Tools Used: NVIDIA Omniverse Audio2Face, Unreal Engine, Python, FFmpeg, MetaHuman Creator

Company: Constituents AI


Overview

I collaborated with Constituents AI to develop an automated animation pipeline that streamlined the generation of facial animations from voice input. The system integrated multiple cutting-edge tools, including NVIDIA Omniverse Audio2Face (A2F), Unreal Engine Editor Scripting, and a custom web interface, to create a seamless animation workflow.

This pipeline automates the traditionally labor-intensive process of lip-sync animation and facial motion capture, significantly reducing time and manual effort while maintaining high-quality results.

My Contributions

  • Pipeline Integration & Development:

    • Designed and implemented the module that receives audio input from the web interface and feeds it into the pipeline (see the intake sketch after this list).

    • Developed a Python-based automation system linking NVIDIA Omniverse Audio2Face with Unreal Engine.

    • Ensured headless execution of Unreal Engine and A2F for optimal performance and scalability (a launch sketch follows this list).

  • Animation Automation:

    • Configured Audio2Face to automatically generate keyframe animations from voice input (see the Audio2Face REST sketch below).

    • Automated the import of facial animations into a pre-configured Unreal Engine scene featuring a MetaHuman model and a body animation generated by a video-to-animation tool (import sketch below).

    • Developed a robust workflow that synchronizes facial animations with pre-existing body motion, creating a fully animated character.

  • Rendering & Processing:

    • Automated movie rendering in Unreal Engine using the Movie Render Pipeline, outputting image sequences (see the render sketch below).

    • Implemented FFmpeg automation with Python to convert rendered image sequences into a finished video (see the encoding and delivery sketch below).

    • Developed a system to send the final video output back to the web interface, completing the pipeline.
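
The intake step looks roughly like the sketch below: a small web endpoint accepts the uploaded voice clip and hands it to the rest of the pipeline. The framework, route name, and run_pipeline entry point are illustrative placeholders rather than the production interface.

```python
# Hypothetical intake endpoint: accepts a voice clip from the web UI and
# hands it to the animation pipeline. The route name and run_pipeline()
# are placeholders, not the production API.
from pathlib import Path
from uuid import uuid4

from flask import Flask, request, jsonify

app = Flask(__name__)
UPLOAD_DIR = Path("incoming_audio")
UPLOAD_DIR.mkdir(exist_ok=True)


def run_pipeline(audio_path: str, job_id: str) -> None:
    """Placeholder for the downstream Audio2Face -> Unreal -> FFmpeg stages."""


@app.route("/animate", methods=["POST"])
def animate():
    audio = request.files["audio"]                 # voice clip uploaded by the web interface
    job_id = uuid4().hex
    audio_path = UPLOAD_DIR / f"{job_id}.wav"
    audio.save(str(audio_path))
    run_pipeline(str(audio_path), job_id)          # kick off the A2F -> Unreal -> FFmpeg stages
    return jsonify({"job_id": job_id}), 202
```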
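
Running both applications without their GUIs comes down to launching them from Python in headless, unattended mode. The executable paths and command-line flags below are assumptions about a typical Windows install and should be checked against the local Audio2Face and Unreal Engine versions.

```python
# Minimal sketch of headless launches. Paths and flags are assumed; adjust
# to the installed Audio2Face and Unreal Engine versions.
import subprocess

A2F_HEADLESS = r"C:\ov\pkg\audio2face\audio2face_headless.bat"  # assumed install path
UE_CMD = r"C:\UE\Engine\Binaries\Win64\UnrealEditor-Cmd.exe"    # assumed install path
PROJECT = r"D:\Projects\Metahuman\Metahuman.uproject"           # placeholder project


def start_audio2face_headless() -> subprocess.Popen:
    # Headless A2F exposes a local REST API that later pipeline stages call.
    return subprocess.Popen([A2F_HEADLESS])


def run_unreal_script(script_path: str) -> None:
    # Execute an editor Python script in an unattended editor session.
    subprocess.run(
        [UE_CMD, PROJECT,
         f"-ExecutePythonScript={script_path}",
         "-unattended", "-nosplash"],
        check=True,
    )
```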
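
In headless mode, Audio2Face can be driven through its local REST API: load the facial scene, attach the uploaded audio track, and export the solved blendshape keyframes for Unreal. The endpoint names, default port, and prim paths below follow common A2F headless examples and may differ between Audio2Face releases.

```python
# Sketch of baking a facial animation from a voice clip via the A2F headless
# REST API. Port, endpoints, and prim paths are assumptions to verify against
# the installed Audio2Face version.
import requests

A2F_URL = "http://localhost:8011"                    # assumed default headless port
PLAYER = "/World/audio2face/Player"                  # assumed audio player prim
SOLVER = "/World/audio2face/BlendshapeSolve"         # assumed blendshape solver node


def generate_facial_animation(audio_dir, audio_file, export_dir, name):
    # Load the pre-authored A2F scene with a MetaHuman-compatible blendshape setup.
    requests.post(f"{A2F_URL}/A2F/USD/Load",
                  json={"file_name": "pipeline_scene.usd"}).raise_for_status()
    # Point the player at the uploaded voice clip.
    requests.post(f"{A2F_URL}/A2F/Player/SetRootPath",
                  json={"a2f_player": PLAYER, "dir_path": audio_dir}).raise_for_status()
    requests.post(f"{A2F_URL}/A2F/Player/SetTrack",
                  json={"a2f_player": PLAYER, "file_name": audio_file,
                        "time_range": [0, -1]}).raise_for_status()
    # Export the solved blendshape keyframes for import into Unreal.
    requests.post(f"{A2F_URL}/A2F/Exporter/ExportBlendshapes",
                  json={"solver_node": SOLVER, "export_directory": export_dir,
                        "file_name": name, "format": "usd", "fps": 30,
                        "batch": False}).raise_for_status()
```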
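
On the Unreal side, an editor Python script imports the exported facial animation into the pre-configured MetaHuman scene. The sketch below uses Unreal's stock AssetImportTask; the asset paths are placeholders, and the script has to run inside the editor's embedded Python environment (for example via the headless launch shown above).

```python
# Unreal editor Python: import the Audio2Face export next to the MetaHuman
# assets. Destination path and import options are placeholders.
import unreal


def import_facial_animation(source_file: str,
                            destination: str = "/Game/Animations/Face"):
    task = unreal.AssetImportTask()
    task.filename = source_file      # animation exported from Audio2Face
    task.destination_path = destination
    task.automated = True            # suppress import dialogs in headless runs
    task.save = True
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
    return list(task.imported_object_paths)
```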
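
Rendering is queued from the same editor Python session through the Movie Render Pipeline. The sketch below sets up a single job that writes a PNG image sequence; the map and sequence paths are placeholders, and class names can vary slightly between Unreal Engine versions.

```python
# Queue one Movie Render Pipeline job that renders a level sequence to PNGs.
import unreal


def render_sequence(map_path: str, sequence_path: str, output_dir: str):
    subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
    queue = subsystem.get_queue()

    job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
    job.map = unreal.SoftObjectPath(map_path)            # e.g. "/Game/Maps/FaceStage"
    job.sequence = unreal.SoftObjectPath(sequence_path)  # e.g. "/Game/Sequences/TalkingHead"

    config = job.get_configuration()
    # Write a PNG image sequence; FFmpeg assembles it into a video afterwards.
    config.find_or_add_setting_by_class(unreal.MoviePipelineImageSequenceOutput_PNG)
    output = config.find_or_add_setting_by_class(unreal.MoviePipelineOutputSetting)
    output.output_directory = unreal.DirectoryPath(output_dir)

    # Kick off the render with the in-editor executor.
    return subsystem.render_queue_with_executor(unreal.MoviePipelinePIEExecutor)
```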
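
The last stage assembles the rendered frames into a video with FFmpeg and posts the result back to the web interface, closing the loop. The frame-naming pattern and upload URL below stand in for the project-specific values.

```python
# Encode the PNG sequence to H.264 and deliver the finished video.
import subprocess
import requests


def encode_and_deliver(frames_dir: str, output_mp4: str, job_id: str,
                       fps: int = 30,
                       upload_url: str = "https://example.com/api/results"):  # placeholder URL
    subprocess.run(
        ["ffmpeg", "-y",
         "-framerate", str(fps),
         "-i", f"{frames_dir}/frame.%04d.png",   # frame naming is project-specific
         "-c:v", "libx264", "-pix_fmt", "yuv420p",
         output_mp4],
        check=True,
    )
    # Hand the finished video back to the web interface.
    with open(output_mp4, "rb") as f:
        requests.post(upload_url,
                      files={"video": f},
                      data={"job_id": job_id}).raise_for_status()
```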

Leadership & Teamwork

🔹 Led a team of junior developers, guiding them through Unreal Engine scripting and animation automation.

🔹 Received special recognition for collaborating effectively across departments, ensuring smooth integration of the different pipeline components.

🔹 Facilitated knowledge sharing sessions, helping the team understand and optimize AI-driven animation workflows.

Key Features & Achievements

End-to-End Automation: From raw audio input to a fully rendered animated video, the entire pipeline runs with minimal manual intervention.

Seamless Integration: Built an efficient bridge between AI-driven animation, Unreal Engine, and web-based interfaces.

Optimized Performance: Enabled headless execution of both Unreal Engine and Audio2Face, reducing computational overhead.

Scalability: The system can generate high-quality animations on a large scale, making it ideal for virtual influencers, AI-driven storytelling, and real-time animation production.

Results & Impact

🚀 Significantly reduced animation production time by automating facial animation, rendering, and video processing.

🎥 Enabled scalable content creation, making it feasible to generate high-quality animations from voice input with minimal effort.

🤖 Showcased the power of AI-driven animation, positioning this pipeline as a cutting-edge tool for animation studios and content creators.

