This workflow uses an existing video of a person talking to drive an AI-generated video. The structure of the background and the likeness of the AI avatar in the generated video come from images provided by the user.
The main feature of this workflow is the use of FaceMesh to locate the lips, then passing this mask to the LineArt ControlNet to generate lips that are synced to what the person is saying in the original video.
This workflow also features:
1) AnimateDiff LCM for faster video generation
2) IPAdapter V2 to carry the AI avatar's likeness into the generated video
3) ControlNet to control head pose
4) ControlNet to control the background in the generated video
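The lip-masking step described above boils down to rasterizing a closed lip contour into a binary mask that a ControlNet can consume. Here is a minimal, dependency-light sketch of that idea; the landmark coordinates and the `lip_mask` helper are hypothetical stand-ins (in the actual workflow, FaceMesh would supply the lip landmarks per frame):

```python
import numpy as np

# Hypothetical normalized lip-outline landmarks (x, y in [0, 1]).
# In practice these would come from FaceMesh's outer-lip contour.
LIP_LANDMARKS = [
    (0.40, 0.60), (0.50, 0.58), (0.60, 0.60),
    (0.62, 0.66), (0.50, 0.70), (0.38, 0.66),
]

def lip_mask(landmarks, width, height):
    """Rasterize a closed contour into a binary mask via ray casting."""
    poly = [(x * width, y * height) for x, y in landmarks]
    mask = np.zeros((height, width), dtype=np.uint8)
    n = len(poly)
    for row in range(height):
        for col in range(width):
            px, py = col + 0.5, row + 0.5  # pixel center
            inside = False
            j = n - 1
            for i in range(n):
                xi, yi = poly[i]
                xj, yj = poly[j]
                # Toggle on each edge the horizontal ray crosses.
                if (yi > py) != (yj > py):
                    x_cross = xi + (py - yi) * (xj - xi) / (yj - yi)
                    if px < x_cross:
                        inside = not inside
                j = i
            if inside:
                mask[row, col] = 255
    return mask

mask = lip_mask(LIP_LANDMARKS, 64, 64)
```

The resulting per-frame mask restricts the ControlNet's influence to the mouth region, which is what keeps the rest of the generated face free to follow the IPAdapter likeness while the lips track the driving video.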
Details
Downloads: 41
Platform: CivitAI
Platform Status: Deleted
Created: 4/24/2025
Updated: 5/6/2025
Deleted: 4/24/2025
Files: talkingAvatarVideo_v10.zip
Mirrors: CivitAI (1 mirror)