    Tik Tok Dance Workflow (Animate LCM, IPAdapter, Controlnet) - v2.0
    NSFW

    Introduction

    This workflow creates AI TikTok dance videos, with the AI avatar's motion driven by a real dance video.

    v7 adds an audio-reactive background using the Depthflow and RyanOnTheInside nodes.

    Models Needed

    The required models are shown as red nodes in the workflow; explanations are in the notes (brown nodes). Installing them with ComfyUI Manager is recommended.

    Custom Nodes Needed

    Install missing custom nodes using ComfyUI Manager.

    • ComfyUI's ControlNet Auxiliary Preprocessors

    • ComfyUI Frame Interpolation

    • ComfyUI_IPAdapter_plus

    • ComfyUI-Advanced-ControlNet

    • AnimateDiff Evolved

    • ComfyUI-VideoHelperSuite

    • rgthree's ComfyUI Nodes

    • ComfyUI Essentials

    • KJNodes for ComfyUI

    • Crystools

    • ComfyUI-Inspyrenet-Rembg

    • Depthflow Nodes

    • RyanOnTheInside

    Description

    Updated the workflow to use AnimateDiff LCM, as it gives more realistic output.

    Adjusted strength_model and strength_clip in the Load LoRA node.

    Adjusted beta_schedule in the Use Evolved Sampling node.

    FAQ

    Comments (7)

    loneillustrator
    Apr 10, 2024
    CivitAI

    How to make the face better? It really maintains the costume, but the facial details come out weird. Why?

    PixelMuseAI
    Author
    Apr 11, 2024

    You can try using the FaceID IPAdapter models to improve face generation.

    If you upload your source video, reference image (background + character), and face, and PM me the download link, I can work on improving the generation.

    loneillustrator
    Apr 15, 2024

    @PixelMuseAI where can I upload?

    PixelMuseAI
    Author
    Apr 16, 2024

    @loneillustrator you can upload to sites like Imgur or Google Drive.

    xbwu
    Apr 16, 2024
    CivitAI

    How do you keep the face in the resulting video consistent with the face in the original video?

    PixelMuseAI
    Author
    Apr 16, 2024

    I used 100% denoise for this workflow, which means nothing from the original video is retained.

    Try to incorporate one or more of the following to retain the original face:

    IPAdapter methods

    1) Pass an image with the original face to the IPAdapter (easiest method, check to see if this works first)

    2) Add an IPAdapter FaceID node

    Retaining parts of the original video (try 3; if the results are not satisfactory, add 4)

    3) VAE Encode the original video and use a lower denoise to retain some of the original features

    4) Incorporate an inpainting workflow where you mask out the face using MediaPipe Facemesh. Denoise can be adjusted back to 1.

    Post Processing Methods (use either 5 or 6)

    5) Use ReActor to swap the original face back into the video

    6) Use a face detailer with a FaceID IPAdapter, select every 2nd frame of the original video, and use RIFE to interpolate, to avoid flickering on the face
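    The halve-then-interpolate idea in method 6 can be sketched outside ComfyUI. This is a minimal illustration only: the function names are hypothetical, the "frames" are toy lists of pixel values, and naive midpoint blending stands in for the RIFE model, which does learned motion-aware interpolation rather than simple averaging.

    ```python
    # Hedged sketch of "detail every 2nd frame, then interpolate back".
    # Real workflow: RIFE node in ComfyUI; here, plain averaging stands in.

    def select_every_nth(frames, n=2):
        """Keep every n-th frame (frames 0, n, 2n, ...) for detailing."""
        return frames[::n]

    def interpolate_linear(frames):
        """Insert one blended frame between each pair of kept frames."""
        out = []
        for a, b in zip(frames, frames[1:]):
            out.append(a)
            out.append([(x + y) / 2 for x, y in zip(a, b)])  # midpoint "frame"
        out.append(frames[-1])
        return out

    # Toy "frames": each frame is a flat list of pixel intensities.
    frames = [[float(i)] * 4 for i in range(8)]   # 8 input frames
    kept = select_every_nth(frames, 2)            # 4 frames: indices 0, 2, 4, 6
    smooth = interpolate_linear(kept)             # 7 frames, gaps filled in
    ```

    Processing only every 2nd frame halves the face-detailer cost, and interpolating the detailed frames back keeps per-frame face changes small, which is what suppresses the flicker.
    
    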

    If you need any help, upload your original dance video and an image of your AI avatar to Google Drive, send me the link in a DM, and I'll see what I can do to update the workflow.

    xbwu
    Apr 17, 2024

    @PixelMuseAI thank u so much

    Workflows
    SD 1.5

    Details

    Downloads
    281
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/8/2024
    Updated
    5/13/2026
    Deleted
    -

    Files

    tikTokDanceWorkflowAnimate_v20.zip

    Mirrors