CivArchive
    Tik Tok Dance Workflow (Mimic Motion + Animate Diff) - v3.1 16GB VRAM
    NSFW

    Version to Download

    v3.1 is a minor update to this workflow for users who have 16GB of VRAM. It generates the whole dance video in a single workflow.

    v4.0 of this workflow splits the workflow up into 5 parts to limit the VRAM usage to 12GB. Hopefully this opens the workflow up to a bigger group of users.

    ComfyUI Version

    This workflow was tested on ComfyUI v0.2.1 and should not have package conflicts when installing custom nodes from the Manager: https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.2.1

    Introduction

    This workflow uses MimicMotion to generate the character's motion and AnimateDiff to create a 16-frame looping video of the background. Grounding DINO is then used to composite the character and background together. It works best with a driving video where the camera is static and all the motion comes from the dancer.

    The motivation is to leverage the strengths of both models to create as consistent a video as possible.
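    The composite step can be sketched outside ComfyUI to make the idea concrete. This is illustrative NumPy only, not any node's actual API: the Grounding DINO / Segment Anything mask selects the MimicMotion character, and every other pixel comes from the 16-frame AnimateDiff background loop.

```python
import numpy as np

def composite_frame(character, background, mask):
    """Alpha-composite a character frame over a background frame.

    character, background: float arrays of shape (H, W, 3) in [0, 1]
    mask: float array of shape (H, W) in [0, 1]; 1.0 = character pixel
    """
    alpha = mask[..., None]  # broadcast the mask over the RGB channels
    return alpha * character + (1.0 - alpha) * background

def composite_video(char_frames, bg_loop, masks):
    # The background loops every len(bg_loop) (here 16) frames, so dance
    # frame i is composited over background frame i % len(bg_loop).
    return np.stack([
        composite_frame(c, bg_loop[i % len(bg_loop)], m)
        for i, (c, m) in enumerate(zip(char_frames, masks))
    ])
```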

    Limitations

    If you watch the video above, you will notice that the dancer's interaction with her hair is not animated properly. This is because MimicMotion takes only OpenPose information as input. If depth information were used as well, the interaction with her hair would be animated correctly.

    Nodes

    Nodes are colour coded:

    • Red: These models must be downloaded and placed in the appropriate folders. After downloading, refresh ComfyUI and ensure the correct models are selected.

    • Green: These nodes use models that will be downloaded automatically.

    • Blue: These are user inputs, either a video with the original dance moves or a prompt / setting.

    • Brown: These are comments that explain what is going on in the workflow.

    Please use ComfyUI manager to install the missing nodes. These are the custom nodes that are used by the workflow:

    • ComfyUI Impact Pack

    • ComfyUI's ControlNet Auxiliary Preprocessors

    • ComfyUI Frame Interpolation

    • AnimateDiff Evolved

    • ComfyUI-VideoHelperSuite

    • ReActor Node for ComfyUI

    • rgthree's ComfyUI Nodes, ComfyUI Essentials

    • ComfyUI-MimicMotionWrapper

    • segment anything

    • Comfyui lama remover

    Models

    Some models automatically download, some models are downloaded using ComfyUI manager and others have to be downloaded and placed in their respective folders manually.

    • AnimateLCM: https://huggingface.co/wangfuyun/AnimateLCM. Put AnimateLCM_sd15_t2v.ckpt in "models/animatediff_models" and put AnimateLCM_sd15_t2v_lora.safetensors in "models/loras"

    • SD1.5 Checkpoint: you can use your favourite depending on the type of video you are generating (realistic / anime). I use Realistic Vision as I am aiming for a realistic generation.

    • control_v11p_sd15_openpose.safetensors. Download with ComfyUI manager

    • MimicMotion, Segment Anything, Grounding Dino, ReActor models should download automatically

    • Please follow the instructions in the Troubleshooting section (https://github.com/Gourieff/comfyui-reactor-node) if you are having trouble installing insightface
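    To sanity-check the manual placements above, here is a small hypothetical helper (the folder layout matches a default ComfyUI install; `COMFY_ROOT` and the function are illustrative, not part of the workflow):

```python
from pathlib import Path

# Hypothetical checklist for the manually-placed files listed above.
# COMFY_ROOT is an assumed install location; adjust to your setup.
COMFY_ROOT = Path("ComfyUI")

MANUAL_MODELS = {
    "AnimateLCM_sd15_t2v.ckpt": COMFY_ROOT / "models" / "animatediff_models",
    "AnimateLCM_sd15_t2v_lora.safetensors": COMFY_ROOT / "models" / "loras",
}

def missing_models():
    """Return the manually-placed model files not yet on disk."""
    return [name for name, folder in MANUAL_MODELS.items()
            if not (folder / name).exists()]
```

    Anything this returns still needs to be downloaded and placed before refreshing ComfyUI.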

    Description

    Minor changes to the workflow

    • Updated the frame-count calculation to use frame_count + 1 when there are fewer than 72 frames. This fixes a bug that used double the sampling steps, because MimicMotion adds one extra frame for the reference image.

    • Updated the Mimic Motion Scheduler to use Align Your Steps as per this issue: https://github.com/kijai/ComfyUI-MimicMotionWrapper/issues/8 and reduced the sampling steps to 10.

    • Introduced Conditioning Zero Out for the AnimateLCM conditioning, since we are using CFG 1.0, which ignores the negative prompt.
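    The first change can be sketched in plain Python. This is one possible reading of the fix, not the workflow's actual node logic, and MAX_FRAMES is an assumed cap rather than a workflow constant:

```python
# Illustrative sketch of the "frame_count + 1" fix described above.
# MAX_FRAMES is an assumed constant, not taken from the workflow.
MAX_FRAMES = 72

def frames_to_sample(frame_count: int) -> int:
    # MimicMotion prepends one extra frame for the reference image, so
    # short clips should request frame_count + 1 frames; otherwise the
    # sampler would pad a second chunk and double the sampling steps.
    if frame_count < MAX_FRAMES:
        return frame_count + 1
    return frame_count
```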

    FAQ

    Comments (30)

    Zepes · Sep 13, 2024
    CivitAI

    v3.1 works great, thanks! Is there an easy way to add a solid colour background/photo, without animations?

    PixelMuseAI
    Author
    Sep 13, 2024

    Yes, this is possible. You can load a photo of the correct resolution (512x768) and pass it to an Image Repeat Batch node before it is composited with the character.

    Use Set Latent Noise Mask to only perform diffusion on the character for the 2nd Pass AnimateLCM KSampler.
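    The same idea in NumPy terms (shapes follow ComfyUI's batch-first image convention; this is an illustration of what Repeat Image Batch produces, not the node's code):

```python
import numpy as np

def repeat_image_batch(image, count=16):
    """Turn one (H, W, C) still into a (count, H, W, C) batch, which is
    what feeding a photo through a Repeat Image Batch node amounts to."""
    return np.repeat(image[None, ...], count, axis=0)

# A 512x768 background still becomes a 16-frame "video" of that still.
photo = np.zeros((768, 512, 3), dtype=np.float32)
batch = repeat_image_batch(photo, 16)
```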

    283024244943 · Sep 18, 2024
    CivitAI

    The following operation failed in the TorchScript interpreter.

    Traceback of TorchScript, serialized code (most recent call last):
      File "code/__torch__/torch/fx/graph_module/___torch_mangle_1305.py", line 966, in forward
        _7 = (Slice_39).forward(onnx_initializer_30, onnx_initializer_31, onnx_initializer_28, onnx_initializer_29, _6, )
        _8 = (Concat_40).forward(_1, _5, _3, _7, )
        _9 = (Conv_41).forward(_8, )
             ~~~~~~~~~~~~~~~~ <--- HERE
        _10 = (Mul_43).forward(_9, (Sigmoid_42).forward(_9, ), )
        _11 = (Conv_44).forward(_10, )
      File "code/__torch__/torch/nn/modules/conv/___torch_mangle_877.py", line 12, in forward
        bias = self.bias
        weight = self.weight
        input = torch._convolution(argument_1, weight, bias, [1, 1], [1, 1], [1, 1], False, [0, 0], 1, False, False, True, True)
                ~~~~~~~~~~~~~~~~~~ <--- HERE
        return input

    Traceback of TorchScript, original code (most recent call last):
      /usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py(456): _conv_forward
      /usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py(460): forward
      /usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py(1065): trace_module
      /usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py(798): trace
      ... (intermediate torch / IPython / tornado / asyncio frames omitted) ...
      /usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py(37): <module>
      /usr/lib/python3.10/runpy.py(196): _run_module_as_main

    RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

    PixelMuseAI
    Author
    Sep 18, 2024

    I see that you are using Python 3.10. This means you are not using the Windows portable v0.2.1 as recommended in the description. I would recommend downloading v0.2.1 and using the portable version, as all the dependencies have been tested on it.
    https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.2.1

    Also, you seem to be having a cuDNN error. Please ensure you have CUDA v12.1 and the corresponding version of cuDNN installed.
    https://developer.nvidia.com/cuda-12-1-0-download-archive
    https://developer.nvidia.com/rdp/cudnn-archive

    Finally, you might need to downgrade to onnx version 1.15 if you are still running into errors.

    ComfyUI is under active development, and the custom nodes might have different requirements from the main project, which makes it difficult to maintain a consistent Python environment in which all the nodes work.

    283024244943 · Sep 18, 2024

    @PixelMuseAI I'm using 7900xtx

    PixelMuseAI
    Author
    Sep 18, 2024

    @283024244943 you should probably ask about your errors in a forum specific to AMD cards. As far as I know, it's not easy to get them working.

    Abominable_Intelligence · Sep 20, 2024
    CivitAI

    Has anyone managed to get this working with 8gb VRAM?

    PixelMuseAI
    Author
    Sep 20, 2024

    The limiting factor for the amount of VRAM used in this workflow is MimicMotion.

    I haven't tested anything at 8GB VRAM so I can't tell you for sure, but definitely try AnimateLCM at a low resolution and a low total frame count.

    mkultra666 · Oct 14, 2024
    CivitAI

    This is really excellent. thank you!

    BrostoherWade · Oct 15, 2024
    CivitAI

    Freaking amazing! Struggling to control the background though. Am I overlooking something?

    PixelMuseAI
    Author
    Oct 15, 2024

    There is a remap mask range node. The min value controls the denoise on the character and the max value controls the denoise on the background.

    If your background is warping every 16 frames, you can try to lower the max value. It's a balance between getting enough animation and the background warping.

    You can display the mask to see the colours of the mask on the person and background change accordingly.

    Another value you can tweak is scale_multival on the Apply AnimateDiff Model node; try increasing this value to between 1.2 and 1.5 to get the background to animate more.
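    Numerically, a Remap Mask Range is just a linear rescale of the 0..1 mask into [min, max], which the workflow then reads as a per-pixel denoise strength. A hedged sketch (not the node's source):

```python
import numpy as np

def remap_mask_range(mask, min_val, max_val):
    """Linearly remap a 0..1 mask into [min_val, max_val].

    In this workflow the remapped mask acts as per-pixel denoise:
    one end of the range lands on the character, the other on the
    background, so lowering max_val reduces background warping.
    """
    return min_val + mask * (max_val - min_val)
```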

    caprismiles · Nov 3, 2024
    CivitAI

    Hi there... thanks for an excellent workflow...
    I'm reproducing the following error

    # ComfyUI Error Report

    ## Error Details

    - Node Type: MimicMotionGetPoses

    - Exception Type: AssertionError

    - Exception Message: ref_image and pose_images must have the same resolution

    ## Stack Trace
    ........
    assert ref_image.shape[1:3] == pose_images.shape[1:3], "ref_image and pose_images must have the same resolution"


    ## System Information

    - ComfyUI Version: v0.2.6-3-gf2aaa0a

    I've used the exact same checkpoints and the exact same reference video too.
    Appreciate any help, thanks.

    PixelMuseAI
    Author
    Nov 3, 2024

    I'm sorry, it has been some time since I last worked on this workflow. There may have been some updates to ComfyUI / custom nodes that broke the functionality. I will need to run the workflow again to check it out.

    By any chance, have you edited the workflow to provide your own reference image? If so, you might need to include an Image Resize node to ensure that the reference image you provided and the driving video have the same resolution.
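    The failing assertion compares `ref_image.shape[1:3]` with `pose_images.shape[1:3]`, i.e. the (height, width) of batched image tensors. A small pre-flight check along those lines (illustrative helper, not part of the workflow):

```python
import numpy as np

def same_resolution(ref_image, pose_images):
    """Mirror MimicMotion's guard: batched images are (N, H, W, C),
    so shape[1:3] is (height, width) for both inputs."""
    return ref_image.shape[1:3] == pose_images.shape[1:3]
```

    If this returns False, resize the reference image to the driving video's resolution before MimicMotionGetPoses.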

    caprismiles · Nov 9, 2024

    @PixelMuseAI Hi, I've got through this issue and now I'm stuck at the ReActorFaceSwap node, which fails with a TypeError. Any help would be greatly appreciated, thanks.

    PixelMuseAI
    Author
    Nov 10, 2024

    @caprismiles you can try disabling the face swap and check whether you are satisfied with the final output.

    Start a fresh workflow and check if you can do a ReActor face swap.

    caprismiles · Nov 23, 2024

    @PixelMuseAI As suggested, I've used the default workflow and added the ReActor FaceSwap node to it, but I still get the same error. I need this node to work in the workflow... Is there anything I can do to get it working?
    comfyui-reactor-node/nodes.py", line 350, in execute
        script.process(
    File "...custom_nodes/comfyui-reactor-node/scripts/reactor_faceswap.py", line 101, in process
        result = swap_face(

    caprismiles · Nov 23, 2024

    @PixelMuseAI the GFPGANv1.4.pth is placed in ComfyUI/models/facerestore_models

    PixelMuseAI
    Author
    Nov 23, 2024

    @caprismiles it seems like your problem is not with the workflow; rather, it is with ReActor. Please refer to the troubleshooting section of the ReActor GitHub page: https://github.com/Gourieff/comfyui-reactor-node

    caprismiles · Nov 25, 2024

    @PixelMuseAI Thanks, I managed to sort out the issue... you were right! I completely removed ComfyUI and reinstalled it using Miniconda, which installed Python 3.12, and then all issues were sorted and everything is working...

    CrazyBobbyHawks · Nov 14, 2024
    CivitAI

    Getting an error that the ControlNetApply node requires a VAE input. "This Controlnet needs a VAE but none was provided, please use a ControlNetApply node with a VAE input and connect it."

    Did I not load all the workflows properly?

    PixelMuseAI
    Author
    Nov 14, 2024

    This was introduced by a recent change to the Apply ControlNet node; just hook up the VAE from the Load Checkpoint node.

    CrazyBobbyHawks · Nov 19, 2024

    @PixelMuseAI Thank you. Now I get an error that reads "KSampler: Given groups=1, weight of size [1536, 16, 2, 2], expected input[2, 4, 96, 64] to have 16 channels, but got 4 channels instead"

    Any idea what's going on?

    PixelMuseAI
    Author
    Nov 21, 2024

    @CrazyBobbyHawks I just tested the workflow and everything worked without the need for any modification. Update both ComfyUI and your custom nodes and try again.

    CheesecakeJohnson · Jan 29, 2025
    CivitAI

    That's really cool. I'm thinking of how I could make a character turnaround with this... Maybe I could use Blender 3D, turn a generic character, then reproduce the pose.

    PixelMuseAI
    Author
    Feb 14, 2025

    There might be some limitations with MimicMotion getting a character to do a full turn.

    I have not experimented much with dance workflows recently. However, I noticed some new frameworks that might work better than MimicMotion.

    https://github.com/Isi-dev/ComfyUI-UniAnimate-W

    If you can animate a human-like character in Blender, you can try extracting the pose using DWPose. You might be able to do some dance videos like that.

    CheesecakeJohnson · Feb 20, 2025

    cool thank you

    learmonthtennisclub471 · Feb 12, 2025
    CivitAI

    What is the best way to have no background or a transparent background? Great w/f!

    sidifov173366 · Jul 13, 2025
    CivitAI

    I had to run part 2 with an INT Constant of 7 to get past OOM errors. GPU 6700XT 12GB

    Workflows
    SD 1.5

    Details

    Downloads
    2,903
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/7/2024
    Updated
    5/13/2026
    Deleted
    -

    Files

    tikTokDanceWorkflowMimic_v3116GBVRAM.zip