Introduction
This workflow creates AI TikTok dance videos, with the AI avatar's motion driven by a real dance video.
v7 adds an audio-reactive background using the Depthflow and RyanOnTheInside nodes.
Models Needed
The required models are shown as red nodes in the workflow; explanations are in the notes (brown nodes). Installing them with ComfyUI Manager is recommended.
SD1.5 LCM: https://civarchive.com/models/81458?modelVersionId=256668
SDXL Lightning: https://civarchive.com/models/133005/juggernaut-xl?modelVersionId=920957
AnimateLCM_sd15_t2v.ckpt (https://huggingface.co/wangfuyun/AnimateLCM)
Install Using Manager:
IPAdapter models: ip-adapter_sd15.safetensors, ip-adapter-plus_sd15.safetensors.
CLIP Vision: CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors.
AnimateDiff V3: v3_sd15_mm.ckpt, v3_sd15_adapter.ckpt.
xinsir/ControlNet++: All-in-one ControlNet (ProMax model).
control_v11f1p_sd15_depth.safetensors
depth_anything_v2_vitl.pth
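If anything seems missing at run time, a quick sanity check is to look for the downloaded files on disk. Below is a minimal sketch; the ComfyUI root path and the subfolder names are assumptions based on a default ComfyUI-Manager install, so adjust them to your setup:

```python
import os

# Assumed ComfyUI root; change this to your install location.
COMFYUI_ROOT = os.path.expanduser("~/ComfyUI")

# Required files per models subfolder (from the list above).
# Subfolder names assume a default ComfyUI-Manager layout.
REQUIRED = {
    "ipadapter": [
        "ip-adapter_sd15.safetensors",
        "ip-adapter-plus_sd15.safetensors",
    ],
    "clip_vision": [
        "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",
        "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",
    ],
    "animatediff_models": ["v3_sd15_mm.ckpt"],
    "controlnet": ["control_v11f1p_sd15_depth.safetensors"],
}

def missing_models(root):
    """Return 'subfolder/filename' entries not found under root/models."""
    missing = []
    for subfolder, files in REQUIRED.items():
        for name in files:
            if not os.path.isfile(os.path.join(root, "models", subfolder, name)):
                missing.append(f"{subfolder}/{name}")
    return missing

if __name__ == "__main__":
    for entry in missing_models(COMFYUI_ROOT):
        print("missing:", entry)
```

An empty result means every listed file is in place; otherwise the printed entries tell you which downloads to repeat through the Manager.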
Custom Nodes Needed
Install missing custom nodes using ComfyUI Manager.
ComfyUI's ControlNet Auxiliary Preprocessors
ComfyUI Frame Interpolation
ComfyUI_IPAdapter_plus
ComfyUI-Advanced-ControlNet
AnimateDiff Evolved
ComfyUI-VideoHelperSuite
rgthree's ComfyUI Nodes
ComfyUI Essentials
KJNodes for ComfyUI
Crystools
ComfyUI-Inspyrenet-Rembg
Depthflow Nodes
RyanOnTheInside
Description
1) Background and character are separate input images
2) Preview bridge to mask the portions of the background you want to animate
3) Differential Diffusion to denoise various parts of the image by different amounts, in an attempt to achieve greater consistency
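As a rough intuition for point 3, Differential Diffusion gives every pixel its own change budget: a per-pixel map says how much each region may deviate from the input, and during sampling the regions still under budget are re-injected from a noised copy of the original. The following is a toy, pure-Python sketch of that idea only, not the actual node implementation, which operates on latents inside the sampler:

```python
import math
import random

random.seed(0)

def differential_step(x_t, x0, change_map, t):
    """Toy per-pixel sketch of one differential-diffusion step at noise
    level t in (0, 1]. change_map holds per-pixel denoise amounts in
    [0, 1]: 0 = keep the input, 1 = regenerate freely. Wherever the
    allowed change is below the current noise level, a re-noised copy of
    the original pixel is injected back, so that region tracks the input
    for more of the sampling trajectory."""
    out = []
    for cur, orig, change in zip(x_t, x0, change_map):
        if change < t:
            # Re-inject: blend the original pixel with fresh noise at level t.
            out.append(math.sqrt(1 - t) * orig + math.sqrt(t) * random.gauss(0, 1))
        else:
            # This pixel's budget allows change: keep the sampler's value.
            out.append(cur)
    return out
```

Masked regions with high change values are denoised freely (and so animate), while low-change regions keep being pulled back toward the input, which is where the extra consistency comes from.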
FAQ
Comments (17)
When I get to the IPAdapter Unified Loader node I get the error "ClipVision model not found." Does anyone know what I'm missing or how I can fix this?
Hi, go into Comfy Manager (install here if you do not have it: https://github.com/ltdrdata/ComfyUI-Manager ), search for clip. Install CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
If you want to learn more about IPAdapters, please visit https://www.youtube.com/@latentvision and subscribe. Latent Vision is the author of the IPAdapter nodes in ComfyUI, and his tutorials on IPAdapters are the best there are.
@PixelMuseAI thank you for your help :) I installed it and it looks to have gone past that stage now.
I got this error, can you help me?
Error occurred when executing KSampler:
'ModuleList' object has no attribute '1'
https://github.com/LucianoCirino/efficiency-nodes-comfyui/issues/227
You might want to check your ControlNets and make sure they are set up to use SD1.5 ControlNet models.
@PixelMuseAI Solved, I really appreciate!
tipping 100 buzz for this! the easy to follow instructions made this the FIRST working workflow for me! thats amazing, now I gotta figure out how it works so i can change lengths! maybe do one with just openpose
It's easy to change the length: load a shorter video, or limit the number of frames in the Load Video node.
To do this with just OpenPose, bypass the depth ControlNet node.
If you need specific help, feel free to send me a DM.
Hi, thanks for a great workflow... the least complex of them all. I've got everything organised as per your workflow guidelines, but I keep getting this error on Queue:
ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 515, in load_models raise Exception("IPAdapter model not found.")
I have all the relevant ipadapters in the folder. Everyone as suggested
Line 515 is in this block:

# 2. Load the ipadapter model
is_sdxl = isinstance(model.model, (comfy.model_base.SDXL, comfy.model_base.SDXLRefiner, comfy.model_base.SDXL_instructpix2pix))
ipadapter_file, is_insightface, lora_pattern = get_ipadapter_file(preset, is_sdxl)
if ipadapter_file is None:
    raise Exception("IPAdapter model not found.")
In the IPAdapterPlus.py script, as shown above, it checks for SDXL, but I am using SD1.5.
Do I have to change the script?
What am I doing wrong?
Any help would be great thanks
Can you tell me which SD1.5 checkpoint you are using and which IPAdapter models you have installed (Go to ComfyUI/models/ipadapter and tell me what files you have)? I will try to replicate your setup to check which models you are missing.
@PixelMuseAI Thanks for your prompt response. I'm using Stability Matrix as a package manager for Comfyui - so my ip adapter is located in stability-matrix/Data/Packages/ComfyUI/models/ipadapter/ip-adapter_sd15.safetensors
and the checkpoint I'm using is located in stability-matrix/Data/Models/StableDiffusion/absoluterealindian_v10.safetensors
@caprismiles
You will need the clip vision models (put in ComfyUI/models/clip_vision): CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors, CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
You will also need the IPAdapter models (put in ComfyUI/models/ipadapter): ip-adapter_sd15.safetensors, ip-adapter_sd15_light.safetensors, ip-adapter_sd15_vit-G.safetensors, ip-adapter-plus_sd15.safetensors.
@PixelMuseAI Really sorry to keep bothering you. After I downloaded the additional CLIP Vision and IPAdapter models and placed them in their respective folders, I still get the same error.
@caprismiles
You can check the discussion here: https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/313 to see if there is anything that helps you.
Please try to update ComfyUI and IPAdapter to the latest versions. I would also suggest trying a fresh install of ComfyUI (without Stability Matrix, with everything downloaded through ComfyUI-Manager) to see if the problem goes away.
I have checked your model and it is an SD1.5 model: https://civitai.com/models/121035/absoluterealindian. The IPAdapter nodes look for the appropriate IPAdapter models based on your checkpoint. If your checkpoint is actually an SDXL checkpoint, you can try downloading the SDXL IPAdapter models through the Manager. I recommend the Manager because it automatically puts the models in the correct folders.
If all else fails, you can also try to reach out to cubiq who is the author of the IPAdapter ComfyUI nodes for help. https://github.com/cubiq/ComfyUI_IPAdapter_plus
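For context on why the error appears: the loader resolves an IPAdapter file from the chosen preset plus the detected checkpoint type (the is_sdxl check quoted earlier), and raises when no matching file is installed. Here is a simplified illustration of that lookup; the function name and preset keys are illustrative, not the node's actual API:

```python
def pick_ipadapter_file(preset, is_sdxl, available):
    """Illustrative lookup: map (preset, checkpoint type) to an expected
    filename, and fail when that file is not among the installed models."""
    candidates = {
        ("STANDARD", False): "ip-adapter_sd15.safetensors",
        ("STANDARD", True): "ip-adapter_sdxl_vit-h.safetensors",
        ("PLUS", False): "ip-adapter-plus_sd15.safetensors",
        ("PLUS", True): "ip-adapter-plus_sdxl_vit-h.safetensors",
    }
    name = candidates.get((preset, is_sdxl))
    # Mirrors the "IPAdapter model not found." branch: the preset is fine,
    # but no model file matches the detected checkpoint type.
    return name if name in available else None
```

So a checkpoint detected as SDXL with only SD1.5 files installed (or vice versa) resolves to nothing and triggers the exception, even though the folder is not empty.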
@PixelMuseAI Thank you very much for your (as usual) prompt guidance. Will do the needful
Hello, I've already used your workflow and it's fantastic!
Now I have the same problem I had on the first run: in the final video, the character changes her dress continuously, and I don't remember how I fixed it the first time :) Any suggestions? Thanks!
One possible cause is that you are using only the OpenPose ControlNet. I noticed that with the single OpenPose ControlNet alone, the outfit is not as consistent; the depth ControlNet is there to help with character consistency.
If you need further help from me, please upload your original video and IPAdapter image to a Google Drive folder and share the link with me. Also provide me with your prompts; I can then help debug your specific use case.