Introduction
This workflow is based on the work by Jerry Davos, with modifications. It features:
Video generation using AnimateLCM, with an OpenPose ControlNet guiding the character's pose and a LoRA driving the flame animation.
Face detailing using GroundingDino and Segment Anything to build a mask of the character's face for the 2nd-pass KSampler.
Sharpening and frame interpolation as post-processing.
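For readers curious what the face-masking stage does under the hood, here is a minimal standalone sketch of the GroundingDino + Segment Anything combination. The checkpoint paths, frame filename, and the "face" prompt are placeholder assumptions; inside ComfyUI the Segment Anything custom nodes handle this per frame for you.

```python
# Minimal sketch of the GroundingDINO + SAM face-masking step that feeds
# the 2nd-pass KSampler. Checkpoint paths and filenames are placeholders.
import numpy as np
from PIL import Image
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

# 1) Detect the face with a text prompt via GroundingDINO.
dino = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
image_np, image_tensor = load_image("frame_0001.png")
boxes, logits, phrases = predict(
    model=dino, image=image_tensor,
    caption="face", box_threshold=0.3, text_threshold=0.25,
)

# 2) Convert the best normalized cxcywh box to absolute xyxy pixels.
h, w = image_np.shape[:2]
cx, cy, bw, bh = boxes[0].numpy() * np.array([w, h, w, h])
box_xyxy = np.array([cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2])

# 3) Ask SAM for a segmentation mask inside that box.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_np)
masks, scores, _ = predictor.predict(box=box_xyxy, multimask_output=False)

# The boolean mask restricts the 2nd-pass denoise to the face region.
Image.fromarray((masks[0] * 255).astype(np.uint8)).save("face_mask.png")
```

The resulting mask limits the second-pass KSampler's denoising to the face region, which is what makes targeted changes like eye colour possible without disturbing the rest of the frame.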
Custom Nodes Needed
Installing via ComfyUI Manager (https://github.com/ltdrdata/ComfyUI-Manager) is recommended.
AnimateDiff Evolved
Advanced Auxiliary Preprocessors
Video Helper Suite
Advanced Controlnet
Impact Pack
Segment Anything
KJ Nodes
Essentials
Frame Interpolation
MaraScott Nodes
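If you would rather not use the Manager, cloning each node pack into ComfyUI/custom_nodes also works. A rough sketch, assuming a standard install layout; only the first two repo URLs are filled in, so look the remaining ones up in ComfyUI Manager before running:

```python
# Rough sketch of a manual install into ComfyUI/custom_nodes.
# Only the first two repo URLs are given as examples; fill in the rest
# from ComfyUI Manager's listing for the node packs above.
import subprocess
from pathlib import Path

CUSTOM_NODES = Path("ComfyUI/custom_nodes")  # adjust to your install path

REPOS = [
    "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved",
    "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
    # ...add the remaining node packs from the list above
]

for url in REPOS:
    dest = CUSTOM_NODES / url.rstrip("/").split("/")[-1]
    if not dest.exists():
        subprocess.run(["git", "clone", url, str(dest)], check=True)
    # Some packs ship a requirements.txt; install it if present.
    req = dest / "requirements.txt"
    if req.exists():
        subprocess.run(["pip", "install", "-r", str(req)], check=True)
```

Restart ComfyUI after installing so the new nodes are registered.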
Instructions
Instructions for using this workflow are included as notes within the workflow itself. Feel free to DM me or leave a comment if you run into difficulties.
Comments
what's the use case for this?
This is a vid-2-vid workflow that allows you to:
1) take a character pose from an original video
2) apply a LoRA (in this example, to create a fire mage character)
3) use the prompt to change things about the character (e.g. clothes, hairstyle)
4) have fine control over specifics of the face of the character (e.g. colour of the eyes) in the second pass
5) apply some simple post processing (sharpening and frame interpolation; a sketch of the sharpening idea follows below)
you could use it to create stylized tiktok dance videos
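For step 5, the sharpening amounts to a standard unsharp mask. A minimal OpenCV sketch of the idea, assuming a frame saved to disk; the workflow itself does this with dedicated nodes, and frame interpolation is a separate step:

```python
# Minimal unsharp-mask sharpening, the same idea as the workflow's
# post-processing (illustrative only; the workflow applies this per
# frame with nodes rather than code).
import cv2

frame = cv2.imread("frame_0001.png")
blurred = cv2.GaussianBlur(frame, (0, 0), sigmaX=2.0)
# sharpened = original + amount * (original - blurred)
sharpened = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)
cv2.imwrite("frame_0001_sharp.png", sharpened)
```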
@PixelMuseAI I tried stylizing one dance video but failed; it creates weird body movements and changes.
@loneillustrator take a look here: https://civitai.com/images/14440586 there are comments on the uploaded video about the settings changes required to get that output.
@PixelMuseAI wish the face could be fixed
@loneillustrator in the workflow there is a refiner for the face, or you can replace it with ReActor face swap.
I keep getting this error when it hits the Apply ControlNet (Advanced) node:
```
Error occurred when executing ControlNetApplyAdvanced:

'NoneType' object has no attribute 'copy'

  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 803, in apply_controlnet
    c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent))
```
https://github.com/comfyanonymous/ComfyUI/issues/2440
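For context: that traceback means the ControlNet loader handed None to Apply ControlNet (Advanced). Per the issue linked above, this can happen when a file that is not actually a ControlNet (such as an ip-adapter model) is loaded through the ControlNet loader. A tiny, purely illustrative reproduction of the Python mechanics:

```python
# What nodes.py line 803 ends up doing when the loader silently
# returned None for an incompatible model file:
control_net = None  # stand-in for a failed/incompatible controlnet load
c_net = control_net.copy()  # AttributeError: 'NoneType' object has no attribute 'copy'
```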
do you happen to be using controlnet ip-adapter xl models?
@PixelMuseAI yes; however, after posting I ran into the same problem testing a basic vid2vid workflow. I just redid my entire system last week, so something may not have installed correctly.
@PixelMuseAI now that I've had my coffee: I was trying to follow everything in your example, including every support file listed, before going off-road with other settings. I may have to start over again.
@shadeling1972394 try an SD1.5 model to see if the workflow runs.
@PixelMuseAI after finding the same problem with a basic workflow, I reset my system to factory defaults and started over from scratch. Everything is working :)