To start generating, simply drag and drop an image into the orange "Load Image" node.
Feel free to adjust the main prompt and image qualifiers to refine the context as desired. The workflow's base settings already generate some awesome animations.
1. Loader
While any SD1.5 model is compatible, it's important to calibrate the LCM LoRA weight accordingly:
•For a standard SD1.5 model, set the weight between 0.7 and 1.
•For an SD1.5 LCM model, set the weight between 1 and 2.
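The two ranges above can be expressed as a tiny helper, a sketch for illustration only (the model-type labels are my own, not node names from the workflow):

```python
# Recommended LCM LoRA weight ranges from the guide above.
RECOMMENDED_RANGES = {
    "sd15": (0.7, 1.0),      # standard SD1.5 checkpoint
    "sd15_lcm": (1.0, 2.0),  # LCM-distilled SD1.5 checkpoint
}

def clamp_lcm_weight(model_type: str, weight: float) -> float:
    """Clamp a requested LoRA weight into the recommended range."""
    lo, hi = RECOMMENDED_RANGES[model_type]
    return max(lo, min(hi, weight))
```

So a weight of 1.5 on a standard SD1.5 checkpoint would be pulled down to 1.0, while the same weight is fine on an LCM checkpoint.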
Main Model:
•https://civarchive.com/models/306814/photon-lcm
LCM Lora:
•https://huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v_lora.safetensors
2. IPAdapter Section:
In the IPAdapter section, the first image loader takes the primary image, which will be cropped to match the composition size. The second loader takes the composition target image.
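Cropping the primary image to match the composition size amounts to a center crop to the target aspect ratio. A minimal sketch of that box arithmetic (my own illustration, not the node's actual implementation):

```python
def center_crop_box(src_w: int, src_h: int, target_w: int, target_h: int):
    """Return (left, top, right, bottom) of the largest centered crop
    of a src_w x src_h image matching the target aspect ratio."""
    target_ratio = target_w / target_h
    if src_w / src_h > target_ratio:
        # Source is wider than the target: trim the sides.
        new_w = round(src_h * target_ratio)
        left = (src_w - new_w) // 2
        return (left, 0, left + new_w, src_h)
    # Source is taller than the target: trim top and bottom.
    new_h = round(src_w / target_ratio)
    top = (src_h - new_h) // 2
    return (0, top, src_w, top + new_h)
```

For example, cropping a 1920x1080 frame to a square composition keeps the central 1080x1080 region.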
ClipVision:
•https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/blob/main/model.safetensors
IPAdapter PLUS:
•https://github.com/cubiq/ComfyUI_IPAdapter_plus
IPAdapter Composition:
•https://huggingface.co/ostris/ip-composition-adapter/tree/main
3. AnimateDiff Section:
Modify the motion strength using the "Multival dynamic" node.
Motion Model:
•https://huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v.ckpt
4. ControlNet Section
Choose from three available ControlNets:
Tile:
•https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth
Reference ControlNet:
•Comes integrated with the Advanced ControlNet custom nodes.
SparseCtrl:
•https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_sparsectrl_rgb.ckpt
5. KSampler / HighRes Fix
The "PatchModelAdd" node is designed to prevent deformations and artifacts when exceeding the SD1.5 aspect-ratio limits. This step can be bypassed.
No adjustments are needed for the KSampler options.
Fine-tune the Highres fix script based on your initial dimensions and VRAM capacity.
Choose between an empty latent or one with your image injected; the latter typically retains more of the original image and often results in less motion.
6. Output
"GMFSS Fortuna" will interpolate your video, providing a smoother, longer output. The multiplier is usually set to 2 or 3.
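As a rough sanity check on output length (an illustration, not part of the workflow; exact counts can differ slightly between interpolation nodes):

```python
def interpolated_frame_count(frames: int, multiplier: int) -> int:
    """Approximate frame count after interpolation: each source
    frame expands to `multiplier` output frames."""
    return frames * multiplier

# 32 rendered frames with a 2x multiplier -> about 64 output frames,
# so the same clip plays back smoother, or twice as long at the
# original frame rate.
```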
Specify a custom directory and filename below the video output to maintain an organized output folder.
An optional HDR output flow has been included for your convenience.
Without altering any resolution settings, expect this workflow to:
•Complete a render in approximately 2 minutes (32 frames + interpolation)
•Utilize 16GB of VRAM
•Produce an output of 752x1344 pixels
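For reference, SD1.5 operates on latents at 1/8 of the pixel resolution, so dimensions should stay multiples of 8 (ideally 64). A quick check, written as an illustration rather than anything from the workflow:

```python
def latent_dims(width: int, height: int) -> tuple:
    """Map pixel dimensions to SD1.5 latent dimensions (1/8 scale).
    Raises if the dimensions are not multiples of 8."""
    if width % 8 or height % 8:
        raise ValueError("width and height must be multiples of 8")
    return (width // 8, height // 8)

# The default 752x1344 output corresponds to a 94x168 latent.
```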
Comments (24)
This looks really cool! I'll try it once I'm home, but I've had issues trying other people's workflows: installing the missing nodes with ComfyUI Manager, and sometimes having different nodes clash with one another and unfortunately break ones I had working beforehand. Is there a way to try this and, if I run into problems, a simple way to revert my ComfyUI to how it was before I installed these nodes?
I run multiple Comfy installs on pinokio.computer. You can have other instances just to test workflows and keep your main one intact.
Did you have the ControlNet Tile functioning?
Very, very nice, works great. Is it possible to use this with InstantID?
With some tweaks to the workflow, I think so.
Did you have the ControlNet Tile functioning?
Happy this worked out :) Works well with LCM
Did you have the ControlNet Tile functioning?
I can't rate this highly enough
Did you have the ControlNet Tile functioning?
I couldn't get it to work properly; I'm getting this error from the ControlNet/Tile. I have the models and ckpt in the right place. Below is the error I encounter:
Traceback (most recent call last):
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 705, in load_controlnet
controlnet = comfy.controlnet.load_controlnet(controlnet_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 326, in load_controlnet
controlnet_data = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 14, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\safetensors\torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
I had the same error and I'm still not 100% sure of the cause, BUT in my case the Get_Model node, which is hidden under the IPAdapter Tile node, disconnected on loading of the workflow.
I ended up removing the Efficient Loader, replacing it with another clone, and manually wiring everything, removing the need for Get_Model, and it started working for me. For some reason, if I just relinked everything with the original Efficient Loader, the error still happened; I had to replace it to get the error to stop.
Hope this maybe helps you.
@TheP3NGU1N thank you so much, I found the unlinked Get_MODEL and am trying to manually reconnect the Efficient Loader. Do I need to hook up Get_MODEL as well, or delete it, or just leave it there?
@coachgarychan210 After replacing the Efficient Loader, it seemed unneeded, so I deleted mine. Just connect the model output from the Efficient Loader to the IPAdapter Tile and it should work.
@TheP3NGU1N yes, thanks man, that worked for me as well.
I had a similar problem until I chose the exact same models, LoRAs, and ControlNet listed in the description.
Can you tell me which parameters can be changed to animate portrait photos? I ran your pipeline, but the animation barely appears, unlike in the examples :(
It works, but it changes the image of the subject (person) 😂😂
Does anyone have the same problem?
To get it to work I have to bypass the HighRes Fix script, but I need HighRes to get a better image. Does anyone know if this node has a problem?
It would be nice if the author could update the workflow
I found this solution on another img2vid workflow and it worked for me, just follow what the guy says...
Yeah, the HighRes Fix is broken at the moment: https://github.com/jags111/efficiency-nodes-comfyui/issues/230
The workaround I have been doing:
1. Create a new HighRes Fix script node
2. Toggle use_controlnet to true
3. Select a ControlNet
4. Toggle use_controlnet back to false
5. Set use_same_seed to false
This works for me again; just copy over the settings from the non-functional node.
I really need a video tutorial, or please write down which models to use and where to put them; the entry threshold to ComfyUI is very high.
I get this error: "Cannot execute because a node is missing the class_type property.: Node ID '#116'"
Can someone help, please? :)
Errors with hiresfix script make me sad :(