Simple workflow to animate a still image with IP adapter.
Versions
v4 - Removed efficient nodes
v3 - updated broken node
v2 - updated to latest controlnets
Using Topaz Video AI to upscale all my videos.
Models used:
AnimateLCM_sd15_t2v.ckpt
https://huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v.ckpt
AnimateLCM_sd15_t2v_lora.safetensors
https://huggingface.co/wangfuyun/AnimateLCM/blob/main/AnimateLCM_sd15_t2v_lora.safetensors
IP-Adapter
https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus_sd15.bin
Clip Vision
https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/blob/main/model.safetensors
SHATTER Motion LoRA
https://civarchive.com/models/312519
Photon LCM
https://civarchive.com/models/306814
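For reference, here's where each of these typically goes in a default ComfyUI install (a sketch, not a guarantee: folder names can vary with custom node versions, and the last two folders come from the AnimateDiff-Evolved custom node):

ComfyUI/
├── models/
│   ├── checkpoints/    # Photon LCM checkpoint
│   ├── loras/          # AnimateLCM_sd15_t2v_lora.safetensors
│   ├── clip_vision/    # CLIP-ViT-H-14-laion2B-s32B-b79K model.safetensors
│   └── ipadapter/      # ip-adapter-plus_sd15.bin (create the folder if it's missing)
└── custom_nodes/
    └── ComfyUI-AnimateDiff-Evolved/
        ├── models/         # AnimateLCM_sd15_t2v.ckpt (motion module)
        └── motion_lora/    # SHATTER motion LoRA

Restart ComfyUI after adding files so the loader dropdowns pick them up.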
Credit to Machine Delusions for the initial LCM workflow that spawned this & Cerspense for dialing in the settings over the past few weeks.
Description
Updated to fix broken node
FAQ
Comments (88)
Found a couple of issues... (granted I'm still new to ComfyUI so take it as you will)
model names at https://huggingface.co/wangfuyun/AnimateLCM/tree/main have changed...
AnimateLCM_sd15_t2v.ckpt
AnimateLCM_sd15_t2v_lora.safetensors
The Load AnimateDiff LoRA node throws an error when loading the LoRA (it runs if you bypass the node). Related to this, I think? https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved/issues/260
The CLIP Vision model is unclear; it seems to work with this one: CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
Thanks, yeah, you're right, they have since updated the names. Links fixed!
When you can update, it would be appreciated. The models and the loaders don't match up, and some of the nodes refuse to kick off for me.
Updated the workflow and links to all the updated paths.
How can we get the result closer to the source (i.e., the source image we drop into the Load Image node)? Having a tough time dialing this one in; my results look kinda cool, but also kinda like gibberish. I'd prefer the result to look more like my source image.
A couple of things: use a lower denoise in the KSampler (sometimes I go down to 0.05), and you could add in a ControlNet like lineart. Sometimes using controlGIF helps keep things together.
@pxlpshr Thx, the problem is that when I add a ControlNet model to the mix, this workflow crashes at the final step, "Requested to load AutoencoderKL". So ControlNet + AutoencoderKL at the same time can't fit within my 8 GB VRAM GPU :/ Is there any way to 'bake' the ControlNet steps in advance, or somehow defer the AutoencoderKL until after the ControlNet is applied?
@rickred Are you running the high-res node? You could bypass that and test.
@pxlpshr Thx, I played with that; it can work a bit, and reducing the resolution helps too.
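For anyone else hitting VRAM limits at the decode step, ComfyUI's launch flags are worth trying before changing the graph. A minimal sketch, assuming a standard install (flag behavior can change between ComfyUI versions):

# more aggressive offloading of model weights to system RAM
python main.py --lowvram
# last resort for very small cards (much slower)
python main.py --novram

Swapping the final VAE Decode node for the built-in VAE Decode (Tiled) node also tends to cut decode memory, at the cost of some speed.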
keep up the good work
@pxlpshr Where to add the controlnet?
@pxlpshr Just wanted to ask: I can get good results by lowering denoise to 0.1 for instance and adding a lineart ControlNet, but then I have almost no motion anymore. Do you have any suggestions?
Something's wrong with the linked AnimateLCM lora:
Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'AnimateLCM_sd15_t2v_lora.safetensors' contains no temporal keys; it is not a valid motion LoRA!
The workflow does seem to work with other motion LoRAs, however?
I think you placed this in the motion LoRA folder? It should be in your regular LoRA folder. If you use it with an LCM model (Photon LCM) I would use a low strength, maybe 0.15. If you use a non-LCM model you can use a strength of 1.
Keeps going out of memory: it does a few steps at the KSampler, then HIP runs out of memory trying to allocate like 40 GB of VRAM. Any fix for this? RX 5700 XT, 8 GB VRAM.
Never mind, I got it to work now, but now I keep getting pixelated outputs. It runs fine, but then the output is pixelated? :( Am I doing something wrong, someone?
This is my workflow complete image causing it: https://filetransfer.io/data-package/dqdgcQVb#link
Install the newest version of xformers, probably from the torch website's repo for AMD if they have one; otherwise you'll have to build it... AMD wrote a HIP version of memory-efficient attention that's in the current version. They only tested MI250X, so I don't know if it'll help, but it's in there.
@GnomeExplorer pip install this? xformers-0.0.25-cp311-cp311-manylinux2014_x86_64.whl? I'm on Linux.
@GnomeExplorer Yeah, xformers isn't for AMD :(
@cocknito The current git repo has HIP code for AMD in it, but you'll probably have to build your own wheel; I'm guessing the manylinux wheel is either CUDA or their CPU-only algorithms.
@cocknito PS: it builds relatively easily on Windows, despite some weirdness in the scripts that I needed to patch last year when flash-attention-2 wasn't officially supported, so a Linux build from GitHub shouldn't hit any problems for you. I think they've included logic in the build script to scale back the job count to fit in system memory and still build everything, so you shouldn't have to do anything weird to run it.
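If anyone wants to try the build-from-source route, it looks roughly like this (a sketch, assuming you already have a working torch install and a matching CUDA/ROCm toolchain; I can't confirm the HIP path helps on a 5700 XT):

# ninja speeds up the long compile considerably
pip install ninja
# build xformers from the current git source against your installed torch
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers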
@GnomeExplorer Bought an Nvidia card purely for this, waiting for it to arrive now. They'd better not release Sora next week, sigh.
@cocknito You made the right choice if this is what you want to do with it; AMD just isn't there. The situation is even worse on Windows: measuring free VRAM on the DirectML backend for torch is doable, but it's a huge hack that requires knowing the card's total VRAM in advance and reading percentages from an array returned by an undocumented API it exposes, which returns different values depending on the arbitrary block size you pass it.
I did the same last year (although it was mainly for rendering) after realizing AMD was mostly going the empty-gesture route with GPU render support, just like they have with the ML stuff on consumer cards. Like I mentioned, their xformers check-in was MI250X and up. They never got the renderers up to par with the cost of the 7900 XTX vs. the RTX 4090 either: you usually get more than double the performance for a card that's only ~1.5x MSRP for reference versions, and maybe $100 more than that for the 2-slot water-cooled version when it's not being scalped. The water cooling isn't really needed (the 3-slot cooler on the 7900 XTX keeps it cool enough to run indefinitely at almost 3 GHz, like the 4090), but in a workstation having 2-slot cards is important if you want to populate at least half of the PCIe lanes available on something like a Threadripper Pro, since riser cables aren't in the PCIe 4.0 spec and won't work right.

If they hit MSRP again I plan on buying another 4090 this year and selling the 7900 on eBay. If prices stay high I'll probably just wait for the inevitable price crash when scalpers buy a bunch of 5090s before the holidays, gamers start claiming they're no faster than 3090s (they might not be for the ancient-engine games most people play, just like the 4090 wasn't a great upgrade there for most people), and unpurchased stock shows up at or below retail for a while.
AMD ProRender lets you distribute rendering across the two brands of cards, but it's so much slower on the 7900 that after the overhead of kicking it off it only contributes maybe a 20% decrease in render time at the cost of a 100% increase in power draw. The problem with ProRender is that they seem to have stopped updating the plugins for everything, so I had to install a weird outdated Blender version to even test it last year. I use Houdini and there's no build for v20, which came out ~5 months ago, so that's out. Renderman has the same issue, but Pixar is at least on a fairly regular release schedule and the public releases support GPU+CPU render now (everybody is calling that XPU for some reason), so I'll be interested to compare that to Karma XPU.
Now if I were only playing games it would be a different story: it had no problem running everything I tried at 60 fps / 4K HDR without any dumb upscaling tricks, even the single day-0 AAA release I tried that supposedly had all kinds of performance issues (Forspoken).
After I posted I checked their timing tool and noticed a bunch of discrepancies on the matrix schedulers on RDNA3... they're actually slower for bf16 and fp16 than manually lowering the matmul to GPU code and running it as a series of packed fp16 instructions; they have the same throughput as the fp32 versions. Because the scheduling hardware distributes the FMA instructions needed for WMMA, I don't think xformers will actually increase performance even on that, and the 5700 XT didn't have matrix schedulers, so it probably won't do much to help except maybe overall memory use.

Anyway, it doesn't matter now; you'll be able to do a lot with nearly any Nvidia card. I tested the beta 0.9 release of the one-step SDXS model the other night with a batch size of 64 at 512x512, and the generation speed averaged out to 5 ms per image on a 4090. They weren't fantastic-quality images, but I didn't have any instructions for using it properly, so a second step might have been enough to fix that while still staying realtime. I'd guess any of the faster 3000 and 4000 series cards will hit speeds well above what's needed to do realtime text/video2video with it once that's worked out. If it's already at 200 fps there's a lot of leeway.
I keep having problems with the IPAdapter and CLIP Vision nodes. I already have them, but they are not showing in the node interface. Does anyone know why?
No matter what I do, I just can't seem to load this node: IPAdapterApplyEncoded.
I've updated Comfy and deleted and reinstalled ComfyUI_IPAdapter_plus.
Does anyone have any ideas how to fix this?
----------------------------------------
When loading the graph, the following node types were not found:
IPAdapterApplyEncoded
Nodes that have failed to load will show as red on the graph.
I had this exact issue yesterday! What I did was recreate the nodes myself; in my case I had to just add the IPAdapter Embeds node and use that in place of the red node that I had. You may have to replace other nodes too in order to get things to work.
Good luck, and I hope this was helpful!!
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). I have this error, and it seems to be the same as yours.
@mpr9348378 This helped me a little so I'll help a little more :)
@emiliosoiv00784 and anyone else with this issue...
First, delete the IPAdapter Encoder and replace it with a new node of the same name; it will have different options.
For this node...
ipadapter input = IPAdapter Model Loader node's IPAdapter output
image = Input image node's image output
clip_vision input = Load CLIP Vision node's CLIP_VISION output
Now add the IPAdapter Embeds node with these inputs...
model input = AnimateDiff Loader node's model output
ipadapter input = IPAdapter Model Loader node's IPAdapter output
pos_embed input = the pos_embed output of the IPAdapter Encoder node you just created
clip_vision input = Load CLIP Vision node's CLIP_VISION output
Model output goes into your KSampler node's model input
Delete the red node and you should be good to go!
@ckarmor425 I am grateful that I no longer have a compile time error but now I've got a runtime one 😅
I set up ComfyUI for the first time to try to execute this workflow. I imported the workflow JSON, used the ComfyUI Manager to download the missing custom nodes, and then executed the steps you listed. After queueing a prompt, I got this error:
Error occurred when executing ADE_AnimateDiffLoRALoader: expected str, bytes or os.PathLike object, not NoneType
line 36, in load_motion_lora
if not Path(lora_path).is_file():
Seems I'm still missing something fundamental. Anyway, you've helped this much, maybe you're willing to help a little more 😛
IP Adapter had an update that broke all workflows. I have uploaded a new version with latest nodes.
@pxlpshr Thank you OP for fixing this for us who are too dumb to figure it out ourselves 😓
I'm able to run the workflow now! It took several hours and sadly all it spat out was a video of noise :(. Sorry to keep nagging you when you've done so much, but any idea what I might still be missing?
Is there any documentation about how to get started with it? Maybe with example settings?
Do people have issues running it on AMD cards, or is this thread a common issue? https://github.com/comfyanonymous/ComfyUI/issues/3149
Don't you need CUDA to run Stable Diffusion in general? (Nvidia only)
There was no reason to zip the json file. That is all.
Unfortunately Civitai requires a zip/archive.
SeargeIntegerPair error? I tried updating and pulling a new one, but still the same error. Is there any node I can replace it with? Sorry, I'm new to Comfy.
+1
+1
+1
+1
I found a solution: you can use two INT Constant nodes (Add Node > KJNodes > Constants) to set the width and height. Connect these nodes to the Upscale Image node's width and height inputs respectively.
The original node with the error (SeargeIntegerPair) can be deleted in this case.
@str_bboy903 Then I have "Error occurred when executing VAEEncode: division by zero"
Just delete it and "convert input to widget" for Height and Width.
this workflow goes hard
What is "adStabilizedMotion_stabilizedHigh.pt" supposed to be? It's not available in the files linked
The node called 'frame' going into 'Repeat Latent Batch' is not recognized by ComfyUI. I can't find it. Do you have a link to install the missing node?
Me too
Looks like an update killed that node; replaced it in v3.
@pxlpshr Thanks for your reply. I'm not sure what you mean by 'v3.' What is the name of this node?
@ben0o The v3 version of the workflow. I removed the bad nodes and replaced them with working ones.
@pxlpshr Still the same issue, and I'm using v3.
@pxlpshr all good! thanks for your help
Has anyone found that with recent updates the videos no longer loop smoothly?
I'd appreciate it if you added a screenshot of the workflow. I don't want to install the SeargeSDXL, Crystools, Use Everywhere, and Efficiency custom node packs just to look at constants and settings.
Can the old workflow also be open for download? The IPAdapter on the platform I'm using has not been updated to the new version of the node. Thank you.
Where do I put the IP-Adapter bin file?
You should create a folder named ipadapter in the comfyui/models/ folder and put it there.
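Roughly, on Linux/macOS, assuming a default install layout (adjust the paths to wherever you downloaded the file):

mkdir -p ComfyUI/models/ipadapter
mv ip-adapter-plus_sd15.bin ComfyUI/models/ipadapter/

Restart ComfyUI afterwards so the IPAdapter loader sees the new folder.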
Such a simple and easy to understand workflow. The only one that I have used that just works and I can understand the logic of the workflow structure to make my own edits to it.
Easy to use, really cool
Hi, how are you? Thanks for this module. One question:
When I run this model I get this error.
When it reaches the hires node it gives this:
got prompt
Failed to validate prompt for output 18:
* HighRes-Fix Script 14:
- Value not in list: preprocessor: 'CannyEdgePreprocessor' not in ['_']
- Value not in list: control_net_name: 'Control nets\DensePose.safetensors' not in []
- Value not in list: use_controlnet: 'False' not in ['_']
- Value -1 smaller than min of 0: seed
- Value not in list: pixel_upscaler: 'ESRGAN\1x-AnimeUndeint-Compact.pth' not in ['ESRGAN_4x.pth']
Output will be ignored
Failed to validate prompt for output 10: Output will be ignored
Prompt executed in 0.03 seconds
How can I fix this error? Or maybe there is a guide on how to use it? 'AnimateLCM_sd15_t2v_lora.safetensors' contains no temporal keys; it is not a valid motion LoRA!
Use shatterAnimatediff_v10.safetensors for the motion LoRA.
When loading the graph, the following node types were not found:
Primitive integer [Crystools]
Nodes that have failed to load will show as red on the graph.
Could you please advise?
You can ignore Crystools and use "Integer Constant" instead of missing node.
@richif I am kind of new to this; do I find it in the Manager's nodes? Btw, thank you so much for a response!
It gives some really nice results. Is it possible to increase the length of the video? The "frames" node only increases the speed, and the higher the amount, the weirder the video becomes, with lots of artifacts. And when I set it to more than 100 my memory runs out; I'm using a 4090. Any help would be greatly appreciated as I'm completely new to ComfyUI xd
Hi! Does anybody know why the HighRes-Fix node isn't working anymore?
Is anyone having the same problem and knows a fix?
HighRes-Fix Script 14:
- Value -1 smaller than min of 0: seed
- Value not in list: pixel_upscaler: 'ESRGAN\1x-AnimeUndeint-Compact.pth' not in ['4x_NMKD-Siax_200k.pth']
- Value not in list: control_net_name: 'Control nets\DensePose.safetensors' not in ['control_v1p_sd15_qrcode_monster.s
Yeah, HighRes-Fix is broken at the moment. https://github.com/jags111/efficiency-nodes-comfyui/issues/230
The workaround I have been doing:
create a new high-res fix script node
toggle use_controlnet to true
select a controlnet
toggle use_controlnet to false
set use_same_seed to false
This works for me again; just copy over the settings from the non-functional node.
@pxlpshr Thanks a lot for your help!
It worked for me as well!
thanks so much pxlpshr
@pxlpshr Hi! Where do I find 'toggle use_controlnet to true' ?
@debelllg In your High res fix script node, at the bottom, there should be "use_controlnet false." Click on the dot next to false. It will turn blue and say "true".
I'm still having the same problem; creating a new HighRes-Fix Script node didn't fix it for me. There is no ControlNet toggle.
For HighRes-Fix: duplicate the node, then on the old one use Fix Node (Recreate). Then untoggle the seed widget, put 0 instead of -1, then toggle it back. Put in the same values as the duplicated node. Worked for me. Thanks for this great workflow.
It's the seed. Random seed = "-1". I fixed it by typing in the seed from the KSampler.
Getting the below error. I tried a lot but it's not working. Can anyone help me?
[AnimateDiffEvo] - INFO - Loading motion module AnimateLCM_sd15_t2v.ckpt
[AnimateDiffEvo] - INFO - Loading motion LoRA pxlpshr_shatter_400.safetensors
!!! Exception during processing !!! Error while deserializing header: HeaderTooLarge
Traceback (most recent call last):
File "/home/ubuntu/ai/ComfyUI/execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/ai/ComfyUI/execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/ai/ComfyUI/execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/ubuntu/ai/ComfyUI/execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/ai/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/nodes_gen1.py", line 146, in load_mm_and_inject_params
motion_model = load_motion_module_gen1(model_name, model, motion_lora=motion_lora, motion_model_settings=motion_model_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/ai/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_injection.py", line 1280, in load_motion_module_gen1
load_motion_lora_as_patches(motion_model, lora)
File "/home/ubuntu/ai/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/model_injection.py", line 1209, in load_motion_lora_as_patches
state_dict = comfy.utils.load_torch_file(lora_path)
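For what it's worth, a safetensors HeaderTooLarge error usually means the file itself is bad, most often an incomplete download or an HTML error page saved under the model's filename. A quick sanity check (a sketch; substitute whatever path you actually saved the motion LoRA to):

# a healthy LoRA should be tens of MB, not a few KB
ls -lh ComfyUI/models/loras/pxlpshr_shatter_400.safetensors
# should report binary data, not HTML or ASCII text
file ComfyUI/models/loras/pxlpshr_shatter_400.safetensors

If it looks wrong, re-download the file from the links above.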
If you're going to go through the trouble of creating and sharing the workflow, would it really be so much trouble to properly document which folders the models need to go into? Specifically, the fact that the LoRA needs to go into \custom_nodes\ComfyUI-AnimateDiff-Evolved\models - you're the author of the linked motion LoRA as well, and you don't even note it in that description.
These are common paths for ComfyUI. If you need any help, just ask. Purz has a great YouTube channel for learning how to set up ComfyUI.
Where do all the different downloads go, exactly? Pls help :')
How did you stabilize the video? I took a screenshot of the gears and processed it (after fixing up the nodes so it would load, without changing your v3 script). The gears wobble and melt all over the place. Can you provide the sample input image to test with?
lower denoise?