Wan 2.2
For I2V, this motion-helper node is extremely useful:
https://github.com/princepainter/ComfyUI-PainterI2V
10/30: The high-noise lora was further refined.
New I2V 1022 versions are out. They have by far the best prompt following and motion quality yet. (The lora key warning is harmless; the file just contains extra modulation keys that ComfyUI does not use.)
https://github.com/VraethrDalkr/ComfyUI-TripleKSampler
T2V versions were updated 09/28. It's probably still best to run a step or two with CFG and without the lora to establish motion during the high-noise phase, as usual. For example:
2 steps high noise without the low-step lora, at 3.5 CFG
2 steps high noise with the lora, at 1 CFG
2-4 steps low noise with the lora, at 1 CFG
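As a rough sketch, the staged schedule above can be written out as plain Python data. The stage names and fields here are purely illustrative and do not correspond to actual ComfyUI node inputs:

```python
# Illustrative sketch of the staged Wan2.2 T2V schedule described above.
# Field names are assumptions for readability, not real node parameters.

stages = [
    # (model,      use_lora, steps, cfg)
    ("high_noise", False,    2,     3.5),  # establish motion: no lora, real CFG
    ("high_noise", True,     2,     1.0),  # continue high-noise denoising with the lora
    ("low_noise",  True,     4,     1.0),  # finish detail with the low-noise model + lora
]

total_steps = sum(steps for _, _, steps, _ in stages)
print(total_steps)  # 8
```

With the 2-4 range on the last stage, the whole run is 6-8 steps total, which is where the speedup comes from.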
It's definitely a big improvement either way.
T2V:
Using their full 'dyno' model as your high-noise model seems best.
"On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."
Wan 2.1
7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. The example uses 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one as well.
I'm also putting up the rank 128 versions extracted by Kijai; they are double the size but slightly better quality.
I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa
No need for a 2-sampler workflow anymore IMO. Just plug it into your normal workflow with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement for image-to-video like before.
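For reference, the recommended settings map onto a standard ComfyUI KSampler roughly like this. The scheduler choice below is an assumption; the post only specifies the sampler, steps, CFG, and shift:

```python
# Recommended settings from the description above, as a plain dict.
# Keys mirror common ComfyUI KSampler inputs; values come from the post.
ksampler_settings = {
    "steps": 4,
    "cfg": 1.0,            # the distill lora removes the need for real CFG
    "sampler_name": "lcm",
    "denoise": 1.0,
}
shift = 8  # applied via the model-sampling/shift node, not the KSampler

print(ksampler_settings["steps"], ksampler_settings["cfg"])  # 4 1.0
```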
Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models
Old:
lightx2v made a 14B self-forcing model that is a massive improvement over CausVid / AccVid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; still playing with settings to see what works best.
Please don't send me buzz or anything; support the lightx2v team or Kijai if anyone.
Comments
Is the accvid/causvid combo recommended for i2v generations?
Yes, the examples I posted were both image-to-video. It works for both. It also works great with Phantom / VACE and the like. You might have to fiddle with CFG depending on what you use; if it "burns", lower it a little.
@Ada321 How much VRAM is required? I have an RTX 5080 16 GB.
@craftogrammer I mean, I'm generating without this on a 12GB card. You'll be fine.
@mobdik17378 I wanted to test both the AccVid and CausVid loras
You can use CausVid with normal CFG to enhance motion and prompt following by a lot.
Got an example t2v workflow?
!!! Exception during processing !!! The size of tensor a (26520) must match the size of tensor b (26078) at non-singleton dimension 1
AccVid doesn't work on my setup.
Interesting: I forgot to turn on the CausVid lora and got a normal result at 8 steps with euler_a, beta, 5 CFG, 4 shift, 20 block swap.
I have no idea what this is. Is it supposed to replace the CausVid lora, and if so, at what strength? The WF uses the CausVid lora.
Yes, very confusing. The update text says "5/30 Causvid V2" but the file under the Causvid V2 tab is from 5/15. I think the OP mixed something up.
@flex25tb I double checked, it's the correct file. Do you just mean the published date? Because I updated the old CausVid 14B tab to the new one yesterday.
Yeah, it's the CausVid V2 lora; replace the old CausVid with it and use it at 0.5 strength like in the WF.
@Ada321 The lora or the checkpoint?
So the new WF is almost the same as using 2 samplers by that "CFG Schedule Float List" node? (half with high CFG and the other half CFG 1?)
(I'm using quantized GGUF but it seems the new WF does not support GGUF so I decided to use 2 KSamplers with the same CFG scales mentioned above)
Basically yeah, it's just faster.
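A "CFG Schedule Float List" boils down to one CFG value per step: high CFG for the first half to establish motion, then 1.0 for the rest. A minimal sketch of that split (the helper name is illustrative, not an actual ComfyUI API):

```python
# Sketch of the half-high / half-1.0 CFG split discussed above.
# cfg_schedule is an illustrative helper, not a real ComfyUI function.

def cfg_schedule(total_steps: int, high_cfg: float, low_cfg: float = 1.0) -> list[float]:
    """Return a per-step CFG list: high CFG for the first half, low for the rest."""
    half = total_steps // 2
    return [high_cfg] * half + [low_cfg] * (total_steps - half)

print(cfg_schedule(8, 3.5))  # [3.5, 3.5, 3.5, 3.5, 1.0, 1.0, 1.0, 1.0]
```

With two plain KSamplers (e.g. for GGUF models the schedule node doesn't support), you get the same effect by running the first half of the steps at the high CFG and handing the leftover noise to a second sampler at CFG 1.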
@Ada321 any chance to make a native node version?
Could you please tell me how you got the two ksamplers wired? I can't get a non-burned video out when I try.
Thanks for this. It took me a while to figure everything out, but I'm using just the CausVid lora at about 0.6 strength in my normal Wan workflow with 13 steps, and it's working nicely... with a 50% reduction in render time.
Does CausVid still need CFG = 1 to be beneficial?
Is Beta Scheduler still recommended?
I tried the CausVid V2 workflow, which worked fine; then I added a second lora and it stalled at "Loading model and applying LoRA weights:".
Add the other LoRAs before any speedup hacks like CausVid
@boz255 Tried that just now - no difference. Seems this WF can only handle a single lora. My VRAM usage without the second lora is comfortably low (88%).
Details
Files
Wan21_AccVid_T2V_14B_lora_rank32_fp16.safetensors