2.2
For I2V, this motion-helper node is extremely useful:
https://github.com/princepainter/ComfyUI-PainterI2V
10/30: The High LoRA was further refined.
New I2V 1022 versions are out. They have by far the best prompt following and motion quality yet. (The "LoRA key not loaded" warning is fine; the file just contains extra modulation keys that ComfyUI does not use. It does not matter.)
https://github.com/VraethrDalkr/ComfyUI-TripleKSampler
T2V versions were just updated 09/28. It is probably still best to run a step or two with CFG and without the LoRA to establish motion in the high-noise phase as usual, for example (sketched in code after this list):
2 steps high noise without the low-step LoRA at 3.5 CFG
2 steps high noise with the LoRA at 1 CFG
2-4 steps low noise with the LoRA at 1 CFG
It's definitely a big improvement either way.
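Here is a minimal sketch of that three-stage split, written as the settings you would give three chained KSampler (Advanced) nodes in ComfyUI. The total step count and the model/LoRA names are assumptions, not values taken from this page:

```python
# Three-stage high/low split as plain data (assumed names and step totals).
# Maps onto three chained "KSampler (Advanced)" nodes sharing one schedule.
TOTAL_STEPS = 8  # 2 + 2 + 4

stages = [
    # 1) High-noise model, NO lightning LoRA, real CFG to establish motion.
    dict(model="wan2.2_t2v_high_noise", lora=None, cfg=3.5,
         start_at_step=0, end_at_step=2,
         add_noise=True, return_with_leftover_noise=True),
    # 2) High-noise model WITH the lightning LoRA, CFG 1.
    dict(model="wan2.2_t2v_high_noise", lora="lightx2v_4step_250928", cfg=1.0,
         start_at_step=2, end_at_step=4,
         add_noise=False, return_with_leftover_noise=True),
    # 3) Low-noise model WITH the lightning LoRA, CFG 1; finishes the latent.
    dict(model="wan2.2_t2v_low_noise", lora="lightx2v_4step_250928", cfg=1.0,
         start_at_step=4, end_at_step=TOTAL_STEPS,
         add_noise=False, return_with_leftover_noise=False),
]
```

The TripleKSampler node linked above packages this three-stage pattern into a single node.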
T2V:
Using their full 'dyno' model as your high-noise model seems best.
"On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."
2.1
7/15 update: I added the new I2V LoRA; it seems to have much better motion than using the old text-to-video LoRA on an image-to-video model. The example uses 4 steps, 1 CFG, the LCM sampler, and 8 shift. I uploaded the new version of the T2V one as well.
I'm also putting up the rank-128 versions extracted by Kijai; they are double the size but slightly better quality.
I suggest using it with the Pusa V1 LoRA as well; it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa
No need for a two-sampler workflow anymore, IMO. Just plug it into your normal workflow with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement for image-to-video like it did before.
Full image-to-video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models
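A minimal sketch of that single-sampler I2V setup; the LoRA filenames and the Pusa strength below are illustrative assumptions, not values confirmed on this page:

```python
# Single-sampler I2V settings sketch (assumed filenames, illustrative strengths).
settings = dict(
    loras=[
        # This page's I2V lightning LoRA at full strength.
        ("lightx2v_i2v_14b_480p_rank64.safetensors", 1.0),
        # Optional Pusa V1 LoRA for extra movement (hypothetical strength).
        ("pusa_v1_lora.safetensors", 0.3),
    ],
    shift=8,             # ModelSamplingSD3 "shift"
    steps=4,
    cfg=1.0,
    sampler_name="lcm",
    scheduler="simple",  # scheduler choice is an assumption
)
```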
Old:
Lightx2v made a 14B self-forcing model that is a massive improvement over CausVid / AccVid. Kijai extracted it as a LoRA. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, and 8 shift; I'm still playing with settings to see what works best.
Please don't send me Buzz or anything; support the Lightx2v team or Kijai instead.
Comments
Can you explain what you mean by 2 steps without the LoRA? You mean having 3 samplers?
Yes, but with the new I2V LoRAs you don't need it anymore. They have almost no negative effect on prompt following or motion quality now, IMO.
Thanks!
Howdy, when I download the newly uploaded "T2V", the file is actually labeled "I2V".
Ah, my bad, apparently I misnamed it. Both of the new LoRAs are I2V.
@Ada321 - cool, all I knew was that something was amiss. Thanks for sharing.
A question: can I use this LoRA and Pusa V1 with FusionX too, or do I need the specific model you link? I use WAN 2.1 for now. I noticed that with FusionX, if I set the parameters you indicate for T2V, the characters' skin has a lot of glare. How can I improve this? I don't like it very much.
Coming back to say thanks for the new LoRAs and also the information about the TripleKSampler. That sampler has made WAN 2.2 much easier to use, and the quality is MUCH better. Whoever came up with that is a godsend!
Hey, is the newest one different from this?
https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Wan22_Lightx2v
I feel like such a noob; can someone link me a workflow that uses the TripleKSampler?
The repo has an example workflow you can use; alternatively, after installing it, go to the "custom_nodes/tripleksampler/example_workflow" folder and drag the JSON into ComfyUI.
Successful run of TripleKSampler on a 3060 with 12 GB; it looks very decent going from a 640x368 gen to an FHD Topaz upscale, 8 minutes for 49 frames!
For some reason I keep getting "LoRA key not loaded" with 1022, and then the output comes out extremely noisy. I read the documentation on the Hugging Face page and it says we need CLIP Vision hooked up? I thought CLIP Vision was no longer needed for WAN 2.2?
Just use a workflow from the TripleKSampler workflow folder.
My final verdict:
Using the 1022 high LoRA at 0.5–0.75 strength, only on the Lightx2v 4-step distilled high-noise model, gives me the results I want most of the time. The motion appears more dynamic, and prompt adherence improves slightly when the 1022 LoRA is used alongside the Lightx2v 4-step distilled model.
I also tested the 1022 LoRA with the base WAN 2.2 model, but the results were not what I expected. It's still good, though, if you need less motion.
I'm still using 2 KSamplers, not triple. TripleKSampler takes too long on my machine, so I went back to the 2-sampler setup because it gives me a reasonable gen time and quality.
The Lightx2v 1030 GGUF distilled model:
https://huggingface.co/jayn7/WAN2.2-I2V_A14B-DISTILL-LIGHTX2V-4STEP-GGUF/tree/main/high_noise_1030
*Only use the high 1022 LoRA on the high distilled model, and don't crank the LoRA strength up to 1 if you're using the distilled model; the distilled low model already includes the low LoRA. (The included LoRA is still the same one from Lightx2v WAN 2.1, so it's not a big deal.) If you add the 1022 low LoRA to the low distilled model, it'll just increase generation time without much improvement, and if you use a LoRA strength above 0.8 on the high distilled model, consistency (e.g. the face) starts to drift from the original image. (A sketch of this setup follows the replies below.)
What workflow? 3- or 2-sampler?
So you're using only the low-noise distilled model and the normal high-noise GGUF, and adding the 1022 LoRA on high at a lower strength? Because that seems to give me the "best" results so far.
@3dasdman302 2 sampler
@Lora_Addict No, I use the high distilled model + the 1022 LoRA (at lower strength) and the low distilled model without the 1022 LoRA. Both models are distilled.
@fronyax Thank you! Yes, that seems to work perfectly!
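Putting the thread together, here is a minimal two-sampler sketch of fronyax's setup; the filenames, the exact LoRA strength, and the step split are assumptions drawn from the comments above:

```python
# Two-sampler distilled setup as plain data (assumed names and step split).
two_sampler = [
    # 1) HIGH distilled model + the 1022 high LoRA at reduced strength.
    dict(model="wan2.2_i2v_high_noise_distill_lightx2v_4step.gguf",
         lora=("wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_1022", 0.6),
         cfg=1.0, start_at_step=0, end_at_step=2),
    # 2) LOW distilled model, NO extra 1022 LoRA (the low LoRA is baked in).
    dict(model="wan2.2_i2v_low_noise_distill_lightx2v_4step.gguf",
         lora=None,
         cfg=1.0, start_at_step=2, end_at_step=4),
]
# Keep the high-LoRA strength around 0.5-0.75; above ~0.8, faces reportedly
# start to drift from the source image.
```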
Thanks for this info. Do you have any suggestions on the new release? And is it possible to use a character LoRA trained on the T2V base model with the distilled model you suggest, for more consistency?
Which LoRA, high or low, is compatible with WAN 2.1? Because WAN 2.1 doesn't have low and high.
In the new 1022 version, I’m getting a “LoRA key not loaded” error.
Is there any way to fix this?
Guess I need to put it in the notes, as I've answered 3 comments already: it's fine, the file just contains extra modulation keys that ComfyUI does not use.
@Ada321 Oh, I see. The error appears, but it seems to work fine, so I guess it’s not a major issue. Thank you.
How do you guys avoid the prompted action being interrupted after just a couple of seconds and then returning to the initial point, as if it tried to make a loop?
Using WAN2.2-I2V_A14B-DISTILL-LIGHTX2V-4STEP-GGUF with the high-noise version of this 1022 LoRA at 0.7, plus other LoRAs combined; length 169 at 24 FPS, CFG 1, 4 steps, euler/simple in both samplers.
That's a limitation of the WAN model: if you generate more than 81 frames (5 sec), it tends to loop back. Just use the recommended setup for WAN, which is 16 FPS and 81 frames.
If you want to extend the video, grab its last frame and generate from that image with img2vid (a minimal sketch of this follows below).
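A minimal sketch of grabbing that last frame, assuming the clip is already saved to disk (uses imageio with the imageio-ffmpeg backend):

```python
# pip install imageio imageio-ffmpeg
import imageio.v3 as iio

frames = iio.imread("clip_part1.mp4")      # (num_frames, H, W, 3) uint8 array
iio.imwrite("last_frame.png", frames[-1])  # feed this back into your i2v workflow
```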
@fronyax Yes, I've been doing that; I was wondering if it's possible in one single generation, without needing to stitch videos, though.
@fronyax Wow
81 is WAN's limit, but the new SVI https://civitai.com/models/2066358?modelVersionId=2350967 can help with coherence when stringing longer clips together.
Also, it's still in the works, but for T2V, HoloCine does the best overall job at longer videos: https://github.com/kijai/ComfyUI-WanVideoWrapper/pull/1566
@Ada321 That SVI is interesting; is there a native-node implementation of it? It currently uses Kijai's nodes?
@Ada321 I checked the workflows and they're for WAN 2.1 480p. Is this doable with WAN 2.2?
@etherloth Somewhat, yes; the swimming one should have its workflow embedded, and it was made with 2.2.
You can use the context options node in WanVideoWrapper. The native implementation of context options isn't good, unfortunately. Most of my generations turn out coherent if the scene isn't too busy. I've gone as high as 329 frames at 16 FPS, around twenty seconds, at 512x896. It processes the video in batches of 81 frames. Worth trying. With Q6, I was hitting 19 GB of VRAM on I2V.
@solss_ Will look into this!
Files
wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_1022.safetensors