2.2
For I2V this motion helper node is extremely useful:
https://github.com/princepainter/ComfyUI-PainterI2V
10/30: The high-noise lora was further refined.
New I2V 1022 versions are out. They have by far the best prompt following / motion quality yet. (The lora key warning is fine: the lora just contains extra modulation keys that ComfyUI does not use. It doesn't matter.)
https://github.com/VraethrDalkr/ComfyUI-TripleKSampler
T2V versions just got updated 09/28. It's probably still best to use a step or two with CFG / without the lora to establish motion in the high-noise pass as usual, e.g. (see the sketch after this list):
2 steps high noise without the low-step lora at 3.5 CFG
2 steps high noise with lora and 1 CFG
2-4 steps low noise with lora and 1 CFG
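A minimal sketch of that split, assuming stock KSamplerAdvanced-style chaining over one shared schedule. All model/lora names here are placeholders, not files from this page:

```python
# Rough sketch, not a real workflow file: the three stages above map onto
# three chained KSamplerAdvanced nodes sharing one 6-step schedule.
TOTAL_STEPS = 6  # 2 + 2 + 2

stages = [
    # Stage 1: high-noise model, NO lightning lora, real CFG to establish motion.
    dict(model="wan2.2_high_noise", lora=None, cfg=3.5,
         start_at_step=0, end_at_step=2,
         add_noise=True, return_with_leftover_noise=True),
    # Stage 2: high-noise model WITH the lightning lora; distilled, so CFG 1.
    dict(model="wan2.2_high_noise", lora="lightning_high", cfg=1.0,
         start_at_step=2, end_at_step=4,
         add_noise=False, return_with_leftover_noise=True),
    # Stage 3: low-noise model WITH the lightning lora; finishes the denoise.
    # For the 4-step low-noise variant, use TOTAL_STEPS=8 and end_at_step=8.
    dict(model="wan2.2_low_noise", lora="lightning_low", cfg=1.0,
         start_at_step=4, end_at_step=6,
         add_noise=False, return_with_leftover_noise=False),
]
```

The add_noise / return_with_leftover_noise chaining is what lets the three samplers behave as one continuous denoise.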
It's definitely a big improvement either way.
T2V:
Using their full 'dyno' model as your high-noise model seems best.
"On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."
2.1
7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. Example is 4 steps, 1 CFG, LCM sampler, shift 8. I uploaded the new version of the T2V one as well.
I'm also putting up the rank-128 versions extracted by Kijai; they are double the size but slightly better quality.
I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa
No need for a 2-sampler workflow anymore IMO. Just plug it into your normal workflow with 1 CFG and 4 steps or so (see the sketch below). You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement for image-to-video like before.
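For reference, a minimal sketch of those settings as plain values (field names follow stock ComfyUI KSampler options; the shift is assumed to be set on a ModelSamplingSD3 node upstream of the sampler):

```python
# Single-pass settings quoted above; illustrative values only.
settings = dict(
    steps=4,
    cfg=1.0,
    sampler_name="lcm",
    scheduler="simple",  # assumption: the scheduler wasn't specified above
    denoise=1.0,
    shift=8.0,           # lives on a ModelSamplingSD3 node, not the KSampler
)
```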
Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models
Old:
lightx2v made a 14B self-forcing model that is a massive improvement over Causvid / Accvid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, shift 8; still playing with settings to see what is best.
Please don't send me buzz or anything, give the lightx2v team or kijai support if anyone.
Comments
2.2 Lightning just got an update; they added the i2v version too.
You know what is still missing? A lightx2v lora but for the 5B model. Just saying, maybe you could be the first to create it.
They've already listed 5B on their to-do list in the repo. But because it's already pretty fast, they're probably releasing it last.
Guys, does it make sense to install 2.2? I mostly mess with i2v, but apparently 2.2 is a heavy 720p model?
It does both 480p and 720p (though the lightning / self-forcing loras are only trained for 480p and perform best at 832x480, 480x832, 832x832, or something close to those). And it's two passes of the same-sized model: one for movement and one for detail. If you have the RAM to swap both, it's actually faster and uses the same amount of VRAM. And the quality difference and prompt-following ability are night-and-day better than 2.1.
Ada321 So in general, it's worth it, right? I'm still on the standard GUI and new to this whole thing, still practicing i2v generation. Is it true that i2v works better on Wan 2.2? Because on 2.1, the original image very often ended up broken or smeared.
Has anyone managed to go below the 4-inference-step threshold?
It's giving the error "Job Error list index out of range" in the mage.space import job. Any tips to fix it?
How can I use the two LoRAs in SwarmUI? I can load the Wan 2.2 High Noise model and have it switch to the Low Noise model at 3 steps, but I don't know how to assign one LoRA to one model and the other LoRA to the other.
Reading the official documentation is always a good start ;)
https://github.com/mcmonkeyprojects/SwarmUI/blob/master/docs/Video%20Model%20Support.md#wan-22
Thanks, works out of the box without wrapper workflow.
Do you have a workflow handy? I seem to be struggling with this one.
xuadamux373 I use Academia's workflow. Check his channel on YT.
Are the 2.2 Lightning loras posted here the same as https://huggingface.co/lightx2v/Wan2.2-Lightning ?
No, they are the Kijai extracted ones: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Wan22-Lightning
@Ada321 Extracted to improve quality, speed, or VRAM usage?
Has anyone encountered the error "lora key not loaded" while loading the self-forcing i2v?
The self-forcing lora just does that in the console. It has no effect on the video as far as I can tell; I mean, besides it generally slowing animation down to that slow-mo effect vs just native Wan.
From my experience, the new Lightning Wan 2.2 self-forcing lora doesn't do that, but the old one does, as @CAIdonni mentioned. Based on my limited understanding, it's because these accelerator LoRAs are not trained with image keys, so when the loader looks for an image key, it just isn't there (see the sketch below). Luckily, it doesn't affect the process, and the lora is still used to generate the video.
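A toy sketch of that behavior (not actual ComfyUI source, just the general idea): keys that map onto the model get applied, and unmatched keys only trigger the warning.

```python
# Toy illustration: unmatched LoRA keys are reported and skipped, while
# every key that does map onto the model still patches it as usual.
def apply_lora(model_keys: set, lora_keys: set) -> set:
    for key in sorted(lora_keys - model_keys):
        print(f"lora key not loaded: {key}")  # warning only; nothing breaks
    return lora_keys & model_keys
```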
TorstenTheNord i see, thx
Any idea how to control the steps with the new KSampler? I'm using 4 steps with a 0.9 boundary (i2v), and it makes the high model run only 1 step and the low model run 3 steps. How can I make it run 2 steps on each?
--Update--
Lowered the boundary to 0.8 and now it's 2/2.
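For anyone else tuning this, a toy sketch of why lowering the boundary shifts steps onto the high-noise model. The comparison rule here (each step's ending timestep after shift vs the boundary) is an assumption, not the node's actual source, but it reproduces the 1/3 and 2/2 splits above at shift 5:

```python
def split_steps(num_steps: int, shift: float, boundary: float):
    # Timestep at the END of each step on a linear 1 -> 0 schedule,
    # remapped by Wan's shift: t' = shift*t / (1 + (shift-1)*t).
    ends = [1.0 - (i + 1) / num_steps for i in range(num_steps)]
    shifted = [shift * t / (1 + (shift - 1) * t) for t in ends]
    high = sum(1 for t in shifted if t >= boundary)  # assumed comparison
    return high, num_steps - high

print(split_steps(4, shift=5.0, boundary=0.9))  # -> (1, 3)
print(split_steps(4, shift=5.0, boundary=0.8))  # -> (2, 2)
```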
I'm having a really hard time using other loras with this. It seems like no matter what strength I set a lora to, it only adheres as if it were set to 0.1 when used alongside this lora. Is this normal? Are there specific loras that work with this? I'm using the Wan 2.2 i2v 14B GGUF model (the Q8 version).
Can you tell me more?
TeosKuzen Yes. For example, when I try the ultimatedeepthroat lora, it will sometimes play out like a boomerang video, and the character just sucks the tip instead of going all the way. I'm wondering whether you're able to get loras to fully function the same as they would without this speed-up lora.
@aigenie Yes, I get it. I'm still struggling with this problem too. I even manage to generate 480p video in two steps with relatively good quality on the standard Wan 2.1 GUI, but the dynamics of the movement suffer greatly. I caught a glimpse of the FusionX lora discussions; it seems there is a node in ComfyUI that normalizes CFG. We'll probably have to try to sort something out in that direction. Have you tried Wan 2.2? I haven't yet.
@TeosKuzen Yes, I'm talking about 2.2.
@aigenie Oh, got it. I can't check 2.2 yet. But do the old loras still work style-wise? For some reason I thought a lora trained for 2.1 wouldn't work properly on 2.2, or is that not the case? I can give you approximate settings, with a JSON example, from the standard GUI for 2.1 with more or less lively dynamics, if needed. I just don't know if it's suitable for 2.2. Once I start testing it, I'll write if I find anything!
@aigenie Do you know any interesting loras that help with video detail?
I'm having terrible results, probably due to wrong settings in the KSampler and shift.
Can someone share or point me to a workflow for the new T2V lightning lora?
Can you add the workflow again? It's down.
Self-Forcing I2V R128 is perfect thing ✌️
What's the difference between self-forcing and lightning? Can anyone knowledgeable elaborate?
Loaded this up in my favorite workflow, but it's not doing anything faster or better. Can't tell what it's supposed to do differently.
Were you already using another speed lora? This one increases speed by reducing the number of required steps. Using a speed lora can reduce motion and prompt adherence.
Thanks, but the 2.2 workflow link is broken / gives an empty file.
Reuploaded it: https://files.catbox.moe/bfgbof.json
@Ada321 Thx a lot, man!
I get the error "This site can't be reached" and I don't know why!!!
The 3-stage workflow link is not opening @Ada321
30 minutes for 512x512 is too long; I guess the lightning loras are good.
After 1 hour I got a Wan 2.1 1.3B-level result. Thank you.
Your loras work outstandingly with Olivio Sarikas' workflow (his channel is on YouTube): Wan 2.2 Lightning Olivio.json
Please fix the link in your description; it's broken:
https://files.catbox.moe/bfgbof.json
As someone new to this, a description of what this lora does would be great.
It allows you to generate Wan videos much faster and with fewer steps needed.
Will this work with 720p resolutions? IKIK, first world problems..
None of them are trained for 720p; you won't get good motion.
@Ada321 Also, my lora output looks like shit when I try 1280x720.
The naming of these loras is all over the place!
"Self forcing", for example :/
Yeah, I agree. I feel like nerds really get a kick out of jargon and try to invent as much of it as possible. It's annoying. But in the case of "self forcing", the general idea is that the generation reinforces the prompt at every step, so it winds up as a more amped-up version of the prompt by the end, as opposed to only lightly adhering to it. This can create more dynamic end products. That's my understanding, anyhow.
@jazgalaxy581 That is the first time this naming has been understandable to me, thanks.
@pufferjacketeven475 "Wan21_I2V_14B_lightx2v_cfg_step_distill_lora_rank64_fixed.safetensors" <- the self-forcing file... Everything and its mother is mentioned, but not the actual "self forcing" 🤣
It's just what the creators named it.
Yeah, it took me a while to figure out that lightx2v and self-forcing are the same thing. Or are they? Still not really sure. Then there's "lightning", which is also lightx2v? Then there are Kijai's versions, and FusionX, which has lightx2v or self-forcing integrated, and there's Causvid and Accvid... 😣
@GayLizardSpy No. As far as I understand, Lightx2v and self forcing are not the same thing.
The "t2v" wherever you see it, means "text to video". Or i2v meaning image to video.
In the case of "lightning", it's a... ugh... quantized(?) version of the file that will speed up the rendering by using a more focused model.
as far as I understand.
@jazgalaxy581 Naming things is hard; it more than likely makes sense in the context of machine learning.
@GayLizardSpy Exactly :)
T2V and I2V Lightning 2509 are coming boys.
I2V when?
@Unscathed7928 Lightx2v said it's coming soon, no date announced though.
Details
Files
Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors
Mirrors
Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors
2.2_lightning-7August2025-High.safetensors
Kijai_Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors
Wan22_I2V_HIGH_Lightning_fp16.safetensors