2.2
For I2V this motion helper node is extremely useful:
https://github.com/princepainter/ComfyUI-PainterI2V
10/30: The high-noise lora was further refined.
New I2V 1022 versions are out. They have by far the best prompt following and motion quality yet. (The lora key warning is fine; the file just contains extra modulation keys that ComfyUI does not use. It does not matter.)
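If you want to see which keys that warning refers to, a quick check with the safetensors package works. This is only an illustrative snippet; the filename below is a placeholder for whichever lora file you downloaded.

```python
# Inspect which lora keys ComfyUI's "lora key not loaded" warning refers to.
# Requires `pip install safetensors`; the path is a placeholder for your local file.
from safetensors import safe_open

path = "wan2.2_i2v_lightx2v_1022_high.safetensors"  # hypothetical local filename

with safe_open(path, framework="pt") as f:
    modulation_keys = [k for k in f.keys() if "modulation" in k]

# ComfyUI simply skips these extra modulation keys, so the warning is harmless.
print(f"{len(modulation_keys)} modulation keys that ComfyUI will ignore")
print(modulation_keys[:5])
```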
https://github.com/VraethrDalkr/ComfyUI-TripleKSampler
T2V versions were updated 09/28. It's probably still best to run a step or two with CFG and without the lora on the high-noise model to establish motion, as usual, for example (see the sketch after this list):
2 steps high noise without the low-step lora at 3.5 CFG
2 steps high noise with the lora at 1 CFG
2-4 steps low noise with the lora at 1 CFG
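A minimal sketch of that split, written as plain Python data. The field names are made up for readability and are not ComfyUI node inputs; it only prints how the stages map onto one contiguous step schedule.

```python
# Illustrative only: the 2 + 2 + 2-4 step split from the list above, as plain data.
STAGES = [
    # (label,                 model,         lora,  cfg, steps)
    ("high noise, no lora",   "wan2.2 high", False, 3.5, 2),
    ("high noise, with lora", "wan2.2 high", True,  1.0, 2),
    ("low noise, with lora",  "wan2.2 low",  True,  1.0, 4),  # 2-4 steps
]

total = sum(steps for *_, steps in STAGES)
start = 0
for label, model, lora, cfg, steps in STAGES:
    end = start + steps
    print(f"steps {start}-{end} of {total}: {label} (model={model}, lora={lora}, cfg={cfg})")
    start = end
```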
It's definitely a big improvement either way.
T2V:
Using their full 'dyno' model as your high-noise model seems best.
"On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."
2.1
7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. Example is 4 steps, 1 CFG, LCM sampler, shift 8. I also uploaded the new version of the T2V one.
I'm also putting up the rank 128 versions extracted by Kijai; they are double the size but slightly better quality.
I suggest using it with the Pusa V1 lora as well; it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa
No need for a two-sampler workflow anymore, IMO. Just plug it into your normal workflow with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement for image-to-video like it did before.
Full image-to-video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models
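For reference, the "shift 8" in these settings is the flow-matching sigma shift. Here is a small sketch of what it does to a 4-step schedule, assuming the standard shift formula used by ComfyUI's ModelSamplingSD3-style nodes; the step spacing below is simplified for illustration.

```python
# Assumed standard flow-matching shift: sigma' = shift * sigma / (1 + (shift - 1) * sigma)
def shift_sigma(sigma: float, shift: float) -> float:
    return shift * sigma / (1 + (shift - 1) * sigma)

steps = 4
shift = 8.0
# Evenly spaced sigmas from 1.0 down toward 0.0 for a 4-step run (simplified).
base = [1 - i / steps for i in range(steps)]
shifted = [shift_sigma(s, shift) for s in base]

print("base sigmas:   ", [round(s, 3) for s in base])      # [1.0, 0.75, 0.5, 0.25]
print("shifted sigmas:", [round(s, 3) for s in shifted])   # [1.0, 0.96, 0.889, 0.727]
# A higher shift keeps the sampler at high noise levels for more of the run.
```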
Old:
Lightx2v made a 14B self-forcing model that is a massive improvement over CausVid / AccVid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, shift 8; I'm still playing with settings to see what works best.
Please don't send me buzz or anything; support the lightx2v team or Kijai instead, if anyone.
Comments (23)
Just wondering, has anyone else run into the problem where the character keeps moving their mouth in Wan 2.2 I2V with the self-forcing lora?
It happens when you don't describe anything else. Prompt an emotion, e.g. "she is smiling" or "she does not talk at all". Try some different prompts; it's solvable.
In ComfyUI with Kijai's workflow, the best way to handle this alongside lightx2v is to use NAG, which lets you use a negative prompt; then you can put things like talking, singing, and chattering into the negative prompt.
You need a WanVideo Apply NAG node with two WanVideo TextEncodeSingle nodes, one for positive and one for negative. Increasing nag_alpha to 0.5 or 0.8 will increase the effect of the negative prompt (if you use this setup, CFG needs to be 1).
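Very roughly, the wiring described in this comment looks like the sketch below, written as plain Python data. The node names come from the comment (Kijai's WanVideoWrapper), but the input/socket names are assumptions, not the wrapper's real field names.

```python
# Illustrative sketch of the NAG wiring described above, as plain data.
# Node class names follow the comment; input names are assumptions.
nag_graph = {
    "pos_text": {"class": "WanVideoTextEncodeSingle",
                 "inputs": {"prompt": "she smiles softly and does not talk"}},
    "neg_text": {"class": "WanVideoTextEncodeSingle",
                 "inputs": {"prompt": "talking, singing, chattering"}},
    "nag":      {"class": "WanVideoApplyNAG",
                 "inputs": {"positive": "pos_text",
                            "negative": "neg_text",
                            "nag_alpha": 0.5}},   # raise toward 0.8 for a stronger effect
    "sampler":  {"class": "WanVideoSampler",
                 "inputs": {"text_embeds": "nag",
                            "cfg": 1.0,           # CFG stays at 1 with this setup
                            "steps": 4}},
}

for name, node in nag_graph.items():
    print(f"{name}: {node['class']} <- {node['inputs']}")
```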
@girlsthatdontexist Thanks for your reply! Does adding negative prompts like talking or chatting to the negative field of the WanVideo TextEncode node have the same effect?
@maxabcr2000709 No, not even close.
@girlsthatdontexist Nice, I'll give it a try and see if I can use it on my current workflow
@girlsthatdontexist Thanks very much! I tried combining WanVideoNAG with the Wan2.2 Smooth Workflow and it worked fine!
The moving mouth issue is finally solved!
The Self-forcing 14B I2V 480p lora for Wan 2.1 is no longer available on the online generator for some reason; it's what I was using lately to get more fluid animations. Now I'm f*cked again...
I didn't change anything, and everything still shows as selected and available on my end. Must be an issue on Civitai's side.
@Ada321 Probably. It's easy to check: just copy the hash and try to add it on the online generator; it says 'no models found'. Edit: the hash of your lora is duplicated now? The online generator ignores your lora and only shows this one: https://civitai.com/models/1891481?modelVersionId=2140962 . The other day it didn't show anything. This is a mess...
@Ada321 When adding a resource to a video, this lora doesn't appear in the search results. It happened after you added the new T2V version, I think. It was fine before.
Yeah, can confirm something has been fucked for a few days now. I usually add the lightx2v/Lightning loras from this lora post to my uploaded gens, and they don't appear in the search results at all anymore.
+1
I usually add the resources manually, but since a couple of days ago I can't link this lora to my content when the checkpoint used is Wan Video 2.2 - 14B Image-to-Video | Wan Video Checkpoint | Civitai 😔
I think the problem with the lora not appearing could be that there are no images assigned to the "latest" release (the 9-28 versions).
At least when I click these, their pages have no images/videos, just your description. I know Civitai won't even display loras without at least one valid preview image, and it could be that it only checks your latest version for that requirement, ignoring previous versions' preview images.
But they do each have a preview video, the comparison video? And both show as normal; they are not NSFW or anything.
@Ada321 Huh, very strange. Nothing appears for me for these, but all the previous ones work.
https://i.imgur.com/vOdQjV8.png
I don't know what kind of sorcery this is, but now I can make videos in 60 seconds! Seriously, thanks a ton for your magic touch on this lora!
What are your TripleKSampler settings?
Resolution: 640x640 at 13 frames takes a minute or less in total, on a 4090.
The new i2v has been released.
I uploaded it the day it was released, but apparently Civitai did not like the video I used as a preview? It was 100% SFW. So I grabbed a random one someone else made. It should be visible now.
No low-noise one for this release? I'm confused. Or should I use the high one in both samplers?
@dxjaymz No, only a new high-noise one with better movement; you can use the regular 2.2 Lightning one for low, it works fine.
Files
Wan2.2-Lightning-T2V-0928-low.safetensors
Mirrors
low_noise_model.safetensors
Wan2.2-T2V-A14B-4steps-lora-lownoise.safetensors
Lightning_Latest_low.safetensors
Wan2.2-Lightning-T2V-0928-low.safetensors
Wan2.2-T2V-A14B-4steps-lora-250928-Low.safetensors
low-Wan2.2-T2V-A14B-4steps-lora-250928.safetensors
Wan2.2-Lightning low_noise_model.safetensors
Wan2.2-T2V-A14B-4steps-lora-250928_low_noise_model.safetensors
light_low_noise_model.safetensors
Wan2.2_T2V_A14B_4steps_low_lora_250928.safetensors