CivArchive

    2.2

    For I2V this motion helper node is extremely useful:

    https://github.com/princepainter/ComfyUI-PainterI2V

    10/30: The high-noise lora was further refined.

    New I2V 1022 versions are out. They have by far the best prompt following / motion quality yet. (The lora key warning is fine; the file just contains extra modulation keys that ComfyUI does not use. It does not matter.)

    https://github.com/VraethrDalkr/ComfyUI-TripleKSampler

    The T2V versions were updated 09/28. It's probably still best to run a step or two with real CFG and without the lora on the high-noise model to establish motion, as usual, e.g. (a rough sketch follows the list):

    2 steps high noise without the low-step lora at 3.5 CFG

    2 steps high noise with lora and 1 CFG

    2-4 steps low noise with lora and 1 CFG

    It's definitely a big improvement either way.
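    For reference, here is a minimal sketch of that three-stage split laid out as consecutive advanced-sampler passes. The stage layout, the add_noise / return_leftover_noise flags, and the model/lora names are my own assumptions based on the step counts above, not an official workflow:

        # Hypothetical 3-stage Wan 2.2 T2V split mirroring the step counts and CFG values above.
        # Stages hand leftover noise to each other, so only the first stage adds noise
        # and only the last one denoises fully.
        stages = [
            {"model": "wan2.2_high_noise", "lightning_lora": False, "cfg": 3.5, "start_step": 0, "end_step": 2},
            {"model": "wan2.2_high_noise", "lightning_lora": True,  "cfg": 1.0, "start_step": 2, "end_step": 4},
            {"model": "wan2.2_low_noise",  "lightning_lora": True,  "cfg": 1.0, "start_step": 4, "end_step": 6},
        ]

        total_steps = stages[-1]["end_step"]
        for i, s in enumerate(stages):
            lora = " + lightning lora" if s["lightning_lora"] else ""
            print(f"Stage {i + 1}: {s['model']}{lora}, "
                  f"steps {s['start_step']}-{s['end_step']} of {total_steps}, CFG {s['cfg']}, "
                  f"add_noise={'enable' if i == 0 else 'disable'}, "
                  f"return_leftover_noise={'enable' if i + 1 < len(stages) else 'disable'}")

    The last stage can run 2-4 steps; just extend its end_step (and the total) accordingly.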

    T2V:

    Using their full 'dyno' model as your high-noise model seems best.

    "On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."

    https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-250928-dyno/Wan2.2-T2V-A14B-4steps-250928-dyno-high-lightx2v.safetensors


    2.1

    7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. The example is 4 steps, 1 CFG, LCM sampler, 8 shift. I also uploaded the new version of the T2V one.

    I'm also putting up the rank 128 versions extracted by Kijai; they are double the size but slightly better quality.

    I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa

    No need for a 2-sampler workflow anymore IMO. Just plug it into your normal workflow with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement for image-to-video like before.
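    Roughly, the single-sampler setup looks like this (parameter names are illustrative, not a specific node's inputs):

        # Single-pass Wan 2.1 I2V settings described above; names are illustrative only.
        settings = {
            "steps": 4,                         # ~4 steps total, no high/low sampler split
            "cfg": 1.0,                         # CFG 1 since the lora is CFG-distilled
            "sampler": "lcm",
            "shift": 8.0,                       # sigma shift
            "lightx2v_i2v_lora_strength": 1.0,
            "pusa_v1_lora_strength": 1.0,       # optional; the strength here is just a placeholder
        }
        for name, value in settings.items():
            print(f"{name}: {value}")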

    Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models


    Old:
    lightx2v made a 14B self-forcing model that is a massive improvement compared to CausVid / AccVid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; I'm still playing with settings to see what is best.


    Please don't send me Buzz or anything; give the lightx2v team or Kijai support instead, if anyone.


    Comments (60)

    fronyaxAug 7, 2025· 4 reactions
    CivitAI

    2.2 Lightning just got an update; they added the i2v version too.

    H_for_HiAug 7, 2025· 1 reaction
    CivitAI

    You know what is still missing? A lightx2v version but for the 5B model. Just saying, maybe you could be the first to create it.

    lailandAug 7, 2025· 2 reactions

    They've already listed 5B on their todo list in the repo. But because it's already pretty fast, they're probably releasing it last.

    TeosKuzenAug 7, 2025
    CivitAI

    Guys, does it make sense to install 2.2? I'm mostly messing with i2v, but apparently 2.2 is a heavy 720p model?

    Ada321
    Author
    Aug 7, 2025· 5 reactions

    It does both 480P and 720P (though the lightning / self-forcing loras are only trained for 480P and perform best at 832 x 480, 480 x 832, 832 x 832, or something close to those). And it's 2 passes of the same-sized model: one for movement and one for detail. If you have the RAM to swap both, it's actually faster and uses the same amount of VRAM. And the quality difference and prompt-following ability are night and day better than 2.1.

    TeosKuzenAug 9, 2025

    Ada321 So in general it's a safe bet, right? I'm still working with the standard GUI and I'm new to this whole thing. I'm still practicing i2v generation. Is it true that i2v works better on Wan 2.2? Because on 2.1, very often the original image was broken or smeared.

    TeosKuzenAug 7, 2025
    CivitAI

    Has anyone managed to get past the threshold of 4 inference steps?

    sethturkss663Aug 7, 2025
    CivitAI

    It's giving the error "Job Error list index out of range" in the mage.space import job; any tips to fix it?

    StawyAug 7, 2025
    CivitAI

    How can I use the two LoRAs in SwarmUI? I can load the Wan 2.2 High Noise model and switch it to the Low Noise model in 3 steps, but I don't know how to assign one LoRA to one model and the other LoRA to the other model.

    Lora_AddictAug 10, 2025
    NFTGamer666Aug 8, 2025
    CivitAI

    Thanks, works out of the box without a wrapper workflow.

    datlurkaaAug 9, 2025

    Do you have a workflow handy? I seem to be struggling with this one.

    NFTGamer666Aug 10, 2025

    xuadamux373 I use Academia's workflow. Check his channel on YT.

    ai_fireAug 8, 2025
    CivitAI

    Are the 2.2 Lightning loras posted here the same as https://huggingface.co/lightx2v/Wan2.2-Lightning ?

    Ada321
    Author
    Aug 9, 2025
    HappyBotSep 24, 2025

    @Ada321 Extracted to improve quality, speed, or VRAM usage?

    ZlunlewAug 9, 2025
    CivitAI

    Has anyone encountered the error "lora key not loaded" while loading the self-forcing i2v?

    CAldonniAug 10, 2025· 1 reaction

    The self-forcing lora just does that in the console. It has no effect on the video as far as I can tell, aside from it generally slowing down the animation to that slow-mo effect vs. native Wan.

    TorstenTheNordAug 13, 2025

    From my experience, the new Lightning Wan2.2 self-forcing lora doesn't do that, but the old one does, as @CAldonni mentioned. Based on my limited understanding, it is because these accelerator LoRAs are not trained with image keys, so when the console looks for an image key it just isn't there. Luckily, it doesn't affect the process and the lora will still be used to generate the video.

    ZlunlewAug 16, 2025

    TorstenTheNord i see, thx

    fronyaxAug 14, 2025
    CivitAI

    Any idea how to control the steps with the new KSampler? I'm using 4 steps with a 0.9 boundary (i2v), and it makes the high model run only 1 step and the low model run 3 steps. How can I make it run 2 steps for the high model and 2 steps for the low model?

    --Update--

    Lowered the boundary to 0.8 and now it's 2/2.
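    In case it helps others, here is a rough sketch of how I understand the boundary splits the steps: the high-noise model keeps running while the normalized timestep (t / 1000) is at or above the boundary, then it switches to the low-noise model. The timestep values below are made up for illustration, not the node's actual schedule:

        def split_steps(timesteps, boundary):
            """Count how many steps go to the high- vs low-noise model."""
            high = sum(1 for t in timesteps if t / 1000.0 >= boundary)
            return high, len(timesteps) - high

        example_timesteps = [999, 885, 720, 430]  # made-up shifted 4-step schedule
        for boundary in (0.9, 0.8):
            high, low = split_steps(example_timesteps, boundary)
            print(f"boundary={boundary}: {high} high-noise steps, {low} low-noise steps")
        # With these example values, 0.9 gives a 1/3 split and 0.8 gives 2/2.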

    aigenieAug 15, 2025
    CivitAI

    I'm having a really hard time using other loras with this. It seems like no matter what I set the other lora to, it only adheres to it as if it were set to 0.1 while using this lora alongside it. Is this normal? Are there specific loras that work with this? I'm using the Wan 2.2 i2v 14B GGUF model, the Q8 version.

    TeosKuzenAug 16, 2025

    Can you tell me more?

    aigenieAug 20, 2025

    TeosKuzen Yes, like for example when I try the ultimatedeepthroat lora, it will sometimes play out like a boomerang video and the character only does part of the motion instead of going all the way. I'm wondering if you are able to get loras to fully function the same as they would without this speed-up lora.

    TeosKuzenAug 23, 2025

    @aigenie Yes, I get it. I'm still struggling with this problem too. I even manage to generate 480p video in two steps with relatively good quality on the standard Wan2.1 GUI, but the dynamics of the movement suffer greatly. I once caught a glimpse of the FusionX lora discussions; it seems like there is a node in ComfyUI that normalizes CFG. We'll probably have to try to sort something out in this direction. Have you tried working with Wan 2.2? I haven't yet.

    aigenieAug 23, 2025

    @TeosKuzen Yes, I am talking about 2.2.

    TeosKuzenAug 24, 2025

    @aigenie Oh, got it. I can't check 2.2 yet. But do the old loras still work style-wise? For some reason, I thought that a lora trained for 2.1 would not really work on 2.2? Or is this not the case? I can give you approximate settings with a JSON example from the standard GUI for version 2.1 with more or less lively dynamics, if necessary. I just don't know if it's suitable for 2.2. Once I start testing it, I'll write if I find anything!

    TeosKuzenAug 24, 2025

    @aigenie Do you know any interesting loras to support video detail?

    espinozaaAug 19, 2025
    CivitAI

    I am having terrible results, probably due to wrong settings in the KSampler and shift.

    Can someone share or point me to a workflow for the new T2V Lightning lora?

    mnexus7Aug 21, 2025· 10 reactions
    CivitAI

    Can you add the workflow again? It's down.

    mistporyvaevAug 21, 2025· 2 reactions
    CivitAI

    Self-Forcing I2V R128 is the perfect thing ✌️

    karaddsssAug 23, 2025· 8 reactions
    CivitAI

    What's the difference between self-forcing and lightning? Can anyone smart elaborate?

    randomchatter1234776Aug 23, 2025· 1 reaction
    CivitAI

    Loaded this up in my favorite workflow but it's not doing anything faster or better. Can't tell the difference or what it's supposed to do.

    ToxicBotAug 23, 2025

    Were you already using another speed lora? This one increases speed by reducing the number of required steps. Using a speed lora can reduce motion and prompt adherence.

    SillairArtAug 25, 2025· 7 reactions
    CivitAI

    Thanks, but the 2.2 workflow link is broken / gives an empty file.

    Ada321
    Author
    Aug 26, 2025· 1 reaction
    SillairArtAug 26, 2025

    @Ada321 Thx a lot, man!

    aneebartist683Aug 26, 2025

    I get the error "This site can't be reached" and I don't know why!!!

    aneebartist683Aug 25, 2025· 2 reactions
    CivitAI

    The 3-stage workflow link is not opening @Ada321

    aneebartist683Aug 25, 2025

    30 minutes for 512x512 is too long; I guess the Lightning loras are good.

    aneebartist683Aug 25, 2025

    After 1 hour I got a Wan 2.1 1.3B result, thank you.

    aneebartist683Aug 25, 2025· 7 reactions
    CivitAI

    Your loras are working outstandingly with Olivio Sarikas' workflow (his channel is on YouTube): Wan 2.2 Lightning Olivio.json

    lukusali911Aug 29, 2025· 5 reactions
    CivitAI

    Please fix your link in the description - it's broken:
    https://files.catbox.moe/bfgbof.json

    FaithlessnessBest435114Sep 6, 2025· 8 reactions
    CivitAI

    As someone new to this, a description of what this lora does would be great.

    GayLizardSpyOct 18, 2025

    It allows you to generate Wan videos much faster and with fewer steps needed.

    manchuwookSep 7, 2025
    CivitAI

    Will this work with 720p resolutions? IKIK, first world problems..

    Ada321
    Author
    Sep 14, 2025

    None of them are trained for 720P; you won't get good motion.

    zoroofcalls378Sep 14, 2025

    @Ada321 Also my lora output looks like shit when I try 1280x720.

    kallamamranSep 9, 2025· 10 reactions
    CivitAI

    The naming of these loras is all over the place!

    "Self forcing", for example :/

    jazgalaxy581Sep 10, 2025· 4 reactions

    Yeah, I agree. I feel like nerds really get a kick out of jargon and try to invent as much as possible. It's annoying. But in the case of "self forcing", the general idea is that the generation reinforces the prompt at every step across the generation, so it winds up becoming a more amped-up version of the prompt at the end of the generation, as opposed to only lightly adhering to the prompt. This can create more dynamic end products.

    That's my understanding, anyhow.

    pufferjacketeven475Sep 29, 2025· 1 reaction

    @jazgalaxy581 That is the first time this naming has been understandable to me, thanks.

    kallamamranOct 1, 2025· 2 reactions

    @pufferjacketeven475 "Wan21_I2V_14B_lightx2v_cfg_step_distill_lora_rank64_fixed.safetensors" <- the self-forcing file... Everything and its mother is mentioned, but not the actual "self forcing" 🤣

    Ada321
    Author
    Oct 6, 2025

    It's just what the creators named it.

    GayLizardSpyOct 17, 2025

    Yeah, it took me a while to figure out lightx2v and self-forcing are the same thing. Or are they? Still not really sure. Then there's "lightning", which is also lightx2v? Then there's Kijai's versions, and FusionX which has lightx2v or self-forcing integrated, and there's CausVid and AccVid... 😣

    jazgalaxy581Oct 17, 2025

    @GayLizardSpy No. As far as I understand, lightx2v and self forcing are not the same thing.

    The "t2v", wherever you see it, means "text to video"; "i2v" means image to video.

    In the case of "lightning", it's a... ugh... quantized(?) version of the file that speeds up the rendering by using a more focused model.

    As far as I understand.

    borkborkborkOct 17, 2025

    @jazgalaxy581 Naming things is hard; it more than likely makes sense in the context of machine learning.

    kallamamranOct 20, 2025

    @GayLizardSpy Exactly :)

    fronyaxSep 28, 2025· 4 reactions
    CivitAI

    T2V and I2V Lightning 2509 are coming, boys.

    Unscathed7928Sep 29, 2025

    I2V when?

    fronyaxOct 1, 2025

    @Unscathed7928 Lightx2v said it's coming soon, no date announced though.

    LORA
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    5,811
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/7/2025
    Updated
    5/14/2026
    Deleted
    -

    Files

    Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors

    Mirrors

    HuggingFace (61 mirrors)