CivArchive

    2.2

    For I2V this motion helper node is extremely useful:

    https://github.com/princepainter/ComfyUI-PainterI2V

10/30: The High lora was further refined.

New I2V 1022 versions are out. They have by far the best prompt following and motion quality yet. (The lora key warning is fine; the files just contain extra modulation keys that ComfyUI does not use. It does not matter.)

    https://github.com/VraethrDalkr/ComfyUI-TripleKSampler

T2V versions were updated 09/28. It's probably still best to run a step or two with CFG and without the lora on the high-noise model to establish motion, as usual, e.g.:

    2 steps high noise without the low-step lora at 3.5 CFG

    2 steps high noise with lora and 1 CFG

    2-4 steps low noise with lora and 1 CFG
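As a sanity check, the three-pass split above can be written out as plain data (a hypothetical sketch; the stage and field names are illustrative, not actual ComfyUI node or widget names):

```python
# Hypothetical sketch of the three-pass T2V schedule described above.
# Keys and stage names are illustrative, not real ComfyUI identifiers.
passes = [
    {"model": "high_noise", "lightning_lora": False, "steps": 2, "cfg": 3.5},
    {"model": "high_noise", "lightning_lora": True,  "steps": 2, "cfg": 1.0},
    {"model": "low_noise",  "lightning_lora": True,  "steps": 3, "cfg": 1.0},  # text says 2-4; 3 picked arbitrarily
]

# Only the first pass uses real CFG; the distilled-lora passes run at CFG 1.
cfg_passes = [p for p in passes if p["cfg"] > 1.0]
total_steps = sum(p["steps"] for p in passes)
print(total_steps, len(cfg_passes))  # → 7 1
```

The point of the split: the un-distilled high-noise steps at CFG 3.5 establish motion before the Lightning lora takes over at CFG 1.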

It's definitely a big improvement either way.

    T2V:

Using their full 'dyno' model as your high-noise model seems best.

    "On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."

    https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-250928-dyno/Wan2.2-T2V-A14B-4steps-250928-dyno-high-lightx2v.safetensors


    2.1

7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. The example is 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one as well.

I'm also putting up the rank 128 versions extracted by Kijai; they are double the size but slightly better quality.

I suggest using it with the Pusa V1 lora as well; it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa

No need for a 2-sampler WF anymore IMO. Just plug it into your normal WF with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement like before for image-to-video.
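A minimal sketch of the single-sampler setup described above, as plain data (field names and lora filenames are illustrative placeholders, not actual ComfyUI KSampler inputs):

```python
# Hypothetical single-pass I2V settings matching the text above.
# Keys and lora filenames are placeholders, not real ComfyUI identifiers.
single_pass = {
    "steps": 4,
    "cfg": 1.0,          # distilled lora runs at CFG 1
    "sampler": "lcm",
    "shift": 8,
    "loras": ["lightx2v_i2v.safetensors", "pusa_v1.safetensors"],  # placeholder names
}

# With CFG 1 there is typically no negative-prompt pass, so model evaluations
# equal the step count (versus roughly 2x the steps when CFG > 1).
model_calls = single_pass["steps"] * (2 if single_pass["cfg"] > 1.0 else 1)
print(model_calls)  # → 4
```

That halving of model calls at CFG 1, on top of the low step count, is why these distilled loras are so much faster than the base sampling setup.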

Full image-to-video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models


    Old:
Old:
lightx2v made a 14B self-forcing model that is a massive improvement over Causvid/Accvid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; still playing with settings to see what is best.


Please don't send me buzz or anything; support the lightx2v team or Kijai instead, if anyone.


    Comments (25)

qek · Aug 4, 2025

itasky · Aug 4, 2025

    T2V only

Ada321 (Author) · Aug 4, 2025

itasky Yeah, I've been playing with it for hours. I got OK-ish gens with the WF I just posted, but there's still a big loss in motion, just like old T2V loras on I2V. Hopefully an I2V version comes soon.

lost_moon · Aug 4, 2025

I either get slow-motion flickering or super-fast animations, no in-between. Edit: got it to work; I had some wrong settings, like too-high CFG, wrong steps, etc.

Ada321 (Author) · Aug 5, 2025

OK, I got it working pretty well now for I2V; I put up a new WF.

SUVO_RAW · Aug 5, 2025

After using this workflow and installing the custom nodes I get an error and Sage doesn't work anymore, WTF?

Ada321 (Author) · Aug 5, 2025

What is the error? It should not affect anything like that.

SUVO_RAW · Aug 6, 2025

Ada321 I thought so, but the installation of the custom nodes definitely broke everything. I don't know why, but I have already fixed it all. Maybe it's something to do with the Triton version.

The workflow is very powerful at 480p. I have already experimented with different combinations of settings, from 4 steps to 30, with the lora kicking in at different times and at different strengths. Also, CFG 2, 2 seems to be the best starting point. However, it is more on the slow side.

BUT I can't get it to work normally at 720p resolution. The native workflow, even with FP8 models, runs without problems and takes about 6 to 8 minutes for about 97 frames. In the wrapper workflow, I can barely start generating at 720. It either freezes or takes over 25 minutes, and the first sample is very slow, about 3.5 minutes.

    Any suggestions as to why that is?

Ada321 (Author) · Aug 6, 2025

SUVO_RAW The loras are only for 480p atm, which is 832 x 480, 480 x 832, and 832 x 832. Also, you normally use 1 CFG.

Kiefstorm · Aug 8, 2025

Ada321 Where did you get the info that these are only 480? I used it with 720 just fine. If you mean because the model info on Civitai says that, it's because you have to select either 480 or 720 when you upload. It looks like it was just trained on Wan2.2 14B I2V to me.

TeosKuzen · Aug 5, 2025

How are you guys doing with the dynamics and strength of motion in I2V generation when using LightX? Is it impossible to maintain strong motion with CFG=1 through the standard GUI?

Ada321 (Author) · Aug 5, 2025

Try the new WF; the new lora combo works extremely well. https://files.catbox.moe/6lp32g.json

TeosKuzen · Aug 5, 2025

Ada321 I don't know ComfyUI very well; where do I learn how to use it?

Kiefstorm · Aug 7, 2025

TeosKuzen Where do you learn how to use ComfyUI? Try YouTube.

Kiefstorm · Aug 7, 2025

Ada321 The workflow you linked isn't using this new I2V Lightning, btw.

Kiefstorm · Aug 7, 2025

Ada321 I tried the workflow you posted, put in some loras that matched the image, set the action lora to strength 2, and typed an unrelated prompt. The prompt was able to override the lora/image at CFG 1.0, so that was cool. The workflow seems to create good-quality videos without ruining the motion. I turn off Lightning on HIGH noise and turn it on at 1.0 for low noise, and that works well.

TeosKuzen · Aug 9, 2025

    Guys, do I understand correctly that the standard GUI in Pinokio is crap? And do I have to switch to ComfyUI to be cool like you?

gambikules858 · Aug 5, 2025

What does rank 32 / 64 / 128 mean? What is the difference?

Kiefstorm · Aug 7, 2025

If you plug that same question into Google you will get a detailed answer, but basically the larger-rank file will be a larger file size and a little bit better quality.
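To make that concrete: a rank-r LoRA approximates a weight update for a (d_out × d_in) layer with two small matrices, B (d_out × r) and A (r × d_in), so the extra parameters, and hence the file size, grow roughly linearly with rank. A toy calculation (the hidden size is made up for illustration, not Wan's actual dimension):

```python
def lora_params(d_out: int, d_in: int, rank: int) -> int:
    """Extra parameters a rank-`rank` LoRA adds to one (d_out x d_in) linear layer."""
    return d_out * rank + rank * d_in

d = 4096  # hypothetical hidden size, for illustration only
r128 = lora_params(d, d, 128)
r64 = lora_params(d, d, 64)
print(r128 // r64)  # → 2: rank 128 stores twice the parameters of rank 64
```

Higher rank lets the lora capture more detail from the model it was extracted from, which is why the rank 128 extraction is slightly better quality at double the size.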

gambikules858 · Aug 7, 2025

    Kiefstorm thx

Lora_Addict · Aug 5, 2025

    The movement is really good with the latest workflow!

Realmodels · Aug 5, 2025

Why are your WF and the WF embedded in your samples not using your high/low models?

Ada321 (Author) · Aug 5, 2025

Because, after testing extensively, the new Lightning models are terrible for I2V, which is what most people here use, let's be honest. So I have a WF that works best for that.

Realmodels · Aug 6, 2025

    Ada321 yes agree

lhhhhlqq472 · Aug 6, 2025

Can anyone make it for Google Colab, please?

    LORA
    Wan Video 2.2 T2V-A14B

    Details

    Downloads
    3,387
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/4/2025
    Updated
    5/14/2026
    Deleted
    -

    Files

    Wan2.2-Lightning_T2V-v1.1-A14B-4steps-lora_LOW_fp16.safetensors

    Mirrors

    HuggingFace (35 mirrors)

    Wan2.2-Lightning_T2V-A14B-4steps-lora_LOW_fp16.safetensors

    Mirrors