CivArchive

    2.2

    For I2V this motion helper node is extremely useful:

    https://github.com/princepainter/ComfyUI-PainterI2V

    10/30: The high-noise lora was further refined.

    New I2V 1022 versions are out. They have by far the best prompt following / motion quality yet. (The lora key warning is fine; the file just contains extra modulation keys that ComfyUI does not use. It does not matter.)

    https://github.com/VraethrDalkr/ComfyUI-TripleKSampler

    T2V versions just got updated 09/28. It's probably still best to run a step or two with CFG and without the lora on the high-noise model to establish motion, as usual, like:

    2 steps high noise without the low-step lora at 3.5 CFG

    2 steps high noise with lora and 1 CFG

    2-4 steps low noise with lora and 1 CFG

    It's definitely a big improvement either way.
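    The three-stage split above can be sketched as a step schedule. This is an illustrative Python sketch, not actual ComfyUI or TripleKSampler node code; all field names are hypothetical:

```python
# Illustrative sketch of the three-stage schedule above; field names are
# hypothetical, not actual ComfyUI / TripleKSampler parameters.

def three_stage_schedule(high_cfg_steps=2, high_lora_steps=2, low_lora_steps=3):
    stages = [
        # Stage 1: high-noise model, no distill lora, real CFG (establishes motion)
        {"model": "high_noise", "distill_lora": False, "cfg": 3.5, "steps": high_cfg_steps},
        # Stage 2: high-noise model, distill lora, CFG 1
        {"model": "high_noise", "distill_lora": True, "cfg": 1.0, "steps": high_lora_steps},
        # Stage 3: low-noise model, distill lora, CFG 1
        {"model": "low_noise", "distill_lora": True, "cfg": 1.0, "steps": low_lora_steps},
    ]
    # Each stage samples a contiguous slice [start_step, end_step) of one
    # shared schedule, the way chained advanced samplers do.
    start = 0
    for s in stages:
        s["start_step"], s["end_step"] = start, start + s["steps"]
        start += s["steps"]
    return start, stages

total, stages = three_stage_schedule()
print(total)  # 7
```

    In a chain of KSampler (Advanced) nodes this corresponds to each sampler running its own start/end slice of the same total step count, with noise added only in the first stage.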

    T2V:

    Using their full 'dyno' model as your high-noise model seems best.

    "On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."

    https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-250928-dyno/Wan2.2-T2V-A14B-4steps-250928-dyno-high-lightx2v.safetensors


    2.1

    7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. The example is 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one also.

    I'm also putting up the rank 128 versions extracted by Kijai; they are double the size but slightly better quality.

    I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa

    No need for a 2-sampler WF anymore IMO. Just plug it into your normal WF with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement for image to video like it did before.

    Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models


    Old:
    lightx2v made a 14B self-forcing model that is a massive improvement compared to CausVid / AccVid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; still playing with settings to see what is best.


    Please don't send me buzz or anything; if anyone, support the lightx2v team or Kijai.


    Comments (148)

    jraces5187Jul 16, 2025
    CivitAI

    Just downloaded the updated I2V and I'm getting the following error: "Error while loading Loras: Lora 'loras_i2v\\Wan21_I2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors' contains non Lora keys '['blocks.0.diff_m', 'blocks.1.diff_m', 'blocks.10.diff_m', 'blocks.11.diff_m', 'blocks.12.diff_m', 'blocks.13.diff_m', 'blocks.14.diff_m', 'blocks.15.diff_m', 'blocks.16.diff_m', 'blocks.17.diff_m', '...'"

    Ada321
    Author
    Jul 16, 2025

    Are you using it with a 14B 480P image to video model? I tested with both kijai's wrapper and native and it works.

    jraces5187Jul 16, 2025

    @Ada321 Yup. Although I'm using Wan on Pinokio, so that might be the conflict. This is the first time I'm seeing this error; both I2V and T2V present the same error.

    Ada321
    Author
    Jul 16, 2025· 1 reaction

    Maybe? Apparently the T2V one is having issues for some people: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v/discussions The I2V is working fine though.

    drone2222Jul 16, 2025

    Same issues with WanGP through Pinokio, I'm assuming anyone using WanGP is having the same problem?

    Ada321
    Author
    Jul 16, 2025

    I reuploaded it. Should be fixed.

    drone2222Jul 16, 2025

    @Ada321 Seems to be all good now, thanks for addressing it so quickly, you the goat

    Griphen116Jul 16, 2025· 1 reaction
    CivitAI

    T2V with the new V2 model just give noisy images. Any ideas?
    Same workflow that works with the V1 T2V lora.

    Ada321
    Author
    Jul 16, 2025

    Seems to be an issue others are having atm. I'm gonna hide it for now:
    https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v/discussions

    electricpickleJul 16, 2025

    yeah its the same for me, seems to be broken for now

    Ada321
    Author
    Jul 16, 2025· 1 reaction

    For now the new I2V lora seems to work better than the old T2V lora did even for T2V.

    Griphen116Jul 16, 2025

    Good news, looks like they responded to the thread on HF and fixed the issue with Lora keys. The fixed T2V is posted.

    Ada321
    Author
    Jul 16, 2025

    I reuploaded the fixed version.

    Griphen116Jul 16, 2025

    Actually, looks like they did the same fix for the I2V loras as well, and just uploaded new versions for both.

    electricpickleJul 16, 2025

    Hmmm it's still broken for me

    Ada321
    Author
    Jul 16, 2025

    @TheQuacktastic Huh, both it and the old one somehow worked for me.

    electricpickleJul 16, 2025

    @Ada321 Strange, i'll just use the I2V one for now i guess, that one works fine

    Griphen116Jul 16, 2025

    Yeah spoke too soon. Still broken for me.
    The I2V lora doesn't work either. The keys are not being loaded, so nothing is actually being patched in from the lora.

    Ada321
    Author
    Jul 16, 2025

    @Griphen116 The I2V lora is not working for you? Can you use a blank WF just to make sure? Because it is 100% working for me; I would not be getting the kind of results I have been getting with 4 steps otherwise. Also make sure your comfy is updated.

    flo11ok874Jul 16, 2025

    @Griphen116 @Ada321 Kijai just extracted and dropped a couple of T2V loras with different ranks https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v

    Ada321
    Author
    Jul 16, 2025· 1 reaction

    Reuploaded again with Kijai's versions

    flo11ok874Jul 16, 2025

    @Ada321 And now Kijai added a lot of new I2V loras with different ranks ;]

    dxjaymzJul 16, 2025· 4 reactions
    CivitAI

    I had no luck with the older ones but the new version works great! Speed and great movement!

    relative11Jul 16, 2025

    Do you have any workflow ?

    flo11ok874Jul 16, 2025· 2 reactions

    @So6sson For me Basic workflow with Lora works fine. From comfyUi wiki: 'Wan2.1 Video LoRA Workflow'

    Dumcluck51Jul 16, 2025· 1 reaction

    Nice.. I just used a typical WAN i2v workflow and added this Lora. Super quick and good quality.

    dxjaymzJul 16, 2025

    I just used my normal workflow with this lora, changed the CFG and sampler to LCM...

    dwiwork0123561Jul 16, 2025
    CivitAI

    How is AccVideo LoRA trained? Can you provide a link?

    Dumcluck51Jul 16, 2025· 1 reaction
    CivitAI

    I like the speed and quality but, for me anyway, I'm having trouble getting it to work with loras. Either it ignores the lora, or the prompt, or both; and more than one lora kills the quality. Steps = 4, CFG = 1.0, LCM, simple, denoise = 1.0

    flo11ok874Jul 16, 2025

    Yea it depends on the loras. But try different strengths of this lora (for example 0.6 for Lightx2v) and/or the other loras (0.9 and under). Sometimes it helps a lot.

    Baka_OppaiJul 17, 2025

    I had great success with 0.35 on this one; CausVid-type loras in my experience need to be set weak compared to everything else

    qekJul 18, 2025

    @dcham2310 Why LCM? I've been using Euler Beta, lightx2v strength 1.0, good quality
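    For context on the strength values discussed above: a lora's strength linearly scales the low-rank delta that gets merged into each base weight, W' = W + s·(up·down). A tiny pure-Python illustration (not ComfyUI code):

```python
# W' = W + strength * (up @ down): the lora delta is scaled linearly by
# strength before being merged into the base weight. Tiny pure-Python
# example with a 2x2 weight and a rank-1 lora; not ComfyUI code.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def apply_lora(W, up, down, strength):
    delta = matmul(up, down)
    return [[W[i][j] + strength * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # base weight
up = [[1.0], [0.0]]            # rank-1 lora factors
down = [[0.0, 2.0]]
print(apply_lora(W, up, down, 0.5))  # [[1.0, 1.0], [0.0, 1.0]]
```

    At strength 0 the base weight is untouched; at 0.6 you get 60% of the delta, which is why lowering strength softens a lora's effect on both motion and style.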

    GBRXJul 16, 2025· 9 reactions
    CivitAI

    This is the best i2v solution I've found so far. I'm getting Kling level quality which is something I've been trying to achieve for some time.

    compo6628585Jul 16, 2025· 3 reactions
    CivitAI

    This is now the top dog once again, the motion is much improved! Thanks for keeping things updated 🤜🤛

    CharlieBrown0115Jul 16, 2025

    @compo6628585 you are talking about the self-forcing i2v lora? the first option?

    compo6628585Jul 16, 2025· 1 reaction

    @CharlieBrown0115 thats the one!

    SECoursesJul 16, 2025· 3 reactions
    CivitAI

    text to video generates noise any ideas?

    perfectgfJul 16, 2025
    CivitAI

    What does the r128 version do? Thanks for the work by the way

    Ada321
    Author
    Jul 16, 2025· 3 reactions

    It's rank 128; it's twice as big but slightly higher quality. And thank lightx2v / Kijai

    AzzyboiJul 21, 2025

    Ada321 does it matter if I use the 480p or 720p version? I mean, I couldn't find where to download a rank-128 720p version

    ravenerkr841Jul 16, 2025· 2 reactions
    CivitAI


    I feel like the Loras are too strong. If I put in any Loras, it feels like they're being applied 10 times stronger. The prompt itself also feels like it's being applied 10 times stronger. The motion is bigger, which is good, but it's not controllable.

    Ada321
    Author
    Jul 16, 2025

    Are you using any cfg? This is supposed to be used with 1.0 cfg.

    ravenerkr841Jul 17, 2025

    Ada321 Yes, I'm using cfg=1, shift=8. I loaded Lora using the 'lora manager' node that I used in my existing workflow. I'll keep researching.

    flo11ok874Jul 17, 2025

    ravenerkr841 Lightx2v works differently with different other loras. Try this lora at lower strength. I got good results with 0.6, but it depends on your other loras.

    HikariasJul 16, 2025
    CivitAI

    What is self-forcing 14b i2v r128?
    Where did you get it?

    ESCANORSJul 17, 2025
    CivitAI

    R128 has some solid improvements in motion handling indeed! Thanks for sharing.

    DeamonizerJul 17, 2025
    CivitAI

    Will it work on wan Vace?

    electricpickleJul 18, 2025· 1 reaction

    yes

    qekJul 18, 2025

    For Wan Vace 14B

    fantafrutasdelbosqueJul 17, 2025· 1 reaction
    CivitAI

    Excellent quality, excellent speed!

    At least in my own workflow, however, setting CFG to 1 also means Comfy won't take a new base image. If, for instance, I run the workflow with base image a.png, then subsequently try with b.png, that second run will behave as if I'm using some garbled version of b.png. Anybody know what's going on? And/or, does anyone have a workflow in which this doesn't happen?

    Ponder_StibbonsJul 17, 2025

    We wrote our comments simultaneously it seems. Sounds like we have the same problem. Something is breaking somewhere. I tried purging nodes and unloading ones, nothing working so far.

    Ponder_StibbonsJul 17, 2025

    Bypass teacache and skip layer guidance if you're using them. They're being confounded by the sheer impudence of this LoRa. As would any sane node, one would imagine. They weren't designed for such tomfoolery. Anyway, that's what fixed it for me.

    Ponder_Stibbons Well, that fixed it for me as well. Thanks, friend!

    Ponder_StibbonsJul 17, 2025
    CivitAI

    This is a strange one I can't figure out. This is friggin amazing, really amazing. But it breaks something in comfy every time I use it and I have to restart it. First gen is perfect, and practically instantaneous. It will run a second time, but every gen after the first is a grainy mess, pretty much what a 4 step run would normally be. I can't see anything in the terminal to indicate a problem. Restarting comfy lets me run it fine again, just once. Was wondering if anyone else had seen this. Using 14BI2V, with config listed in the description.

    Ada321
    Author
    Jul 17, 2025

    I haven't seen anything like that before. Try using the default native wan WF to try and single out the issue.

    Ponder_StibbonsJul 17, 2025

    Ada321 I just located the issue. TeaCache is resetting too late and trying to reuse old data. Why this would happen I don't... oh crap. Layer skipping. Why didn't I think of that. This makes sense. There's a whole lot of crap in my workflow that is obviated by the LoRa that I hadn't thought to bypass to begin with. Not needed.... ah that took five seconds...bypassing teacache and skip layer guidance resolves the problem. Shouldn't be a problem enabling on the second stage, as there is actually stuff for the nodes to do with the second sampler.

    Ponder_StibbonsJul 17, 2025· 1 reaction

    I'm surprised that more people didn't have this same problem immediately as well. Seems like it should happen to anyone who enables this LoRa in a memory-optimized workflow. I should have tried an additional VRAM purge first probably, that might have fixed it too, albeit leaving me with superfluous nodes. It seems like the LoRa does some major tinkering to the model that is contaminating the poor VRAM like virtual variola. In any event, ditching the caching and skipping worked, and keeping them on the second stage is fine for a low denoise t2v as long as a different model is used. Despite the description, I'm keeping my cleanup stages. Minor headscratchers aside, this thing really kicks ass. How restrictive it is remains to be seen... but still, wow.

    Ada321
    Author
    Jul 17, 2025

    Ah, yea, don't use teacache with this. It would not really give any speedup when only using 4 steps anyway.

    ElnahrJul 17, 2025
    CivitAI

    Works great for motion, but I find the image quality lower than the FusionX base models. It seems to work with FusionX + this lora at a lower lora setting; still trying to find a good balance of lora strength for motion vs base model for image quality.

    Ada321
    Author
    Jul 17, 2025· 1 reaction

    Probably just the detail and MoviiGen loras that are included in FusionX; try using those with this.

    carvangarJul 18, 2025· 5 reactions
    CivitAI

    18 minutes with sage attention and teacache. Now with this it's down to 4 minutes. And no quality drop! Totally crazy lol.

    spoffninjaJul 19, 2025

    Wow that's impressive, mind sharing your workflow? With the workflows I have, I'm seeing an improvement but only by a margin. Sounds like your improvement is vastly different from mine. Quite possibly the workflow, as I use the FusionX ones, which are a bit of a spaghetti monster with custom nodes.

    SwissCorePyJul 18, 2025· 2 reactions
    CivitAI

    Really impressive. Better quality and faster generation. A 5 second video "only" takes 4-5 minutes with my ancient 2080ti (11GB VRAM).

    Thanks for sharing!

    qdr1enJul 18, 2025· 1 reaction
    CivitAI

    The best combination of Loras/Sampling settings so far!


    Better prompt adherence than native flow, more motion than previous versions, fastest generation speed. Works great.

    arkhan9Jul 18, 2025· 2 reactions
    CivitAI

    I was getting really static i2v results until i used this!! This is so good, the motion is unreal.

    RTX 4070 - 3 minute gens with decent quality!!! This is super fast.

    swanJul 18, 2025· 2 reactions
    CivitAI

    Very nice lora!
    I use Lightx2v together with CausVid because Lightx2v creates a bit of noise. This can achieve both speed and quality.

    dwiwork0123561Jul 21, 2025

    Can you specify which LoRA was used and what these two sets of weights are?

    ColorWolveJul 18, 2025· 6 reactions
    CivitAI

    HolyShit the new version i2v is FUCKING amazing, now this really feel likes free Kling at Home!!

    datlurkaaJul 19, 2025

    Hey - do you happen to have a good workflow for this? I'm really curious - haven't tried any self forcing before

    ColorWolveJul 20, 2025· 1 reaction

    xuadamux373  https://civitai.com/images/89299184
    just save my video, you will get the workflow

    datlurkaaJul 20, 2025· 1 reaction

    ColorWolve You’re the best man, gonna check it out when back from work

    Appreciated

    ColorWolveJul 20, 2025· 1 reaction

    xuadamux373 Enjoy~

    datlurkaaJul 20, 2025

    Yo this is wild, my gens went from 30 mins to 5

    Thanks man

    ColorWolveJul 20, 2025· 1 reaction

    xuadamux373 hahaha right, now this what i called kling at home~

    itaskyJul 20, 2025

    ColorWolve why i can't get your workflow on my comfyui?

    ColorWolveJul 21, 2025

    itasky 

    https://civitai.com/images/89299184

    Save this linked video; you can't get the workflow? That's impossible... maybe try updating your comfyui to the latest version?

    itaskyJul 21, 2025

    ColorWolve yep, i loaded your video into the latest comfyui version and nothing happens. i tried this external workflow app and it doesn't show any wf when i put your video into it. https://comfyui-embedded-workflow-editor.vercel.app/

    ColorWolveJul 21, 2025· 1 reaction

    itasky i have no idea, i tried the website you mention and none of the workflows are working, even the workflows from the loras' samples... i am using portable comfyui

    ColorWolveJul 21, 2025· 2 reactions

    itasky maybe your setting can only read json, so try this https://drive.google.com/file/d/1rdWzKxAsUdDe8nKF6dXNE8FDbsk7RABB/view?usp=sharing

    itaskyJul 21, 2025

    ColorWolve oh, now your video is working. i don't know how i fixed it, maybe by loading a json i downloaded in another comment. By the way, i am missing the FinalFrameSelector node :/

    ColorWolveJul 21, 2025

    itasky i think comfyui manager will show what's missing, so you can download it from there

    itaskyJul 21, 2025

    ColorWolve yep i am installing from Mediamixer pack. thanks for your support :)

    ColorWolveJul 21, 2025

    itasky you are welcome =)

    itaskyJul 21, 2025

    ColorWolve wow i reinstalled comfyui on Ubuntu and with your workflow i get the video in 100%|█████████████████████████████████████████████| 6/6 [01:09<00:00, 11.52s/it] :) so much faster than windows (without triton/sage)

    ColorWolveJul 22, 2025

    itasky hahaha i see, but i am not a fan of linux, so for me, windows speed is enough
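    For anyone puzzled by "save the video/image to get the workflow" in the thread above: ComfyUI embeds the workflow JSON in the output file's metadata. For PNG images it lives in a tEXt chunk (videos store it in container metadata instead). A minimal sketch of reading it, using a synthetic PNG built in the same snippet for illustration:

```python
import json
import struct
import zlib

# Read tEXt chunks from a PNG byte string; ComfyUI stores the workflow
# JSON under a keyword such as "workflow" in its image outputs.
def png_text_chunks(data):
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, text = body.partition(b"\x00")
            out[key.decode()] = text.decode()
        pos += 12 + length  # 4 length + 4 type + body + 4 CRC
    return out

# Build a minimal synthetic PNG with an embedded workflow to demo parsing.
def chunk(ctype, body):
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

wf = json.dumps({"nodes": []}).encode()
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + chunk(b"tEXt", b"workflow\x00" + wf)
       + chunk(b"IEND", b""))
print(png_text_chunks(png)["workflow"])  # {"nodes": []}
```

    This is why dragging a saved output into ComfyUI restores the graph, and why a re-encoded or metadata-stripped copy of the file won't.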

    jonk999Jul 19, 2025
    CivitAI

    What scheduler would you recommend?

    R3G4LJul 19, 2025

    For Kijai's WanVideo Wrapper I would say lcm and lcm/beta are pretty good at 6-8 steps; flowmatch_causvid is a good one at 6-8 steps. flowmatch_distill is limited to 4 steps, so it doesn't like any fast movement without being blurry. Test the various dpm++ schedulers and steps; they seem to do great.

    R3G4LJul 19, 2025· 1 reaction
    CivitAI

    LightX2V brought the best quality/detail out of quantized models when I compared Q6 with the FP8 I use, without a big hit to visuals. Remember that a GGUF of similar size to FP8 will take longer than FP8, but GGUF can be very useful if you have limited VRAM and use lower-quant models than Q6. Every quant / low-VRAM user just got a big upgrade in visual quality/detail/low-step speed with LightX2V.

    Also, Kijai's WanVideo Wrapper is now compatible with GGUF! Another upgrade for quantized models. Have fun!

    DJLegendsJul 20, 2025· 1 reaction

    which GGUF are you using, if you don't mind me asking

    fonso2Jul 22, 2025

    not the OP but personally I just use City96's Q3_K_M on a 3080 and 4090 rig. Quality is absolutely fine, and the generations are fast. umt5_xxl_fp16.safetensors for the clip.

    R3G4LJul 23, 2025· 1 reaction

    DJLegends Depends on what works best for your setup.

    Q6, for instance, can be more resource intensive than a 14B FP8 model. If you can run Q6, then switch to FP8 with quantization and you will be better off in speed, stability, and VRAM usage.

    GGUF models have to be dequantized as they generate, which causes them to take longer, but if the Q model is small enough you can negate the extra time. I use FP8 on a 12GB VRAM card with no issues; I just can't do 720p at 81 frames. I can do 576p (1024x576) at 117 frames no problem, and the quality/speed with LightX2V is great (using Kijai's WanVideo Wrapper).

    On i2v I notice that if you go to a high frame count around 117, WAN will use your image as the last frame in an attempt to loop another generation beyond its limit.

    TakujabaJul 19, 2025
    CivitAI

    Do you use the regular Wan 2.1 model with this lora? I'm very new to I2V and it doesn't seem to work as well for me as it does for others

    NewTesterAI574Jul 21, 2025
    CivitAI

    Very great lora. Thank you for sharing. 45 minutes down to around 5 minutes!

    OmifiJul 21, 2025
    CivitAI

    Is there a way to avoid the constant talking and overall ruining of a character's face with the self-forcing lora?

    flo11ok874Jul 21, 2025

    Use NAG and it will follow negative prompts even with CFG 1 (write what you don't want in the video in the negative prompt)

    BinaryBottleBakeJul 24, 2025

    flo11ok874 do you have a workflow or a guide for using NAG?

    flo11ok874Jul 24, 2025

    BinaryBottleBake It's easy: add the 'WanVideoNAG' node with default settings. Connect its model output to the shift and sampler nodes, and the conditioning to both prompt nodes. Or look at https://civitai.com/models/1736052?modelVersionId=1964792, which has NAG included. It's a great workflow btw; just replace the old Lightx2v lora with this new one.

    nsfwVariantJul 27, 2025

    You can also keep faces more stable using this Lora: https://civitai.com/models/1755105/wanfusionxfacenaturalizer

    ZelashZelashJul 22, 2025
    CivitAI

    this works great, i definitely have more movement now!

    but it's still not enough sometimes. if i increase the lora strength to 1.3 or more, the movement increases as well, but the quality takes a hit.
    i'm thinking maybe use a high strength value for the first one or two steps and end with a strength of 1 for this lora, but i don't know how to do that or if it will help

    Ada321
    Author
    Jul 22, 2025· 1 reaction

    Instead try upweighting your prompt. You could also try using NAG to use negative prompts.

    ZelashZelashJul 22, 2025

    Ada321 yeah, but NAG increases inference time, and i'm trying to reduce it as much as possible.
    i actually just did something simpler: putting more weight on the parts of the prompt that describe movement. it helps a lot

    Lora_AddictJul 24, 2025

    Try out the new Pusa lora in addition to this at 1.4 strength. It helps with motion according to multiple people, and I can confirm that after my first few tests.

    dwiwork0123561Jul 25, 2025

    marqs89 where is the lora,can you provide a link?

    ZelashZelashJul 26, 2025

    marqs89 i think it doesn't work.
    i get this message in the console:
    lora key not loaded: blocks.0.cross_attn.k.lora_A.default.weight
    lora key not loaded: blocks.0.cross_attn.k.lora_B.default.weight
    lora key not loaded: blocks.0.cross_attn.o.lora_A.default.weight
    lora key not loaded: blocks.0.cross_attn.o.lora_B.default.weight
    ... (and likewise for the q, v, ffn.0, ffn.2, and self_attn keys) ...

    the complete message is too long; it goes from blocks 0 to 39, and the generations look the same, so i guess it's not loading at all.
    i'm using wan i2v 480p GGUF Q8
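    A "lora key not loaded" wall like the one above is usually a naming-convention mismatch: the file uses diffusers/PEFT-style names (lora_A / lora_B plus a ".default" adapter suffix) while the loader expects a different scheme. A hedged sketch of a rename pass; the target names here (lora_down / lora_up) are an assumption for illustration, so check what your loader actually expects:

```python
# Rename diffusers/PEFT-style lora keys (lora_A / lora_B with a ".default"
# adapter suffix) to a lora_down / lora_up scheme. The target convention is
# an assumption for illustration; verify it against your loader before use.

def remap_key(key):
    key = key.replace(".lora_A.default.weight", ".lora_down.weight")
    key = key.replace(".lora_B.default.weight", ".lora_up.weight")
    return key

print(remap_key("blocks.0.cross_attn.k.lora_A.default.weight"))
# blocks.0.cross_attn.k.lora_down.weight
```

    A real fix would load the .safetensors state dict, remap every key, and save it back; in practice it's usually easier to grab a re-extracted version of the lora (e.g. Kijai's) that already uses the names your loader expects.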

    pirikiki152Jul 23, 2025
    CivitAI

    Wow that's incredible. Works like a charm really

    mirtmirtJul 23, 2025
    CivitAI

    can you share the new workflow you talked about, the one without 2 samplers?

    SamethingaiJul 24, 2025· 2 reactions
    CivitAI

    I keep getting outputs that try to form a loop. I.e., if I start with an image of someone sitting on a chair with the prompt "They stand up and walk away to the right", the result will be them standing up, stuttering a bit, and then sitting back down close to their original position.

    Using Wan Q8 GGUFs; tried 480p and 720p with only this lora. Otherwise it's pretty much the default i2v workflow

    LynMSJul 24, 2025
    CivitAI

    I don't know who's behind this lora, but thank you so much. It works great. However, there is still a "slow motion" effect in some results; some keywords or a different lora may help.

    I am wondering, is there any chance of a "720p i2v" release? It still works, but it doesn't understand prompts like the 480p model does. At least not in my tests.

    itaskyJul 25, 2025· 1 reaction
    CivitAI

    it's very fast; at 480x832 a 4090 reaches 11 s/it (on Ubuntu with triton/sage). but I noticed that if I change the CFG value to anything other than 1, the speed halves. why?

    Choco7172Jul 25, 2025· 3 reactions

    Because that's how the "accelerator" loras work: they achieve the speedup (well, most of them at least, e.g. CausVid, Lightx2v) by not needing CFG (i.e., it needs to be set to 1). Setting the CFG to 1 will instantly cut the gen time in half (and vice versa, hence you get double the gen time). There's one drawback though: you can't use a negative prompt if the CFG is 1. But there's a workaround, which is using NAG (Normalized Attention Guidance). So there are basically no drawbacks to not using CFG now.

    ravenerkr841Jul 25, 2025· 3 reactions

    When CFG > 1, the negative prompt is applied. When it is 1, the negative prompt is ignored, so the speed is doubled. So how do you apply a negative prompt then? Inject it into the model itself via the NAG node (like a lora). That's the magic.
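    The speed difference described above can be sketched in a few lines: with CFG > 1, classifier-free guidance needs a second model evaluation per step for the negative prompt, so each step costs two forward passes instead of one. `model` here is a stand-in callable, not a real diffusion model:

```python
# Why CFG > 1 roughly doubles step time: classifier-free guidance runs the
# model once on the positive prompt and once on the negative prompt, then
# blends the two predictions. At CFG 1 the blend is a no-op, so samplers
# skip the second pass entirely.

calls = 0

def model(x, prompt):
    """Stand-in for a diffusion model forward pass; counts invocations."""
    global calls
    calls += 1
    return x

def cfg_denoise(x, pos, neg, cfg):
    if cfg == 1.0:
        return model(x, pos)                # one forward pass per step
    uncond = model(x, neg)                  # extra pass for the negative prompt
    cond = model(x, pos)
    return uncond + cfg * (cond - uncond)

cfg_denoise(0.0, "pos", "neg", 1.0)
passes_at_cfg1 = calls
cfg_denoise(0.0, "pos", "neg", 3.5)
passes_at_cfg35 = calls - passes_at_cfg1
print(passes_at_cfg1, passes_at_cfg35)  # 1 2
```

    NAG sidesteps this by steering attention inside the single forward pass instead of adding a second one, which is why it restores negative prompts without doubling the time.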

    cooperdkJul 27, 2025

    FYI, my experience is that for AI, Linux is actually slower than Windows. Not much, but noticeable. I tested with Linux Mint, same version of the software, same CUDA version. It was like 5% slower.

    ZoeLeeBananaJul 25, 2025
    CivitAI

    Works like charm, good result is almost a guarantee <3

    AlberistJul 25, 2025
    CivitAI

    Has anyone else noticed it being hard to get motion on 2d images with this enabled? It works great for realistic/semi-realistic images, but I've had some difficulty getting results in more anime-style 2d scenes. But at the same time, I don't have a huge sample size and might be coming to the wrong conclusions based off some unlucky seeds.

    Ada321
    Author
    Jul 26, 2025

    No? 2d-style animations are almost all I do.

    AlberistJul 26, 2025

    Ada321 I'll keep trying, then. Maybe it's another setting somewhere.

    ZoeLeeBananaJul 26, 2025

    try adding "anime style" to your positive prompt

    gambikules858Jul 26, 2025
    CivitAI

    Why does T2V work better for i2v? The i2v lora is bad for me

    Ada321
    Author
    Jul 26, 2025

    It's not? Are you trying to use the image to video lora on the 720P model? It's made for the 480P model.

    qekJul 27, 2025

    Ada321 If yes, why not?

    EydahnJul 27, 2025
    CivitAI

    I'm testing it on WanGP, but I keep getting videos with weird lighting that makes them totally unusable. Also, it takes quite a while to generate; I've got 64GB of RAM and a 3090, and it still takes like 8-9 minutes. Not sure if I'm messing something up. This is my config: 4 steps, 81 frames, CFG 1, shift scale 8, Wan 2.1 720p

    flo11ok874Jul 28, 2025

    It's for the 480p Wan model

    AlextskJul 28, 2025
    CivitAI


    where to find workflow?

    alex_e85863Jul 29, 2025

    Download the video example, it contains the workflow.

    R3G4LJul 28, 2025· 7 reactions
    CivitAI

    Maybe you can add the Pusa lora to the next upload in this series to get all the essentials. Pusa is a great motion enhancer and works well with LightX2V. It will make all your motion loras shine.

    The FastWan lora is also good at low steps, and also nice for text2image. FastWan is like a much better version of Causvid (Pausvid).

    Mix FastWan at 0.2 strength with LightX2V 1.0 and Pusa 1.0 for good movement as well as increased quality at low steps.

    Also, you can add the TAEW2_1 safetensor if you want a live preview of your generation as it runs. You can cancel a bad generation early if you don't like what you see halfway; it saves so much time, and it's fun to watch it progress with each step.

    fronyaxJul 29, 2025

    Pusa lora is so large 4GB just for a lora 😥😥

    R3G4LJul 29, 2025

    fronyax Hmm, didn't realize it was that large, dang. You'll have to work with what you can. I always go over the VRAM limit and work backwards from there until the dang thing works; you don't know if you don't try. With blockswap I think about a 16-17GB checkpoint model is the limit for 4070 Super 12GB cards.

    The only parts that take long are the fp8 model offload and the tile decode; the generation itself is "normal" speed. Sage makes it faster for sure.


    I have to avoid the Pusa scheduler as it takes up too many resources and doesn't look as good. I use the flowmatch_causvid scheduler with good results at 6-10 steps if I like what I see at 4 steps.

    I don't know how much 40 blocks equals in terms of memory, but that is the limit; more system RAM won't do any good for the model part. More RAM is great for offloading the other loras, encoders, etc., and for large upscaling.

    --The Nunchaku team may be nearing completion in the coming months with their Nunchaku WAN 2.1, scheduled for their 0.4 release roadmap. Could be interesting; we'll see how accurate it is with the Nunchaku optimization. I do enjoy Nunchaku Flux and Nunchaku Kontext + Turbo lora, so I'm excited about that. Even with Wan 2.2 out, having a blazing fast low-VRAM 2.1 is very good. Maybe the work done for 2.1 can easily apply to Wan 2.2--

    vAnN47Jul 31, 2025· 1 reaction

    hi, i can't find any information on how to add TAEW2_1 and preview the generation. can you help me?

    KiefstormAug 8, 2025· 1 reaction

    Pusa + this lightning lora + Fusionx is great

    R3G4LAug 8, 2025· 1 reaction

    vAnN47 https://huggingface.co/Kijai/WanVideo_comfy/blob/main/taew2_1.safetensors

    For live playback of your generation, put this safetensors file in your "ComfyUI/models/vae_approx" folder.

    In your ComfyUI settings, go to VHS and, at the bottom, toggle on "Display animated previews when sampling".

    Hope this helps.


    diegomariopyande4640Jul 29, 2025
    CivitAI

    Hi,
    Thanks for the amazing loras.
    I don't know if you created the finetune or just released it, but do you plan to release a 2.2 self-forcing version?

    mrmihaelJul 31, 2025· 3 reactions
    CivitAI

    Can I use this Lora with 5B model?

    pcmr522142Aug 1, 2025

    no

    Lora_AddictJul 31, 2025
    CivitAI

    In your newest Wan 2.2 WF, you say in the notes that the lora FastWan_T2V_14B_480p_lora_rank_128_bf16 is used, but it's not selected anywhere in the WF. Do I need it, and if yes, where do I have to use it?

    AltraJul 31, 2025
    CivitAI

    Any plans for a 720 version?

    itaskyJul 31, 2025
    CivitAI

    when I use it and add a lora that introduces anatomical parts, they come out in a saturated color tending towards red. the only way to reduce the effect is to lower the lightx lora from 1 to around 0.6, but that makes the image lose quality :(

    qekAug 2, 2025

    I use Euler Beta and lightx strength 1

    magicballoonJul 31, 2025
    CivitAI

    I'm trying to run this on a 3080 Ti. I am using the Q4_K_S quantized WAN model because I only have 12GB VRAM. Trying to run this with the rank 128 Self-Forced LORA and Pusa enabled results in absurdly slow generation, like 11 minutes for a single step. I used the workflow from one of the videos you posted.

    tdougherty350505Aug 3, 2025

    I am very new to this, but I have a similar GPU with 12GB VRAM. Q4 is too high; step it down to the Q3_K_M version, it works well for me.

    magicballoonAug 3, 2025

    tdougherty350505 It works with the native flow though. I ended up doing some research and found a GitHub issue where Kijai talks about this. By his own admission, the native nodes are much better at memory management, and they do it automatically. I'd rather not give up quality, so I'll continue using the native flow. I encountered another problem with this flow: increasing the block swap did get it to work, but the entire video is covered in what I can only describe as an orange filter. I decided it was not worth the effort to tinker with for now.

    gambikules858Aug 5, 2025

    tdougherty350505 i have 3060 12GB and Q4_K_S 4steps total = 120 sec for 480x320

    magicballoonAug 5, 2025

    gambikules858 That's cool, but I want to generate in 6 and even 8 steps. It makes a huge difference in the quality of the motion

    It's a moot point now anyway. I've decided that if I'm going to spend a lot of time on this, I'll buy a better GPU. I can see the potential

    yaode360276Aug 3, 2025· 1 reaction
    CivitAI

    is this the same thing as lightX2v?

    qekAug 3, 2025

    Same

    LORA
    Wan Video 14B t2v

    Details

    Downloads
    4,993
    Platform
    CivitAI
    Platform Status
    Deleted
    Created
    7/16/2025
    Updated
    4/27/2026
    Deleted
    1/16/2026

    Files

    lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16_.safetensors

    Mirrors

    HuggingFace (49 mirrors)