CivArchive

    2.2

For I2V, this motion helper node is extremely useful:

    https://github.com/princepainter/ComfyUI-PainterI2V

10/30: The High lora was further refined.

New I2V 1022 versions are out. They have by far the best prompt following / motion quality yet. (The lora key warning is fine; the file just contains extra modulation keys that ComfyUI does not use. It doesn't matter.)

    https://github.com/VraethrDalkr/ComfyUI-TripleKSampler

T2V versions were just updated 09/28. It's probably still best to run a step or two with CFG and without the lora to establish motion in the high-noise phase, as usual, e.g.:

- 2 steps high noise without the low-step lora at 3.5 CFG

- 2 steps high noise with the lora at 1 CFG

- 2-4 steps low noise with the lora at 1 CFG
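As a rough illustration, the three-stage split above can be sketched as a small schedule helper. This is a hypothetical sketch only: the function name and tuple layout are made up for illustration and do not reflect the actual ComfyUI-TripleKSampler node API.

```python
# Hypothetical sketch of the three-stage split described above.
# Function name and tuple layout are illustrative, not the real
# ComfyUI-TripleKSampler API.

def three_stage_schedule(motion_steps=2, high_lora_steps=2,
                         low_lora_steps=3, motion_cfg=3.5):
    """Return contiguous (model, use_lora, cfg, start, end) stages."""
    stages = []
    step = 0
    # Stage 1: high-noise model, no speed lora, real CFG to establish motion.
    stages.append(("high_noise", False, motion_cfg, step, step + motion_steps))
    step += motion_steps
    # Stage 2: high-noise model with the Lightning lora, CFG 1.
    stages.append(("high_noise", True, 1.0, step, step + high_lora_steps))
    step += high_lora_steps
    # Stage 3: low-noise model with the lora, CFG 1, to finish denoising.
    stages.append(("low_noise", True, 1.0, step, step + low_lora_steps))
    return stages

plan = three_stage_schedule()  # 2 + 2 + 3 = 7 total steps
```

Each stage maps naturally onto a KSamplerAdvanced node's start/end step range, with only the first stage using a CFG above 1.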

It's definitely a big improvement either way.

    T2V:

Using their full 'dyno' model as your high-noise model seems best.

    "On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."

    https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-250928-dyno/Wan2.2-T2V-A14B-4steps-250928-dyno-high-lightx2v.safetensors


    2.1

7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. The example uses 4 steps, 1 CFG, the LCM sampler, and 8 shift. I uploaded the new version of the T2V one as well.

I'm also putting up the rank 128 versions extracted by Kijai; they are double the size but slightly better quality.

    I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa

No need for a 2-sampler WF anymore, IMO. Just plug it into your normal WF with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement for image to video like it did before.

Full image-to-video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models


    Old:
lightx2v made a 14B self-forcing model that is a massive improvement over CausVid / AccVid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; I'm still playing with settings to see what works best.


Please don't send me buzz or anything; if anyone, support the lightx2v team or Kijai.


    Comments (26)

fronyax · Jun 6, 2025 · 6 reactions

What is the purpose of these MPS and AccVid loras? Are they the same as CausVid?

compo6628585 · Jun 8, 2025 · 1 reaction

From what I've read and learned from testing, AccVid is basically the same as CausVid (fewer steps needed for quality, but it has motion issues as well). MPS is supposed to increase quality and prompt guidance at lower CFG (which it does, but it seems to have colour/lighting problems on some images; it also seems each image may need different settings to finesse). I just use CausVid v2 at 0.3; it's much better than the first version.

mobdik17378 · Jun 6, 2025 · 2 reactions

    I tried using the caus/acc/mps trio at recommended strength, and I can't stop getting blaring, bright colors instead of details unless I use 20+ steps, which defeats the point

amazingbeauty · Jun 6, 2025

What does the AccVid lora even do for I2V 480p 14B? Please explain.

yallapapi · Jun 7, 2025

    getting this error on wan2gp:

    Error

    "Error while loading Loras: Lora '{path}' contains non Lora keys '{trunc(invalid_keys,200)}'"

OrionMoonstone · Jun 9, 2025 · 1 reaction

Just found this in the docs for SwarmUI. Super helpful when you're trying to figure out how to set this up! Also, this is a MAJOR speedup. My usual gens are four 5-second videos strung together on a local rig with an RTX 3080 Ti. Gen time went from 70 minutes down to 40 with no noticeable loss in quality. Both I and my electric bill thank you! ^_^ https://github.com/mcmonkeyprojects/SwarmUI/blob/master/docs/Video%20Model%20Support.md#wan-causvid---high-speed-14b

fronyax · Jun 9, 2025 · 16 reactions

After testing with I2V GGUF Q3_KS using CausVid & AccVid along with some LoRAs, here's my conclusion:

    GGUF Workflow : https://filebin.net/tsi75bqsh9ipnhqg

    Pastebin : https://pastebin.com/5Vh2Bbmg

1. Best results with a two-sampler workflow. Using a two-sampler workflow with 6 steps (my sweet spot), combined with both CausVid and AccVid, produces the best results in my tests.

- First sampler: 2 loras and CFG 3, steps 0 to 3,

- Second sampler: CausVid and AccVid (0.5 and 0.7 strength), CFG 1, steps 3 to 6.

2. Improved prompt adherence. I'm using the Str1p LoRA. Before combining CausVid and AccVid, getting good results felt like a gamble; now it consistently generates good outputs.

3. Stable image quality. The image quality doesn't degrade significantly, and the subject's face remains mostly consistent.

4. Cleaner motion. There's no noticeable motion blur. The motion is much cleaner, and overall motion quality is significantly improved compared to using only one LoRA (either CausVid or AccVid) or a single sampler.
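As a rough sketch, the two-sampler split from point 1 above (steps 0-3 at CFG 3, then steps 3-6 at CFG 1 with CausVid 0.5 / AccVid 0.7) could look like this. The function and dict layout are hypothetical illustrations, not a real ComfyUI API, and the comment doesn't specify the pass-1 lora strengths.

```python
# Hypothetical sketch of the two-sampler split described in point 1 above.
# The dict layout is made up for illustration; it is not a ComfyUI API.

def two_sampler_plan(total_steps=6, switch_step=3):
    """Split denoising into a motion pass and a low-CFG refinement pass."""
    first_pass = {
        "steps": (0, switch_step),           # e.g. KSamplerAdvanced start/end
        "cfg": 3.0,                          # real CFG to establish motion
        "loras": ["CausVid", "AccVid"],      # strengths unspecified for pass 1
    }
    second_pass = {
        "steps": (switch_step, total_steps),
        "cfg": 1.0,                          # distilled speed loras run at CFG 1
        "loras": {"CausVid": 0.5, "AccVid": 0.7},
    }
    return first_pass, second_pass
```

The key design point is that the two step ranges are contiguous, so the second sampler resumes exactly where the first left off.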

seductivelyai695 · Jun 9, 2025

    so what kind of speed improvement did you notice?

DJLegends · Jun 9, 2025 · 3 reactions

any chance you could upload a modified workflow?

fronyax · Jun 11, 2025

@seductivelyai695 from 13 minutes down to 4-5 minutes with the same settings

DJLegends · Jun 11, 2025

    @fronyax holy shit this is fast asf on the 5080

vmrlgk9456491 · Jun 12, 2025

    GOAT

dirkzen · Jun 13, 2025

Holy crap, this workflow is amazing. I bumped up the steps slightly and adjusted a few things to match the models I use, but this is super solid. I've been trying to get these AccVid and CausVid things to work for days without good results, but this is pretty much perfect.

    My gen time went down from 15 minutes to like 5 (..and it still looks great!) I don't know what black magic you've got cooking in here, but this workflow needs to be pinned somewhere lmao.

Bouncer_AI · Jun 14, 2025

Your workflow is really cool; thanks for sharing. On a 16GB RTX 4060, it takes me 4 minutes to make an 81-frame video at 320x640. Before, it took me 13 minutes with another workflow. Thanks a lot!

fronyax · Jun 14, 2025

    @Bouncer_AI @dirkzen @DJLegends @vmrlgk9456491 

You're welcome, guys

Tschimm99999999999999999

@fronyax could you upload the WF again, please? Filebin is saying: "The file has been requested too many times." :-)

Quan_Chi · Jun 15, 2025

@Tschimm99999999999999999 I've just downloaded it by clicking the "Download files" button and choosing Zip

fronyax · Jun 15, 2025

@Tschimm99999999999999999 Download is still working; click the download button, don't click the file.

xG00N3Rx · Jun 15, 2025

    I'm using this and getting a weird issue where towards the end of the gen it gets very smeared and blurry, any ideas as to why this may be happening? Thanks in advance!

7093904 · Jun 22, 2025

@fronyax could you re-post that modified workflow? Your filebin links don't seem to last more than a few days. The pastebin link still works, but none of the filebin links do. Thanks!

crombobular · Jun 11, 2025 · 1 reaction

How do you use this for T2V? Can you make a WF for that?

Tschimm99999999999999999 · Jun 15, 2025 · 1 reaction

Use the EmptyImage node instead of the LoadImage node.

osakadon · Jun 16, 2025

So I'm getting confused now with all of the links posted in the description. Can the one Fusion X model be used on its own, with no need for AccVid or CausVid? I tried to load the workflow, but it only accepts .safetensors, not GGUF, so I was stuck and couldn't use it with the shared workflow.
A little clarity/explanation would help, please.

Ada321 (Author) · Jun 16, 2025

Things are changing every day lately, lol. Forget all the past models now; use the self-forcing lora.

    LORA
    Wan Video 14B i2v 480p

Details

Downloads: 2,669
Platform: CivitAI
Platform Status: Available
Created: 6/5/2025
Updated: 5/16/2026
Deleted: -

Files

Wan21_AccVid_I2V_480P_14B_lora_rank32_fp16.safetensors