CivArchive

    2.2

    For I2V this motion helper node is extremely useful:

    https://github.com/princepainter/ComfyUI-PainterI2V

    10/30: The high-noise lora was further refined.

    New I2V 1022 versions are out. They have by far the best prompt following and motion quality yet. (The lora key warning is harmless; the file just contains extra modulation keys that ComfyUI does not use.)

    https://github.com/VraethrDalkr/ComfyUI-TripleKSampler

    T2V versions were just updated 09/28. It is probably still best to run a step or two with CFG and without the lora on the high-noise model to establish motion, as usual, like:

    2 steps high noise without the low-step lora at 3.5 CFG

    2 steps high noise with lora and 1 CFG

    2-4 steps low noise with lora and 1 CFG
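    As a sketch, the three-stage schedule above (which the TripleKSampler node linked earlier automates) can be expressed as a simple step plan. This is illustrative pseudo-configuration, not the node's actual code; the function name and tuple layout are made up:

```python
# Hypothetical sketch of the three-stage T2V recipe above (NOT the
# actual ComfyUI-TripleKSampler code). Each stage records: which Wan 2.2
# expert runs, whether the lightning lora is applied, the CFG, and steps.

def three_stage_plan(low_noise_steps=3):
    """Return (model, lora_on, cfg, steps) for each stage, in order."""
    return [
        ("high_noise", False, 3.5, 2),  # establish motion with real CFG, no lora
        ("high_noise", True, 1.0, 2),   # high-noise expert with the lora, CFG 1
        ("low_noise", True, 1.0, low_noise_steps),  # low-noise expert finishes
    ]

total = sum(steps for *_, steps in three_stage_plan())
print(total)  # 7 steps total with the defaults (2 + 2 + 3)
```

    The point of the split is that only the first stage pays the double model call for CFG, so most of the speedup survives while motion is still established at real guidance.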

    It's definitely a big improvement either way.

    T2V:

    Using their full 'dyno' model as your high-noise model seems best.

    "On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."

    https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-250928-dyno/Wan2.2-T2V-A14B-4steps-250928-dyno-high-lightx2v.safetensors


    2.1

    7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. The example is 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one as well.

    I'm also putting up the rank-128 versions extracted by Kijai; they are double the size but slightly better quality.

    I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa

    No need for a two-sampler workflow anymore, IMO. Just plug it into your normal workflow with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but for image to video it no longer hurts the movement like before.

    Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models


    Old:
    lightx2v made a 14B self-forcing model that is a massive improvement compared to Causvid / Accvid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; still playing with settings to see what is best.


    Please don't send me buzz or anything; if anyone, support the lightx2v team or Kijai.


    Comments (142)

    paulrattnerMay 16, 2025· 4 reactions

    Holy crap, this actually works. It turns a 20 minute creation process into 3 minutes. No visible quality loss. Incredible.

    HikariasMay 16, 2025

    Which sampler and scheduler did you use?

    paulrattnerMay 16, 2025

    @Hikarias flowmatch causvid for the scheduler, but I don't even see a setting for sampler.

    CatzMay 16, 2025

    Woah that's awesome! How many steps and did you use 720p or 480p model?

    paulrattnerMay 16, 2025· 1 reaction

    @Catz 480. I just used the workflow embedded in the mp4 in the examples in this lora page. Steps is kind of odd. After you push steps up beyond about 8, the time taken flattens out to about 4 minutes for me, and doesn't get longer. It also doesn't seem to have much effect, so maybe there's an upper limit of some sort.

    paulrattnerMay 17, 2025

    OK. Try switching your scheduler to unipc/beta. large improvement in motion.

    firemanbrakeneckMay 16, 2025· 1 reaction

    By finetunes, are you referring to vace / fun? I was unaware there were any community ones.

    How's the lora combination support? Fast / lightning have always given me grief with those.

    And the samples don't really seem to be doing much for wan vids, not the best presentation frankly; if you could near replicate some existing published prompts it might be more enticing.

    Ada321
    Author
    May 16, 2025

    There is:

    Moviegen:
    https://huggingface.co/ZuluVision/MoviiGen1.1

    Wan fun control:
    https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-14B-Control

    SkyreelsV2:
    https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P

    VACE:
    https://huggingface.co/Wan-AI/Wan2.1-VACE-14B

    And hopefully soon this animation finetune:
    https://huggingface.co/IndexTeam/Index-anisora

    Works fine with loras.


    And I just grabbed some random images to use for image to video real quick from civitai, any suggestions?

    HikariasMay 16, 2025

    @Ada321 Hi. Do you know which I2V / image-to-video models exist?

    Most of the ones I know are text-to-video only.

    pasunnazacrifaMay 16, 2025

    @Ada321 On some of the portrait / talking ones, are face detail and hands something to worry about when generating fast?

    firemanbrakeneckMay 16, 2025

    @Ada321 Hmm, hadn't heard about moviegen / animation. Too bad it doesn't seem to have booru tagging, natural language word vomit is tedious.

    In that case, this guy made some awesome sfw previews (couldn't swing a dead cat around this place without hitting a dozen crazy nsfw samples), civ's metadata is lacking but the workflow appears to be attached in full to the vids: https://civitai.com/models/1525175/wan-i2v-skyreels-i2v-morphing-into-plushtoy-trained-on-sr-v2-i2v

    ZojixMay 16, 2025

    Nice, thanks. It can work with SkyReels V2, I suppose?

    MagicalEroticaMay 16, 2025

    Does anyone know if there are differences between the Kijai models here: https://huggingface.co/Kijai/WanVideo_comfy/tree/main

    and the ones provided by ComfyUI here?: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/diffusion_models

    Ada321
    Author
    May 16, 2025

    The causvid ones are the same.

    mylo1337May 16, 2025· 3 reactions

    This also works with native comfy nodes/gguf by using the beta scheduler. So you don't even need to use the wrapper. (originally said AYS, but beta is even better and works with 2 steps)

    firemanbrakeneckMay 16, 2025

    Which model type, SD1 / SDXL / SVD? Or are there new ones?

    crombobularMay 16, 2025

    Where do you get the AlignYourSteps scheduler?

    firemanbrakeneckMay 16, 2025

    @crombobular Comfy core. Search alignyoursteps.

    crombobularMay 16, 2025· 1 reaction

    @firemanbrakeneck Found it, though it just produces garbled messes with my workflow. Do you have a working example?

    mylo1337May 16, 2025

    Scratch that, beta scheduler is better for this. I got nearly the same quality results on 2 steps beta as I did on 4 steps AYS.

    sonofabeanMay 16, 2025

    @firemanbrakeneck In KSampler? AlignYourSteps isn't in mine; I checked the samplers too just in case. I have karras, sgm, the usual.

    mylo1337May 16, 2025· 1 reaction

    @boz255 It's SamplerCustom. But you should use beta instead, it's even better, and beta is built into KSampler.

    ptrprkrMay 16, 2025· 1 reaction

    @mylo1337 can you share your workflow? i keep trying but mine gens a jumbled mess of pixels and colors

    mylo1337May 16, 2025

    @ptrprkr I'm just using swarmui, and I'm doing i2v. 1 video cfg and 2 or 4 steps.

    firemanbrakeneckMay 16, 2025· 1 reaction

    @crombobular I too am getting little but noise on ays / beta, with 0.5/1/1.5 causvid weight, 8 steps, 1 cfg, 6 shift.

    @boz255 It's a different node called "AlignYourStepsScheduler", which outputs sigmas, like basicscheduler. But it's no good for me.

    wurrgit981May 16, 2025

    I would also love to see a workflow, pretty please? :)

    ptrprkrMay 16, 2025

    For those with jumbled, messy generations on subsequent tries (from the 2nd or 3rd I2V), try disabling the skip layer and teacache optimizations and see if that helps; it worked for me.

    crombobularMay 16, 2025· 1 reaction

    I still get super messy generations even with those off. I guess this is better for I2V than T2V. I'm also trying to use it with native instead of Kijai, so that probably doesn't help.

    aiaskMay 16, 2025· 2 reactions

    Definitely seems to speed up generation, but I can't seem to mix other loras in with it, and there's a noticeable drop in quality (I use GGUF though). It seems very likely my workflow just doesn't work well with it anyway.

    TurboCoomerMay 16, 2025

    How did you make it work with default nodes in the first place?

    aiaskMay 16, 2025

    @TurboCoomer My workflows are on any of the more recent videos I've posted if you want to check for yourself, but I am using some custom nodes through ComfyUI Manager. All I did was add in the lora, lower the steps a little, and try both the normal (what I usually use) and beta schedulers.

    Ada321
    Author
    May 16, 2025· 1 reaction

    Other loras seem to work fine with it; it works best with action / motion loras, I would say. If you lose too much motion, turn the weight of this lora down and increase the steps by just a bit. If you use regular schedulers, turning shift up can also help.

    And make sure not to use teacache or stuff like that.

    RedditUser981May 16, 2025

    What is this? I am using Wan 2.1 I2V; is this going to be helpful for me?

    6028976May 16, 2025

    It's the equivalent of the fast lora, but for Wan (the fast lora was on Hunyuan). You can reduce the number of steps significantly while still maintaining quite decent quality, so depending on how it works in your workflow it can basically be a great time saver. And it seems to work decently with I2V, yes.

    CatzMay 16, 2025· 5 reactions

    There's a discussion on reddit of people's test results and settings, good as a second reference:
    https://www.reddit.com/r/StableDiffusion/comments/1knuafk/causvid_lora_massive_speedup_for_wan21_made_by/

    fronyaxMay 16, 2025· 1 reaction

    I've seen noticeable degradation in subject's motion/movement quality with this LoRA. Other than that, it's a godsend.

    Ada321
    Author
    May 16, 2025· 1 reaction

    For more motion, turn this lora's weight down a bit more; you might have to add another 2 or so steps to make up for it. Or use a lora with motion trained into it. It's for sure best used with action loras.

    6028976May 16, 2025

    Good for 'live wallpaper' style (limited) motion so far, quite impressive in fact, and fast thanks to the vastly reduced step requirements. I will test more classical movement later; I feel it will not be the same deal, but still, pretty good.

    Ada321
    Author
    May 16, 2025

    It for sure works best with action loras, which tend to override motions anyway. That said, for just Wan by itself you can turn this lora's weight down a bit and add another few steps in exchange for more movement. You can also use a different scheduler and increase shift, or play with turning CFG back on just a bit.

    compo6628585May 16, 2025

    Hopefully this will be updated to help with the movement. If/when it does, it will be a massive game-changer imo, as it literally cut gen times for me by easily 75% while keeping video quality high. I can't use this as is; it's almost I2I for me 🤣

    Ada321
    Author
    May 16, 2025

    For movement, either use loras based on actions / movements, or decrease its weight a bit and increase the steps a bit more. You could also use a regular scheduler and increase shift a bit, or play with turning CFG back on just a bit.

    I had recommended 0.5, BUT that was with action loras that imparted their own movement. Using it without movement from loras, I can see what people mean with the weight that high.

    compo6628585May 16, 2025· 1 reaction

    @Ada321 Aye mate, I read through all the other comments where you mentioned those things... I tried them all. Strength from 0.5 down to 0. I've tried a few different loras (sex themed... don't judge lol), tried ~7 different schedulers, even tried changing CFG, which was a bad idea! Oddly for me though, I gained movement with LESS steps, but when I get down to 4 the video starts to degrade a lot. Thanks for this though buddy, I'm sure everyone appreciates it, and I'm sure some people's setups work better than others 👍

    Ada321
    Author
    May 16, 2025· 1 reaction

    @compo6628585 "I gained movement with LESS steps" sounds like it's overcooking it / the weight is too high. Give me a bit; I'll try to find a better recommendation for using it without motion loras.

    Edit: While using a motion / action lora still works best, injecting noise might also help. I was also playing with starting without the lora for about 3 steps and then doing the rest with it, which works well but cuts the speedup roughly in half.

    Ada321
    Author
    May 17, 2025· 1 reaction

    Ok, reduce its weight to 0.3 or so, switch to unipc scheduler and use 12-15 steps. (Note that the flowfield_causvid scheduler limits itself to 9 steps max.) This should fix the movement for when you don't have a lora with motion trained into it.

    compo6628585May 18, 2025

    @Ada321 Thank you so much for taking the time to test, my friend! These new tweaks have helped so much. You've aced it!

    misoraMay 17, 2025

    I tested using the LoRA, and it was possible to generate videos of the same quality as without it, with about half the number of generation steps.

    If you can reduce the number of generation steps, not only can you shorten the generation time, but you can also reduce the amount of VRAM used, so many people can benefit from this LoRA.

    LovelaceAMay 17, 2025· 8 reactions

    Some observations after some short tests in 1 hour:

    1. On the native comfyui wan workflow it works with the unipc sampler / beta scheduler; it also seems to work with the gradient estimation sampler.

    2. It can be used with other loras! Splendid!

    3. Noticeable speed-up on fp8 safetensors for both 480p and 720p. For 480p, a 30-50 frame video can be generated in under 1 minute. For 720p it takes a bit longer, but the effect is still huge compared to 10 minutes per generation without the lora.

    4. Does not work quite so well with GGUF, at least in my test.

    5. Quality loss is acceptable, but it depends on the subject, I guess. For videos with mostly static objects / limited subtle motion / very linear motion / objects that barely change, the quality loss is not that obvious. But for larger motions it does have more impact.

    6. I really see great potential in this lora and technique... This can bring video generation speed to image generation level.

    ZojixMay 17, 2025

    Getting bad results with I2V, probably my workflow. Is it possible to have a link to a good I2V workflow?

    Ada321
    Author
    May 17, 2025· 4 reactions
    ZojixMay 17, 2025

    @Ada321 thanks!

    kkyy4545May 17, 2025

    @Ada321 I can't download it

    derispan6661071May 17, 2025

    @Ada321 Thanks!

    CyberfolkMay 17, 2025· 5 reactions

    this changes everything for me

    CyclopsGERMay 18, 2025

    Tested around and it works great with 25 frames (1 sec) and is very fast (needed 43 sec for generation), but as soon as I use 97 frames it's very slow (running more than 40 minutes).
    Triton is also activated and I have a 4090.

    Ada321
    Author
    May 18, 2025· 2 reactions

    Just sounds like you're overflowing from VRAM to RAM.

    CyclopsGERMay 18, 2025

    @Ada321 Thanks, not sure what to do now but I will look into this topic!

    lost_moonMay 18, 2025· 4 reactions

    Some numbers using an RTX 5070ti, 720x480, 8 second video duration.

    Using this workflow https://civitai.com/articles/13328 the low-vram gguf version 3b,

    Models: WAN2.1 i2v Q5_K_M.gguf, Clip umt5-xxl-encoder-Q5_K_M.gguf, with automatic prompt of florence2 enabled.

    Enabled optimizations: speed regulation, CFGZeroStar, temporal attention, skip Layer, TeaCache (0.19), long video patch (5s+), TorchCompile, sageattention (native, without the node in the workflow, using "--use-sage-attention" launcher flag. Using triton-for-windows (haven't got triton native windows to run yet))

    Settings without causvid lora: no lora, CFG 4, steps 20

    Duration total: 490.67 seconds; second run 480.99 s

    Causvid:
    Disabled: skip layer (it will create very noisy output if enabled together with the causvid lora)

    Settings with lora: lora weight 0.3, CFG 1, steps 15

    Duration total: 271.68 seconds

    I only did a few test runs, so I can't conclude a reduction in motion yet, as it might as well be seed variation. I did use 2 loras in my testing for motion as well, so keep that in mind; you might render even faster without loras and notice motion reduction. This comment is mostly about speed comparison :)

    please2000Aug 12, 2025

    care to share the workflow? If you still have it.

    lost_moonAug 12, 2025

    @please2000 basically this workflow for the most part: https://civitai.com/articles/13328
    I don't know if my conclusion in the original comment is still relevant. The Light2x lora at 4 steps works really well: lora weight 1, rest as a regular wan2.1 workflow. Set TeaCache to disabled and weight 0.01, or else it might bug out every few generations.

    DarkAmbassadorMay 19, 2025

    can it run with GGUF model? :)

    lordkek53May 19, 2025

    it does run with gguf model.

    DJLegendsMay 20, 2025

    works crazy with GGUF

    LatteLeopardMay 19, 2025

    Absolute game changer. SageAttention + CausVid got me 5 second videos made in 240 seconds.
    Setup was the following:

    4090RTX
    Wan 2.1 480 14b FP16 (comfyOrg)
    SageAttention
    CausVid 0.3 strength

    CFG 1
    Steps 14
    3 Other Loras

    (No Teacache, it fucks quality up bad for me)

    So how exactly does this lora work?
    How is this even possible?
    It barely even has quality loss.

    mylo1337May 19, 2025· 3 reactions

    I use causvid at 0.5 strength with beta scheduler, 4 steps, 2 cfg. Other lora(s) like normal. It has nearly the same quality as 20 steps without the lora. It made local gens much more capable lol.

    It is just a speedup lora, like lcm, hyper, turbo, etc. But for videos, it was distilled to retain quality at low step counts. I've gotten good videos on 2 steps, great videos on 4.

    The reason teacache doesn't work well with the lora is because teacache is used to skip steps, but causvid completely changes how steps are paced, so teacache will almost always skip the wrong steps. Even then, causvid works with 4 steps already, so teacache would be pretty much useless there anyway.
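    A toy way to see that point about step pacing: a cache heuristic that reuses the previous output when consecutive noise levels barely change finds plenty of skippable steps on a long schedule, and none on a 4-step distilled one. This is a deliberately simplified stand-in, not TeaCache's actual algorithm (which compares model inputs, not sigmas):

```python
# Toy illustration (NOT TeaCache's real algorithm): a cache that reuses
# the previous model output whenever the noise level barely changed.
# On a long schedule that fires often; on a 4-step distilled schedule
# every jump is large, so nothing can be skipped.
import numpy as np

def skippable_steps(sigmas, rel_threshold=0.08):
    """Count steps whose relative sigma change is small enough to reuse a cached output."""
    sigmas = np.asarray(sigmas, dtype=float)
    rel = np.abs(np.diff(sigmas)) / sigmas[:-1]
    return int(np.sum(rel < rel_threshold))

long_schedule = np.linspace(1.0, 0.05, 21)   # 20 sampling steps
short_schedule = np.linspace(1.0, 0.05, 5)   # 4 distilled steps
print(skippable_steps(long_schedule), skippable_steps(short_schedule))  # 9 0
```

    With only 4 steps there is simply no slack for the cache to exploit, which matches the observation that the two optimizations don't stack.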

    6927513May 20, 2025

    The causvid 14B lora works better than the causvid 1.3B lora when inferencing Wan Fun 2.1 v1 or v1.1; it makes gens a lot more temporally consistent and enhances motion a ton. 0.3 strength, unipc scheduler, 15 steps.

    6927513May 20, 2025

    @mylo1337 It's improved all gens, both high and low step, for me; it's quite good.

    LatteLeopardMay 20, 2025

    @mylo1337 thank you so much for the detailed explanation. Would you mind explaining how the settings affect generation with the lora?

    Like, what does high strength vs low strength do for the causvid lora?

    Does higher = faster, or more quality motion?

    How does CFG affect quality and speed?

    I'm just curious how I can tweak it, if I ever find myself having trouble with a gen. Like if I wanted to sacrifice some speed for quality.

    gman_umschtMay 20, 2025

    How do you combine Causvid with the other Lora? rgthree PowerLoraLoader? Is the Causvid at 1st position or does it not matter?

    mylo1337May 20, 2025· 1 reaction

    @gman_umscht The position of the lora doesn't really matter. It's like comparing "1 + 2 + 3" with "1 + 3 + 2", you get the same result.

    You just gotta have causvid loaded onto the model somewhere, doesn't matter exactly when, as long as it's before the sampler node
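    The order-independence claim can be checked directly: each lora is merged as an additive weight delta, and addition is commutative. A minimal numpy sketch, with a single dense matrix standing in for the real low-rank A/B factors:

```python
# Sketch of why lora load order doesn't matter: loras add (scaled) weight
# deltas to the base weights, and addition commutes. Simplified to one
# dense matrix; real loras store low-rank A/B factor pairs.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))              # base model weight
causvid = 0.7 * rng.standard_normal((8, 8))  # causvid delta at 0.7 strength
other = rng.standard_normal((8, 8))          # some other lora's delta

order_a = W + causvid + other  # causvid loaded first
order_b = W + other + causvid  # causvid loaded second
print(np.allclose(order_a, order_b))  # True
```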

    mylo1337May 20, 2025

    @LatteLeopard In my testing, the lora's strength being too low causes blurriness at low steps (similar to no lora), and setting it too high can kind of fry the videos (similar to high CFG).

    The sweet spots depend on the sampler, if you want something fast, 4 steps with beta scheduler (and a sampler like euler) can get good results at 0.5x strength.

    I've seen people use unipc simple for 12 steps as well.

    I use a cfg of 2, I don't know how much it affects movement though, I believe 2 at least seemed better in my tests than 1 for prompt adherence.

    Oh also, too few steps can cause shakiness; I usually get a little shakiness on 2 steps and a lot on 1.

    jarvanMay 24, 2025

    @gman_umscht Have you had success using other loras? I tried several loras and they don't seem to work...

    yaode360276May 19, 2025· 1 reaction

    It leads to an increase in sharpness and saturation. How can I solve this problem?

    darios_manaris245May 19, 2025

    I get better / more natural movements and better prompt adherence with higher causvid values, ~0.7.

    0.1 absolutely destroys movement. 0.3 is mainly lora-induced movement, but 0.7 is the sweet spot for me.

    Seems weird, since others report 0.3 as a good value...

    Ada321
    Author
    May 19, 2025· 2 reactions

    I've been testing a ton as well; I wanted to really hone in on a good set of settings before updating. But yeah, I was getting 0.7 to work as a really nice sweet spot lately: 90% of the motion quality, still good speed at 9 steps. I'm playing with the clown sampler to find the best one for the job (injecting just a little noise through the process gives it even more movement back). I'll try to update soonish.

    And most people use loras that have actions trained in, which in those cases don't really suffer from loss of motion quality / prompt following, since the lora contains that. I'm testing complicated prompts / motions without lora assistance to find a good set of settings.

    AjaxdiffusionMay 20, 2025

    0.6-0.7 is also my sweet spot.
    At 0.3 I get only lora-induced movements.

    DJLegendsMay 20, 2025

    @Ajaxdiffusion what does lora-induced movement mean?

    AjaxdiffusionMay 20, 2025

    Fantastic work!
    Do you think it would be possible to somehow merge this LoRA into the Wan base model?
    Or am I totally oversimplifying things here? 😄

    mylo1337May 20, 2025· 1 reaction

    The lora was extracted from a base model.

    https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid

    Merging it back would just reverse the extraction, losing a bit of quality in the process since the lora is only rank 32.
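    A rough numpy illustration of why a rank-32 extraction is lossy: truncating a weight delta to its top 32 singular values discards everything in the remaining directions. A random matrix is the worst case here (real finetune deltas are far closer to low-rank, which is why the extraction works at all), but the reconstruction is still not exact:

```python
# Sketch: rank-32 truncation of a weight delta throws information away.
import numpy as np

rng = np.random.default_rng(0)
delta = rng.standard_normal((512, 512))  # stand-in for (distilled - base) weights

# Best rank-32 approximation via truncated SVD
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
r = 32
low_rank = (U[:, :r] * S[:r]) @ Vt[:r]

# Relative Frobenius error of the reconstruction (0 would mean lossless)
err = np.linalg.norm(delta - low_rank) / np.linalg.norm(delta)
print(err > 0)  # True: information was lost
```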

    crombobularMay 20, 2025

    workflow for native too?

    gambikules858May 21, 2025

    Just add this lora, that's all.

    gambikules858May 21, 2025

    God, 100 sec for 65 length at 960x544 on a 3060 lol (1.3B, 0.5 lora, 6 steps, 1 CFG). Native workflow.

    DRZ3000May 21, 2025

    I found that low CFG washes out liquid projectiles; the only way to reduce that is to increase CFG to 2, which doubles generation time (61 frames goes from ~1:30 to ~3:30 with the parameters below).

    For me optimal parameters for 14B:

    - lora weight = 0.3-0.5

    - Steps = 8-10 (later I'm doing a second pass with a 1.3B model)

    - Cfg = 2

    - Sampler = uni_pc

    - Scheduler = normal/simple (normal is, I think, slightly better).

    P.S. additionaly interesting schedulers for cfg = 1:

    -beta - creative

    -ddim_uniform - even more creative

    (Not usable with high cfg)

    Ada321
    Author
    May 21, 2025· 1 reaction

    Try 0.7 weight, 1.0 CFG, 1 shift (it gets blurry otherwise), 9 steps, euler, beta. This is the best combo I have found after messing with it for hours.

    DRZ3000May 21, 2025

    @Ada321 Yeah, it starts working better for me with your combo, but with 10 steps and 4 shift (otherwise liquid is still washed out). Hard to interpolate to another source scene; I need to test further. Anyway, thanks!

    DJLegendsMay 21, 2025

    Hmm, I'm currently having an issue where I get flashes at the beginning of my generated videos, and I don't know how to remove that correctly.
    Video with embedded workflow here: https://files.catbox.moe/kkdzs0.mp4

    DJLegendsMay 21, 2025

    Huh, looks like your newest example also has that "flashing" issue at the beginning of the video.

    Ada321
    Author
    May 22, 2025

    remove teacache, gonna update it myself

    MYY2023May 22, 2025

    I occasionally run into even worse flickering problems; in my case it was caused by the CausVid lora weight being too high (>0.5). So the first possible fix is to reduce the CausVid lora weight to below 0.5. The second is to keep using the high-weight CausVid lora, but use Get Image or Mask From Batch from KJNodes to cut off the first few frames.

    DJLegendsMay 22, 2025

    @Ada321 I'm actually not running teacache, only sage attention.

    blaMay 24, 2025

    I think that happens when you choose a non-supported resolution on I2V (anything other than 480x832 or 832x480).

    MYY2023May 22, 2025· 1 reaction

    Below is the best 3-step setup from my testing:

    steps = 3

    sampler = uni_pc

    scheduler = kl_optimal

    lora weight = 0.7

    checkpoint = Vace_14b_Q4_K_S.gguf

    Note: use KJNodes' Get Image or Mask Range From Batch to drop the first 10 frames (when lora weight > 0.5 the first few frames flicker). Even with 10 frames wasted, this setup is still fast enough!

    Ada321
    Author
    May 22, 2025

    3 steps did not work well for me but this works great at 15 steps

    MYY2023May 22, 2025

    @Ada321 Maybe the key is using the VACE model? I just posted two videos made with VACE 14B + CausVid that include the full parameters, so you can reference their quality and settings. https://civitai.com/posts/17273283 https://civitai.com/posts/17274126

    AlvinazaytsevaMay 22, 2025

    Please advise the best settings for I2V? Euler/beta, CFG 1, 0.7 lora, 9 steps, shift 1 = the picture falls apart like the Thanos snap in Avengers; shift 2-5 does it fine; better with unipc simple, 12 steps. I use the GGUF 480p 14B.

    Ada321
    Author
    May 22, 2025

    Try the latest workflow I posted: 0.7 weight, uni_pc, kl_optimal, 1.0 CFG, 15 steps, 1 shift. The woman shooting a gun also contains the workflow; it uses a custom node, but you can just reroute stuff if you don't want it.

    AlvinazaytsevaMay 22, 2025

    @Ada321 Good, but slower than unipc simple, 12 steps, 5 shift on the other workflow, with roughly the same results.

    kurtast88942May 22, 2025

    Do I save your custom node as a .json file? I tried that and dropped it in custom_nodes, but it doesn't detect anything. Does it need to be in a subfolder in custom_nodes?

    Ada321
    Author
    May 22, 2025· 1 reaction

    It needs to be a .py file

    kurtast88942May 22, 2025

    I did realize that, sorry haha. I'm still not sure where to save it in custom_nodes though... ComfyUI can't seem to detect it.

    Ada321
    Author
    May 22, 2025· 1 reaction

    @kurtast88942 How odd; maybe try adding a blank __init__.py file into the custom node's folder.

    kurtast88942May 22, 2025

    That did it! Thanks so much! I appreciate all the work!!!

    poondoggleMay 22, 2025

    I'm testing the brand new workflow and it works fine, but I'm seeing very slow speeds for having a mobile 5090. With Sage Attention enabled it takes about 4.5 minutes to render 5 seconds of video using the workflow's default settings, other than me changing the resolution to 416x608 (half the image size). Without Sage Attention it's over 5 minutes. I know the mobile 5090 is quite a bit slower than the desktop version, but I get roughly the same speed using another workflow without causvid but utilizing teacache, and to my eyes the quality looks better. Anyone else faced with this? I feel like I'm missing something.

    Ada321
    Author
    May 22, 2025· 1 reaction

    Upon googling it, the mobile 5090 seems to be about 25% slower than a 4090, which takes me a bit less than 3 minutes to gen 15 steps at 640x640, 81 frames, with fp8 fast and torch compile, so that sounds about right.

    poondoggleMay 22, 2025

    @Ada321 Ok, that's good to know. I wonder why my teacache enabled workflow without causvid is about the same speed but with better quality. Thanks for your work here though. I've been learning a lot.

    Ada321
    Author
    May 22, 2025

    @poondoggle Even at the same number of steps, this should be 2x as fast due to not needing CFG. And 15 steps is about half or less of what you normally need without it for decent-looking gens, so it's more like a 4x speedup. Compare with the same res / frames.

    jj43797771May 22, 2025

    So I really like this. I've been using it with T2V 14B, and after a while I realized it is actually doing something that I can't figure out how to reduce.

    When you run teacache into skip layer guidance and turn that skip up high, it creates a kind of stronger outline / contrast around things. Now I have the model loader running straight into the shift node and have bypassed all that, and this lora causes that exact same thing, except to a more severe degree. I've noticed it here too with people's generations; a lot of my videos seem to put the subject on top of the environment, as in they feel like two separate things: it will generate the background environment and then the person, and they won't really fit together, so I'm not sure how to remedy this. I've tried shift 1/3/5, CFG is 1 of course, different samplers, multiple generations at different steps, but it seems to happen no matter what.

    I just thought I'd report my experience in case it helps you in some way. Still think it's really cool!

    It's also making a lot of my subjects stay still sometimes. Strange.

    Ada321
    Author
    May 22, 2025

    Don't use teacache with it, I noticed a huge degradation.

    jj43797771May 25, 2025

    @Ada321 Yeah, there's no teacache being used at all once I realized that. The problem is it also keeps generating the exact same person even with randomized seeds, due to CFG being 1; it just pulls from the dataset the lora was trained on. I think for T2V it might need some tweaking or a way around that.

    HituhMay 22, 2025

    I've been trying to troubleshoot this on my own, but I'm honestly out of ideas at this point.

    I'm using a Vace14b + Wan T2V setup, following a workflow (strongly similar to this: https://www.youtube.com/watch?v=3tu-sTY0k6M) that involves masking objects with SAM and using a reference image along with some VACE magic to replace those objects. While the overall workflow is pretty solid, and the causvid speed gains are incredible, I'm running into a recurring issue when using it. I sometimes need higher causvid strength to "help" in replacing the object; however, higher strength causes some issues.

    Specifically, the video quality drops noticeably—to the point where faces, both in the background and sometimes even in the foreground, get heavily distorted. Details like teeth just become a white blur, and the overall video looks like it's been hit with a bad compression filter. I don't run into this problem when using something like 20 steps with DPM++, the non-masked part of the video looks exactly as in the original, but any CausVid attempts, regardless of sampler or step count, consistently introduce these artifacts.

    Has anyone else experienced this issue with CausVid? Is this just an inherent limitation of using it, or is there a workaround (maybe some scheduler/steps combination I didn't try yet) to preserve quality, especially in non-masked areas of the video?

    Ada321
    Author
    May 22, 2025· 1 reaction

    Vace needs a much lower weight, like 0.3. I also saw that disabling every 5th block helped a bunch.

    HituhMay 23, 2025

    @Ada321 Still not perfect, but the lower weight helped (0.5 for me). Also the 5th block swap, and setting shift to 1. Thanks!

BananaUnited · May 23, 2025
    CivitAI

There was no issue when I used this LoRA at 0.3, but when I use it at 0.5 to 0.75, the video shows strong contrast. I used the usual I2V workflow. Could something be wrong?

    Ada321
    Author
    May 23, 2025

    Did you use my workflow?

Yourmomd · May 26, 2025

@Ada321 Is this workflow https://files.catbox.moe/rz55fd.json the one with the 2 samplers? I can't figure out how to link 2 samplers together.

ESCANORS · May 23, 2025
    CivitAI

    Very nice lora! It even improved my training lora.

    But for some reason 1-2 frames will blur for a while.

ESCANORS · May 23, 2025

    Is there a workflow that applies to T2V?

    Ada321
    Author
May 24, 2025 · 1 reaction

@ESCANORS Just plug it in normally. You can even use it with CFG just to need fewer steps. Try something like 0.2 weight with the same settings as usual but with about half the steps.
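As a rough sketch, the T2V tip above amounts to keeping CFG on, halving the step count, and loading the lora at a low weight. The helper below is hypothetical (not a real ComfyUI API), just illustrating the arithmetic:

```python
# Hedged sketch of the T2V tip above: keep CFG on, run roughly half the
# usual steps, and load the CausVid lora at a low strength (~0.2).
# causvid_t2v_settings is a hypothetical helper name, not a real API.

def causvid_t2v_settings(base_steps: int = 20, base_cfg: float = 6.0) -> dict:
    return {
        "lora_strength": 0.2,      # low weight so motion is preserved
        "steps": base_steps // 2,  # about half the usual step count
        "cfg": base_cfg,           # CFG stays on, unlike the cfg=1 recipes
    }

print(causvid_t2v_settings())  # {'lora_strength': 0.2, 'steps': 10, 'cfg': 6.0}
```

In ComfyUI these would just be the values you set on the LoraLoader and KSampler nodes.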

jarvan · May 23, 2025
    CivitAI

I used your workflow and everything works well. However, when I try to add other loras, they just don't seem to work, no matter how I adjust the steps and lora weight. Is there anything I'm missing?

shab987 · May 24, 2025 · 4 reactions
    CivitAI

I've been using the i2v model: 3~4 steps, CausVid 0.3~0.4, CFG 6~7, uni_pc simple, and then 2 steps, CausVid 0.4~0.5, CFG 1. It preserves lora motion and sharpness.
    kl_optimal doesn't work for me. I didn't use ModelSamplingSD3.
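The two-pass recipe above can be written down as plain data. This is just a sketch with hypothetical names; in ComfyUI it corresponds to two chained KSampler nodes, with the first sampler's LATENT output feeding the second sampler's latent_image input:

```python
# Sketch of the two-pass split described above (hypothetical helper).
# Pass 1: low CausVid weight with real CFG to establish motion;
# pass 2: slightly higher weight at CFG 1 to finish in a couple of steps.

def two_pass_plan() -> list[dict]:
    return [
        {"steps": 4, "cfg": 6.5, "causvid_strength": 0.35,
         "sampler": "uni_pc", "scheduler": "simple"},
        {"steps": 2, "cfg": 1.0, "causvid_strength": 0.45,
         "sampler": "uni_pc", "scheduler": "simple"},
    ]

plan = two_pass_plan()
print(sum(p["steps"] for p in plan))  # 6 steps total instead of ~20
```

The strength and CFG values are midpoints of the ranges quoted above; tune them per the thread.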

    Ada321
    Author
    May 24, 2025

You're right. That two-step workflow is the best I've seen so far.

tywho · May 24, 2025 · 5 reactions

    Would you mind sharing your workflow?

shab987 · May 25, 2025 · 2 reactions

@tywho The ComfyUI example Wan i2v workflow, and then two KSamplers, just like the SDXL refiner. You can check the ComfyUI SDXL refiner example to see how to use two samplers. I'm using default fp16 on a 3080 10GB; it can run up to 480p wide 5s or 720p wide 3s.

V2V like inpainting or controlnet doesn't require two samplers; just CFG 1.0 and 3~6 steps is sufficient.

bla · May 24, 2025 · 1 reaction
    CivitAI

    Can't find CausVidControl.

    ComfyUI-WanVideoWrapper is already on nightly :/

nrocka · May 24, 2025 · 1 reaction

    You need to add the custom node mentioned in the "New edit" part of the description of this model. It's this one: https://files.catbox.moe/1ff7xc.py

bla · May 25, 2025

@nrocka I didn't think that was it. Thanks!

7021319 · May 24, 2025 · 2 reactions
    CivitAI

In theory this is great; in practice, not so much. It kills the video part of the video lol. I'll keep trying it with different loras and settings, but I'm not impressed. I don't want "videos" with little to no motion. That's a picture.

    Ada321
    Author
May 25, 2025 · 3 reactions

Use it at a lower weight with CFG, then you can have the exact same motion with about half the steps; the quality loss is near imperceptible. A two-step workflow also fixes the issue with even fewer steps: do about 4 steps with full CFG at 0.3 weight or so, then do a second pass at 0.5-0.7 denoise for another 4 steps at CFG 1.
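For the second pass, a denoise below 1.0 means the sampler only runs the tail end of a longer schedule. As I understand ComfyUI's KSampler behavior (a sketch of the idea, not a guaranteed reproduction of its internals):

```python
# Rough sketch of how denoise < 1.0 maps to a partial second pass.
# With steps=4 and denoise=0.6, a ~6-step schedule is built and only
# the last 4 steps are run, so the first pass's structure is kept.

def second_pass_schedule(steps: int, denoise: float) -> tuple[int, int]:
    """Return (full_schedule_steps, steps_actually_run)."""
    full_schedule_steps = int(steps / denoise)
    return full_schedule_steps, steps

print(second_pass_schedule(4, 0.6))  # (6, 4)
```

This is why a higher denoise on the second sampler changes the output more: it re-runs a larger fraction of the noise schedule.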

missionarymaniac · May 25, 2025 · 3 reactions

@Ada321 Can you please share the workflow for those of us who aren't good at Comfy? It's just a drag and drop for you, but a lot more work for us. Thanks!

    Ada321
    Author
    May 26, 2025

@stringieee968 It's the workflow link; try 0.6-0.8 denoise on the 2nd sampler.

winifredslack61733 · May 25, 2025 · 3 reactions
    CivitAI

Is there any 2-sampler workflow example with the wrapper, not native nodes?

Lora_Addict · May 26, 2025

Just add a second one yourself. Connect it exactly like the existing one, but connect the latent_image input of the second one to the LATENT output of the first one.

slikvik55570 · Jun 5, 2025

@marqs89 I can't get this to work, and I've tried the advanced sampler. Loads of burn-in or corruption.

funscripter627 · May 26, 2025 · 6 reactions
    CivitAI

Am I crazy, or does the workflow below contain only one sampler instead of 2 like the text above indicates?

    https://files.catbox.moe/tnkcoz.json

Lora_Addict · May 26, 2025

    you are not crazy, it does only contain one

Lora_Addict · May 26, 2025 · 1 reaction

Just add a second one yourself. Connect it exactly like the existing one, but connect the latent_image input of the second one to the LATENT output of the first one.

Yourmomd · May 26, 2025

@marqs89 Could you send me the workflow? I can't figure it out, sorry.

taek75799 · May 26, 2025 · 3 reactions
qek · May 27, 2025 · 1 reaction

@funscripter627 The link below is incorrect (outdated); use this one: https://files.catbox.moe/rz55fd.json

mdkb · May 28, 2025 · 8 reactions
    CivitAI

What a mess. How about a workflow without all the Python coding nonsense that doesn't work anyway?

gambikules858 · May 29, 2025

How about the official workflow + this lora?

    Ada321
    Author
May 29, 2025 · 1 reaction

    "Python mess" You literally just drop the file into a folder but you don't need the node, its just to put all the settings in one node instead of having to change image size / seed / steps in several nodes every time. I updated back to kanji where he has custom nodes for the purpose for the posted workflow now if you don't want to do that.

    LORA
    Wan Video

    Details

    Downloads
    10,125
    Platform
    CivitAI
    Platform Status
    Available
    Created
    5/16/2025
    Updated
    5/15/2026
    Deleted
    -

    Files

    Wan21_CausVid_14B_T2V_lora_rank32_v2.safetensors

    Mirrors

    HuggingFace (40 mirrors)

    Wan21_CausVid_14B_T2V_lora_rank32.safetensors

    Mirrors

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.