
    2.2

    For I2V, this motion helper node is extremely useful:

    https://github.com/princepainter/ComfyUI-PainterI2V

    10/30: The high-noise lora was further refined.

    New I2V 1022 versions are out. They have by far the best prompt following / motion quality yet. (The lora key warning is fine; the files just contain extra modulation keys that ComfyUI does not use. It does not matter.)

    https://github.com/VraethrDalkr/ComfyUI-TripleKSampler

    T2V versions just got updated 09/28. It's probably still best to use a step or two with CFG and without the lora to establish motion with high noise as usual, like (see the sketch after the list):

    2 steps high noise without the low-step lora at 3.5 CFG

    2 steps high noise with lora and 1 CFG

    2-4 steps low noise with lora and 1 CFG
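
    A minimal sketch of how that split maps onto chained advanced-sampler passes (the TripleKSampler node linked above packages the same idea; the dict fields mirror ComfyUI's KSamplerAdvanced inputs, and the model/lora names here are placeholders, not exact file names):

    ```python
    # Illustrative only: the 2 + 2 + 2-4 step split described above, written
    # as three KSamplerAdvanced-style passes over one shared step schedule.
    # Chain each stage's latent output into the next stage's latent input.
    TOTAL_STEPS = 8  # assuming 2 + 2 + 4 steps

    stages = [
        # 1) High-noise model WITHOUT the lightx2v lora, real CFG:
        #    establishes large-scale motion.
        dict(model="wan2.2_high_noise", lora=None, cfg=3.5,
             steps=TOTAL_STEPS, start_at_step=0, end_at_step=2,
             add_noise="enable", return_with_leftover_noise="enable"),
        # 2) High-noise model WITH the lora, CFG 1.
        dict(model="wan2.2_high_noise", lora="lightx2v_high", cfg=1.0,
             steps=TOTAL_STEPS, start_at_step=2, end_at_step=4,
             add_noise="disable", return_with_leftover_noise="enable"),
        # 3) Low-noise model WITH the lora, CFG 1: finishes the detail.
        dict(model="wan2.2_low_noise", lora="lightx2v_low", cfg=1.0,
             steps=TOTAL_STEPS, start_at_step=4, end_at_step=TOTAL_STEPS,
             add_noise="disable", return_with_leftover_noise="disable"),
    ]
    ```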

    It's definitely a big improvement either way.

    T2V:

    Using their full 'dyno' model as your high-noise model seems best.

    "On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."

    https://huggingface.co/lightx2v/Wan2.2-Lightning/blob/main/Wan2.2-T2V-A14B-4steps-250928-dyno/Wan2.2-T2V-A14B-4steps-250928-dyno-high-lightx2v.safetensors


    2.1

    7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. The example is 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one also.

    I'm also putting up the rank-128 versions extracted by Kijai; they are double the size but slightly better quality.

    I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa

    No need for a 2-sampler WF anymore IMO. Just plug it into your normal WF with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement like before for image to video.
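
    Collected in one place, the single-sampler recipe above looks like this (field names are illustrative, not any specific node's inputs):

    ```python
    # The recommended single-pass I2V settings from the update above.
    i2v_settings = dict(
        lora_strength=1.0,   # this page's I2V lightx2v lora
        steps=4,
        cfg=1.0,
        sampler="lcm",
        shift=8,             # model sampling shift
    )
    ```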

    Full image-to-video Lightx2v model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models


    Old:
    lightx2v made a 14B self-forcing model that is a massive improvement compared to Causvid / Accvid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, lcm, 1 cfg, 8 shift; still playing with settings to see what is best.


    Please don't send me buzz or anything; if you want to support anyone, support the lightx2v team or Kijai.

    Comments (135)

    Finoo125 · Jun 16, 2025

    Very cool news. Is the self forcing lora also suitable for i2v?

    Ada321 (Author) · Jun 16, 2025 · 1 reaction

    That is what I've been using it for, apparently works for VACE as well, not sure about phantom.

    funscripter627 · Jun 16, 2025

    @Ada321 Phantom works with this lora too

    jasonccc · Jun 16, 2025

    What are the suggested steps for self-forcing 14B? Any need for 2 samplers? Thanks

    funscripter627 · Jun 16, 2025

    It's in the description: "Example above was generated in about 35 seconds on a 4090 using 4 steps, lcm, 1 cfg, 8 shift."

    jasonccc · Jun 16, 2025

    @funscripter627 Yes, I can see it, but that's just an example; I wanna know the recommended steps for this lora.

    DigitalGarbage · Jun 16, 2025

    I don't get good results with i2v when using the Self-Forcing lora; everything is too noisy. As you said: lcm, 4 steps, 1 cfg, 8 shift. What could I possibly be doing wrong?

    Ada321 (Author) · Jun 16, 2025 · 2 reactions

    Still testing everything myself; those were the first set of settings people seemed to be getting good results with, and what I made the example image with. Kijai now also seems to be having good results with dpm++/sde and custom sigmas 1.000, 0.9121, 0.7480, 0.0039 so far, but it's brand new and everyone is still testing stuff out. https://files.catbox.moe/sdu9eu.mp4
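
    For reference, a sampler that accepts explicit sigmas (SamplerCustom-style) would take that schedule roughly like this; the trailing 0.0 is my assumption, following the usual convention that N+1 sigma values give N denoising steps:

    ```python
    import torch

    # Kijai's reported schedule (with dpm++/sde); the final 0.0 is assumed,
    # making this a 4-step schedule that ends fully denoised.
    custom_sigmas = torch.tensor([1.0000, 0.9121, 0.7480, 0.0039, 0.0])
    ```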

    funscripter627 · Jun 16, 2025 · 2 reactions

    I'm having better results staying with uni_pc, 6 steps and 9 shift. lcm gets me some weird movement.

    DigitalGarbage · Jun 16, 2025 · 1 reaction

    Yeah, it seems like Kijai's wrapper itself is spoiling all the fun. Running native WAN results in actually good outputs somehow.

    Ada321 (Author) · Jun 16, 2025

    @DigitalGarbage Really? I have not tried native with it so far, guess I need to.

    DigitalGarbage · Jun 16, 2025

    @Ada321 Yeah, just the native Comfy WanImageToVideo and that's it. 1 cfg; you can set shift or not, I can't see any difference. Almost any sampler works.

    And by the way, I am loading it through the Power Lora Loader from rgthree; maybe that somehow matters.

    funscripter627 · Jun 16, 2025 · 1 reaction

    Getting some good results with lcm and native nodes now too. Shift seems to have a way bigger influence than I'm used to (or I just never realized lol) I'm testing with the Phantom base gguf model btw.

    DigitalGarbage · Jun 16, 2025

    Also, I don't recommend using the reward lora, since it somehow replaces the original faces and bodies.

    RedditUser981 · Jun 16, 2025

    kindly please share workflow

    DigitalGarbage · Jun 16, 2025 · 1 reaction

    Ada321 (Author) · Jun 16, 2025 · 2 reactions

    @DigitalGarbage Only if you use it at a high weight, it works well at 0.4-0.5 without that effect in my usage and it really does help with prompt following.

    DigitalGarbage · Jun 16, 2025

    @Ada321 I'm using it with native WAN at 1.0 with no problems. Literally, almost no sampler except dpm-based ones causes trouble in generations, whether 4, 6 or 8 steps. The simple scheduler is our god and savior.

    Also, you could try out my set of nodes, there is a prompt enhancer with an ability to write system prompts and set top_k, top_p, temperature and max tokens: https://github.com/olivv-cs/ComfyUI-FunPack

    osakadon · Jun 17, 2025

    Wow, I'm getting lost now. So what combination of Wan 2.1 model and lora should I be using?
    The FusionX Wan model with the LightX2v lora?

    fronyax · Jun 17, 2025

    @osakadon Just use the base WAN T2V/I2V model with a self-forcing LoRA; it's already better than CausVid+AccVid combined.

    FusionX, while good, already has several speed loras merged in (CausVid, AccVid, MPS, and MoviiGen), so you don't want to use it with a self-forcing lora; it's already too diluted with speed loras, imo.

    osakadon · Jun 17, 2025

    @fronyax Do you have a recommended workflow to use with FusionX? I'm new to Wan and video generation.

    Choco7172 · Jun 17, 2025

    @osakadon If you're okay with not using ComfyUI and you just want to try AI video gen right now without much hassle, maybe try WanGP by DeepBeepMeep (google it). It has most of the best video gen models out there, all in a simple Gradio interface. The easiest way to install is via the Pinokio app. It's called "Wan 2.1" there.

    osakadon · Jun 17, 2025

    @Choco7172 I'd ultimately like to get comfortable with ComfyUI, but I'll check out your suggestion too.
    So if I want to use FusionX, can I just find a simple gguf workflow that only needs the video model and no lora, and I should be good to go?

    Choco7172 · Jun 17, 2025

    @osakadon You mean for ComfyUI or WanGP? If ComfyUI, I'm afraid I'm not the best person to answer that question because I've never used it before :X But WanGP itself added support for FusionX (just a few days ago). It also supports AccVid, CausVid, and most LoRAs (like maybe 99.99% of all LoRAs shared here on CivitAI). All without the need to find or install any workflows: just install Pinokio, install WanGP, and it's good to go (it will download the model automatically the first time you try to generate a video with a particular model though, so don't be surprised if it looks like it's not running). WanGP sadly doesn't support GGUF at the moment, but most models can run on as little as 6GB VRAM (the name WanGP refers to "Wan GPU Poor", so yeah, it's aimed at anyone with less than 12GB VRAM). The dev is very active; he updates a few times a week or so when new models or LoRAs are released.

    osakadon · Jun 18, 2025

    @Choco7172 I tried to install WanGP (without Pinokio), but when I ran it, it gave me an error. I tried to install it again: same error. Got frustrated and gave up on it.

    NW666 · Jun 27, 2025

    @DigitalGarbage could you kindly share the links for the other loras you are using in the Power Lora Loader? The usual_lora_booster, Wan14B_RealismBoost, DetailEnhancerV1, usual_lora-v3, Wan2.1-Fun-14B-InP-MPS.

    Thanks :)

    bla · Jun 16, 2025 · 2 reactions

    I always run out of memory with those workflows. If anyone can make it work on 16GB, let me know.

    ukidjp515 · Jun 17, 2025

    If you use Kijai's version, connect block_swap_args of the WanVideo BlockSwap node to the WanVideo Model Loader node and set blocks_to_swap on the WanVideo BlockSwap node to 20-40. A larger value requires less VRAM (but it requires enough main memory).

    Or connect the WanVideo VRAM Management node to the Model Loader node.
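
    For intuition, this is roughly what block swapping does under the hood (a minimal sketch of the idea, not Kijai's actual implementation):

    ```python
    import torch

    def forward_with_block_swap(blocks, x, blocks_to_swap):
        """Run transformer blocks, keeping the first `blocks_to_swap` of
        them in CPU RAM and moving each onto the GPU only while it runs."""
        for i, block in enumerate(blocks):
            swapped = i < blocks_to_swap
            if swapped:
                block.to("cuda")   # pull this block into VRAM
            x = block(x)
            if swapped:
                block.to("cpu")    # evict it again to free VRAM
        return x
    ```

    Hence the trade-off: a larger blocks_to_swap lowers peak VRAM but costs transfer time and main memory.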

    skyrimer3d · Jun 21, 2025

    I have 16GB of VRAM too; try setting virtual_vram_gb to 0.0 in the latest workflow. To my surprise, this helped me avoid OOM.

    compo6628585 · Jun 16, 2025 · 2 reactions

    Thx for keeping us updated on all these relevant new models/loras, ada! This is cutting-edge stuff. I'm trying one sampler atm and getting inconsistent results: it's either burning the image or it's fuzzy. Trying different samplers and lora strengths. If anyone comes up with a good combo, please post it here. Thx

    Update: Yeah, this has big motion issues (or is it cfg guidance issues?) like causvid v1 had. Hopefully Kijai can update to a version 2 (like he did with causvid), as this lora is brilliant for speed, much better than causvid. For now I'm still testing with two samplers, but I may end up going back to causvid v2, which has much better motion.

    compo6628585 · Jun 16, 2025 · 1 reaction

    I'd say a good place to start with one sampler is: lora strength 1, 8 steps, sampler euler_ancestral, scheduler beta.

    I get somewhat OK results with these.

    UPDATE: same settings as before, but using sampler dpmpp_2m and the sgm_uniform scheduler. Starting to look much better!

    RedditUser981 · Jun 16, 2025

    can you share your workflow

    compo6628585 · Jun 16, 2025

    @kumarkishank959811 I use a slightly self-modified version of this RunPod template: https://civitai.com/models/1317373/runpod-wan-21-img2video-template-comfyui

    MankyPoodle · Jun 16, 2025

    Any recommendations for using this on a 4070 Ti with 12GB VRAM? I was able to get all the prereqs installed. I also linked in the block swapping. No errors, but renders won't finish.

    I get to the render stage and I'm stuck at 0/4 steps. GPU usage is almost 100% and VRAM usage is almost 100% as well.

    Do I need to adjust the block swapping?

    funscripter627 · Jun 16, 2025

    Yes, with 12GB you need to block swap. I have a 4080 and I swap around 20 blocks.

    blo01 · Jun 16, 2025

    Disable the TorchCompileModel node if you use the native WF.
    I get stuck with it as well, idk why; both inductor and cudagraph get stuck.
    Kijai's workflow seems to work better.

    MankyPoodle · Jun 17, 2025

    @funscripter627 I'm using block swap. Any recommended settings for it? Are you using the default 10?

    MankyPoodle · Jun 17, 2025

    @blo01 I turned off that node - no difference :(

    elis_ts · Jun 16, 2025

    Cannot download workflows for Self-Forcing.

    Ada321 (Author) · Jun 16, 2025 · 1 reaction

    right click and save as

    elis_ts · Jun 16, 2025

    @Ada321 thanks. I had to switch browsers; LibreWolf would not allow the download.

    LovelaceA · Jun 17, 2025 · 5 reactions

    So Causvid/Accvid is in the past now... God, this evolves fast...

    RedditUser981 · Jun 19, 2025

    share your fast gguf workflow, including the loras as well

    slikvik55570 · Jun 17, 2025 · 2 reactions

    Hold on... so we're not using FusionX anymore already? Use this instead?

    NeoAnthropocene · Jun 17, 2025

    Thanks for sharing.

    It also works for start and end frames on Kijai's wrapper with the block-swapping method @ 576x1024, 81 frames. It took less than 2 minutes with 16GB VRAM and 96GB DRAM.

    LSP · Jun 17, 2025 · 2 reactions

    Can you share the workflow with end frame? Much appreciated.

    Walternate · Jun 18, 2025

    Would love to see your workflow!

    CyberAImania · Jun 17, 2025 · 1 reaction

    Apologies if this is a dumb question, but what base model should I use with this LoRA? My setup is an RTX 4090 with 96GB DDR5 RAM.

    espinozaa · Jun 17, 2025 · 1 reaction

    Download a workflow and drag it into ComfyUI. It will show all missing nodes and the required models. Download the specified models from civitai and place them into the required model folders.

    itasky · Jul 21, 2025

    @espinozaa where to get the workflow?

    afterclass · Jun 17, 2025 · 3 reactions

    4090 24GB, workflow: https://files.catbox.moe/nj8aid.json (prompt executed in 419.59 seconds)

    7989930 · Jun 18, 2025

    I'm getting a KSampler "Triton not found" error even though ComfyUI's Python says it's installed. Any ideas?

    houdh235914 · Jun 17, 2025 · 1 reaction

    I have a glow effect on my video. Why is this happening? (((

    bhopping · Jun 18, 2025 · 1 reaction

    Is it artifacts? If so, that usually happens at 6 steps. Also try the causvid lora strength somewhere between 0.2-0.5 and you could try a higher CFG like 6 instead of 1?

    theinternetspeaks671 · Jun 20, 2025 · 1 reaction

    Clearing VRAM after each generation helped me. Or use the new template, which has it integrated.

    RedditUser981 · Jun 17, 2025 · 4 reactions

    anybody here with 6gb vram, kindly please share your workflow for this one

    mistporyvaev · Jun 17, 2025 · 5 reactions

    sorry I have 4gb vram only 🤭

    shab987 · Jun 18, 2025

    With torch.compile, 832x480 @ 5s only uses 6-7GB VRAM. Fp16 model, the 28.6GB one. I can run 720p @ 5s with 10GB VRAM.

    With 64GB RAM there will be some offloading to SSD; it only affects the first step or so. 96GB is the new bar.

    Native nodes.

    mistporyvaev · Jun 18, 2025

    @shab987 I generate 49 frames 480x720 videos with 4gb vram and 32gb ram via ComfyUI. It's pretty slow of course 🫠

    skyrimer3d · Jun 17, 2025

    Got an OOM message when I was at 360/731 during "Loading model and applying LoRA weights" (!!! Exception during processing !!! Allocation on device / torch.OutOfMemoryError: Allocation on device). Have a decent rig with 32GB RAM / 16GB VRAM; is this normal? Any help?

    funscripter627 · Jun 17, 2025 · 1 reaction

    Yes, you need to swap some blocks or lower the resolution and length. There should be a blockswap node near the model loader. I swap around 20 blocks with my 12GB VRAM and 32GB RAM.

    yorgash · Jun 17, 2025 · 1 reaction

    Also, length (frame count) directly multiplies the amount of RAM needed:
    you might be able to run it at 61-81 frames, but even with 80GB you can't run it at 400 frames :)
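
    Rough arithmetic behind that, assuming Wan 2.1's 4x temporal / 8x spatial VAE compression and 2x2 spatial patching (my assumptions about the architecture, for illustration):

    ```python
    # Latent token count grows linearly with latent frames; if attention is
    # full, its cost grows quadratically in tokens, which is why long videos
    # blow past RAM/VRAM so quickly.
    def latent_tokens(width, height, frames):
        lat_frames = (frames - 1) // 4 + 1                   # 4x temporal compression
        return lat_frames * (height // 16) * (width // 16)   # 8x VAE * 2x patch

    print(latent_tokens(832, 480, 81))    # 32760 tokens
    print(latent_tokens(832, 480, 400))   # 156000 tokens, ~5x more
    ```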

    bhopping · Jun 18, 2025 · 2 reactions

    I had the same issue until adding the blockswap node. HIGHLY RECOMMEND for no OOM. I set mine to 40. All you gotta do is add it in between your model loader and lora loader. After doing so, I can now run the 720p q8 model (16GB VRAM btw) instead of the 480p q4. I can go all the way up to 720x1280 w/o OOM with 10-minute generations, but I like sticking to semi-low res and upscaling later just for faster gen times. I'm also using other optimizations too.

    skyrimer3d · Jun 18, 2025

    @bhopping thanks ill try what you say!

    skyrimer3d · Jun 18, 2025

    @funscripter627 Thanks i'll try that.

    skyrimer3d · Jun 18, 2025

    @yorgash lol yeah, i'll keep frames reasonable then

    DJLegends · Jun 17, 2025

    Can anyone explain to me why the latest lora cannot utilize nsfw loras with 2D material?
    I tried using the NSFW Fix lora, which doesn't work, and the latest attached workflow.

    DJLegends · Jun 17, 2025

    if the lora isn't trained for 2D that would make sense, but the example above is 2D xD

    houdh235914 · Jun 18, 2025

    I add "Apply RifleXRoPE WanVideo" to increase the animation time, but there are various glitches in the movements. How else can I increase the time from 5 seconds to 10, for example?

    Lora_Addict · Jun 18, 2025 · 3 reactions

    Wow, this is developing so fast! Self-Forcing is amazing! I use this in SwarmUI with the WAN 2.1 base model: super easy, 5 steps, around 2 minutes generation time, great results. Movement also seems way better than with CauseVid.

    Lora_Addict · Jun 18, 2025 · 1 reaction

    It's CRAZY good for i2v! Thank you so much!
    Movement from the first frame, it's actually doing almost exactly what I say in the prompt, fast generation, decent quality... that's a game changer for me!

    amazingbeauty · Jun 18, 2025

    2 min on which GPU?

    Vyxen808 · Jun 18, 2025 · 1 reaction

    do u have a workflow for using basic Wan 2.1 and this new self-forcing lora?

    TheFunk · Jun 18, 2025 · 1 reaction

    Hi, would you be so kind as to share a full list of what you're using? I'm getting great results from the FusionX model and lora in SwarmUI, but everything I try with Lightx2v is garbage: just awful quality, noise and artifacts everywhere. Please let me know what base model you're using (quantised?), sampler and scheduler, steps, sigma shift, and any other settings I might be missing. I thought I had a good handle on SwarmUI, but Lightx2v just won't play ball with me. I'm on 16GB VRAM, but FusionX is looking stunning on that. Any help hugely appreciated.

    Lora_Addict · Jun 19, 2025

    @amazingbeauty 4090

    Lora_Addict · Jun 19, 2025

    @Vyxen808 i don't use workflows, i use SwarmUI :) 

    Lora_Addict · Jun 19, 2025

    @TheFunk 
    I use the wan2.1-i2v-14b-480p-Q4_K_M.gguf model

    Sampler UniPC / Scheduler Simple

    Steps 5

    CFG 1

    Sigma Shift: I don't even know what that is or if I can set it in SwarmUI :D

    The end result quality seems very dependent on the input image quality for me.

    dulburis · Jun 18, 2025 · 4 reactions

    You forgot to add "purge vram" nodes to prevent video degradation.

    Sobsob_ · Jun 18, 2025

    How does that prevent video degradation? Just looking at the names, it looks more like preventing OOM exceptions.

    dulburis · Jun 18, 2025

    @Sobsob_ If you generate many videos without restarting ComfyUI, the generated videos start to ignore the starting image and then just become full of green artifacts.

    "Clean VRAM Used" and "Purge VRAM" nodes prevent that from happening.
    I've seen those nodes used in other workflows.
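
    Such nodes essentially boil down to the standard PyTorch cleanup calls (a minimal sketch of what "purge VRAM" style nodes do, not any specific node's source):

    ```python
    import gc

    import torch

    def purge_vram():
        gc.collect()                  # drop dangling Python references first
        torch.cuda.empty_cache()      # return cached allocator blocks to the driver
        torch.cuda.ipc_collect()      # reclaim memory held by CUDA IPC handles
    ```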

    Sobsob_ · Jun 18, 2025 · 1 reaction

    @dulburis oh ok thanks, i'll use that too then.

    sniperboss38 · Jun 20, 2025

    @dulburis May I ask where in the workflow you should place these nodes?

    4458749 · Jun 23, 2025

    Complete nonsense, fix your shit... It probably has something to do with using the WanVideoWrapper, which is terrible. Just use --lowvram --reserve-vram 1.0 as startup options and use native nodes. WanVideoWrapper nodes are slow, and if it OOMs it does not clear the VRAM and you have to restart Comfy.

    Wideanon · Jun 19, 2025 · 3 reactions

    I tried self-forcing with other loras, but the movement is very stiff and lacking. It's very clear if you compare just Wan+loras vs Wan+lightx2v+loras.

    Is there a workaround? Lowering lightx2v below 0.7 seems to kill quality.

    Ada321 (Author) · Jun 19, 2025

    Try out the new 2-sampler WF I just posted. Still working on the best values; it might need its denoise tweaked or another step added at the end.

    Wideanon · Jun 24, 2025

    I tried your new workflow, but the key takeaway is that genning at 24fps gives much better results, except then we get a 3-second vid instead of 5 seconds. I suspect self-forcing was distilled at 24fps.

    Gen speed is indeed better, but movement is worse. I don't know if self-forcing is a net gain.

    wewewew · Jun 20, 2025 · 4 reactions

    I get way better results (anime i2v) with just normal euler beta: 2 steps at cfg 3 and 3 steps at cfg 1, shift 8. Only the lightx lora at 0.8 strength (other content loras can be added, just not other accel loras).

    The LCM sampler is okay, but usually euler is better. The flowmatch scheduler is wonky. Also, I don't think adaptive guidance does anything if you have it set to cfg 1, since its purpose is to change to cfg 1 at a certain threshold (which it doesn't seem to hit on WAN), but maybe you know something extra it does.

    keyblade · Jun 26, 2025

    can you share the workflow? I don't know how to do the magic "2 steps at cfg 3 and 3 steps at cfg 1"

    wewewew · Jun 27, 2025 · 1 reaction

    @keyblade my current workflow: https://files.catbox.moe/ar35ny.json

    example: https://files.catbox.moe/co4oqk.mp4

    I'm currently testing NAG instead of CFG, the proper nodes just came out. There's a toggle button to use CFG instead.

    I cleaned up the workflow in the json and removed some optional nodes you might not want installed (they need manual tweaking, or the newest version doesn't work): one is a faster upscaler using waifu2x, and the other just forces the "before upscale" video to actually output before the upscale, which randomly doesn't happen otherwise. If you want them, the workflow in the example video has them.

    keyblade · Jun 27, 2025

    @wewewew Thank you very much! I will study it carefully

    p1042779030337 · Jun 21, 2025

    The latest workflows are deadly slow and give ugly results.

    The best workflow is still dy4s8g.

    yugmotogx399 · Jun 21, 2025

    Could you provide the link?

    hazzoom82659 · Jun 23, 2025 · 1 reaction

    @p1042779030337 Actually, that workflow dy4s8g is giving me out-of-memory errors. I have two cards installed & replaced a few nodes for a multi-GPU workflow: an RTX 4080 16 GB & a GTX 1060 6 GB. Maybe this workflow needs more than 24 GB, or I am doing something wrong here.

    p1042779030337 · Jun 24, 2025 · 1 reaction

    Well, that sounds strange. I'm using it on a T4 with 15GB, and it's also pretty tweakable. And I see now that OP put it back on the front page.

    hazzoom82659 · Jun 24, 2025

    @p1042779030337 I investigated further & learned some new things I never imagined!! In my case, having this combo of two different generations of Nvidia cards is a situation PyTorch can't deal with well (the workflow you posted should work OK in other scenarios, but people like me need a few precautions, like offloading/shifting only lightweight tasks to the 2nd GPU (like CLIP, CLIPVision) & using the VAE Decode (Tiled) node instead of the standard VAE Decode node).

    With that, the dy4s8g workflow came out responsive & working & reasonably fast. So yeah, some workflows are indeed made well to handle the overload & some are just resource-hungry without full benefit, & some multi-GPU setups are not 100% OK with PyTorch if the cards are not from close generations/series.

    J1B · Jun 25, 2025

    @p1042779030337 Do you know where the git repository for the FlowMatchingSigmas node in this workflow is? ComfyUI Manager is not detecting it in the missing-node search, and Google isn't helping either.

    Mu5hr00moO · Jun 25, 2025 · 1 reaction

    i recommend editing the node file and changing max shift to 20 or 30

    wlmsg · Jun 21, 2025 · 4 reactions

    Good at 480p i2v, but not 720p. I used wan2.1_i2v_720p_14B_fp8_scaled. And it works very well with VACE at 720p.

    skyrimer3d · Jun 21, 2025 · 3 reactions

    Movement becomes really stiff to non-existent with higher frame counts. I tried your trick and set 6.0 sigma_max with 100 shift, but this produced a very bright and washed-out video, although with much better movement. Any ideas?

    flo11ok874 · Jun 22, 2025

    Use the FusionX lora with it (or, if you want full control, the FusionX-ingredient loras, and change the strength of MPS or the other nodes).

    skyrimer3d · Jun 22, 2025

    @flo11ok874 interesting i'll try that

    Sobsob_ · Jun 22, 2025

    Use a 2-step workflow: first part without lightx2v for movement, then upscale with lightx2v + the WanFun controlnet.

    skyrimer3d · Jun 22, 2025

    @Sobsob_ My ability to make workflows is limited; besides, I've found "FusionX_Ingredients_Workflows" on civitai, which has great movement and is quite fast, with no OOM for me using the gguf version. So for now I'll wait and see if these self-forcing WFs are actually going anywhere; speed is not everything if the result is not worth it.

    skyrimer3d · Jun 22, 2025

    @flo11ok874 It gave a bit more movement using Wan2.1_I2V_14B_FusionX_LoRA.safetensors, but still rather stiff; however, the image colors were very saturated, so not worth it imho.

    flo11ok874 · Jun 22, 2025 · 2 reactions

    @skyrimer3d That's why I told you to try the FusionX ingredients (the workflow has 5 loras with download links; these 5 loras used to be merged into FusionX, and when you have each one on a single node you can change each strength for better results). Also, you can try the LightX2v lora @ 1.0 or 0.8 + the FusionX lora @ 0.3 or 0.4.

    Slavrix · Jun 23, 2025

    How do you get this working on a 24GB card?
    I keep getting out-of-memory errors using the 14B t2v model with the self-forcing lora.

    2027rf · Jun 24, 2025 · 1 reaction

    I have the same 24 GB. Try using "wan2.1-i2v-14b-480p-Q8_0.gguf".

    Eternal · Jun 26, 2025

    @2027rf You are using it on I2V and it works? that's great, I only use WAN with I2V.

    SAY_AI · Jun 23, 2025 · 1 reaction

    It seems that kijai's WanVideoWrapper cannot use the 2 sampler workflow.

    0l1v1aR0551 · Jun 23, 2025

    this LORA is fucking amazing!!!

    made my own WF ... ;)

    osakadon · Jun 24, 2025

    Please share, because I keep getting errors with this one.

    0l1v1aR0551 · Jun 24, 2025 · 1 reaction

    @osakadon I will create an article with it soon! Follow me to get it

    J1B · Jun 25, 2025

    Do you use it with FusionX Wan or just the original Wan 2.1?

    0l1v1aR0551 · Jun 25, 2025

    @J1B OG WAN 2.1 Q-8

    flo11ok874 · Jun 25, 2025

    @osakadon Just use the standard ComfyUI Wan + lora workflow with native nodes. Add Lightx2v as a lora and that's it. Sampler LCM, 4 steps, length 81.

    0l1v1aR0551 · Jun 25, 2025 · 3 reactions

    @flo11ok874 yes and no - the sampler setup from the description is VERY good!!! do add it to your "regular" WF instead ;)

    Frosty_Nectarine2413231 · Jun 26, 2025

    Can you use self forcing on I2V? Thanks

    Eternal · Jun 26, 2025

    Did you test it for I2V? Does it work?

    ShubzWorld · Jun 26, 2025

    @Eternal yes it works

    Papahoy · Jun 26, 2025

    The workflow is damaged; flowmatch... is not available.

    zoroofcalls378 · Jun 27, 2025

    I don't get it. Does this workflow just have 1 sampler, or am I blind? https://files.catbox.moe/dy4s8g.json

    qek · Jun 28, 2025

    Of course! The second sampler node is connected to nothing! I think the uploader has been trolling

    Ada321 (Author) · Jun 28, 2025 · 1 reaction

    @2P2 I apparently uploaded the wrong one and have been busy with other stuff for a while. I changed it.

    zoroofcalls378 · Jun 29, 2025

    @Ada321 Yes, this is the one I was using before. This is the better version.

    3dasdman · Jun 28, 2025

    i have replaced causvid v2 with the self-forcing lora in my workflow. movement and action are much better, but the realism of characters is worse.
    is it possible to get only the part for how actors behave?

    BinaryBottleBake · Jul 1, 2025

    Am I doing something wrong? I'm using the suggested workflow, and it's working as far as generating videos goes, but the videos quickly get very strange, with random colors everywhere.

    zoroofcalls378 · Jul 2, 2025

    When using a different workflow, like Simple TXT to Video, with the settings recommended here, I get videos with a lot of noise and bright colors.

    felipe781 · Jul 6, 2025

    I can't get this lora to do anything. I get the speed-up, but in return I get almost no motion whatsoever with the recommended settings. If I use the txt2vid model with self-forcing, it works perfectly 🤔

    Ada321 (Author) · Jul 7, 2025

    reauvenialon467 · Jul 8, 2025

    so basically it speeds up the generation time?

    gambikules858 · Jul 11, 2025

    yes, because of the 4 steps and 1 cfg.

    cloudreadypc · Jul 14, 2025

    I'm using VACE with a ref image and video. With a Lightx2v weight of 0.7, people are unrecognizable (i.e. the face changes to another person). To keep identity consistent, a Lightx2v weight of 1.0 is needed, but that results in over-saturated and noisy videos. For a saving of 2 steps (6 steps vs 8 steps), Causvid seems to handle the job better for VACE.

    blobby99 · Jul 16, 2025

    Mess with samplers and schedulers. Let's just say I discovered the "recommended" ones were ruining the video quality with accelerator loras. So much time wasted on more faulty "common knowledge". A colormatch node off your ref image should fix a lot of colour issues (though not the ones caused when the model is going crazy).

    LORA · Wan Video 14B t2v

    Details

    Downloads: 6,673
    Platform: CivitAI (available)
    Created: 6/16/2025
    Updated: 5/15/2026

    Files

    Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors

    Mirrors

    HuggingFace (65 mirrors)