2.2
For I2V this motion helper node is extremely useful:
https://github.com/princepainter/ComfyUI-PainterI2V
10/30: The high-noise lora was further refined.
New I2V 1022 versions are out. They have by far the best prompt following and motion quality yet. (The lora key warning is fine; the file just contains extra modulation keys that ComfyUI does not use. It does not matter.)
https://github.com/VraethrDalkr/ComfyUI-TripleKSampler
T2V versions were updated 09/28. It's probably still best to use a step or two with CFG and without the lora to establish motion in the high-noise phase as usual, e.g. (see the sketch after this list):
2 steps high noise, without the low-step lora, at 3.5 CFG
2 steps high noise, with the lora, at 1 CFG
2-4 steps low noise, with the lora, at 1 CFG
It's definitely a big improvement either way.
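For illustration, here's a minimal Python sketch (not an official workflow; the `Stage` helper is made up for readability) of how those three stages could be laid out over a single shifted sigma schedule. The shift remap is the standard flow-matching formula sigma' = shift*sigma / (1 + (shift-1)*sigma), and the step boundaries follow the recipe above.

```python
from dataclasses import dataclass

def shifted_sigmas(n_steps: int, shift: float = 8.0) -> list[float]:
    """Evenly spaced sigmas from 1.0 down to 0.0, remapped by the
    flow-matching shift: sigma' = shift * sigma / (1 + (shift - 1) * sigma)."""
    sigmas = [1.0 - i / n_steps for i in range(n_steps + 1)]
    return [shift * s / (1.0 + (shift - 1.0) * s) for s in sigmas]

@dataclass
class Stage:  # hypothetical helper, not a ComfyUI node
    model: str              # "high_noise" or "low_noise" Wan 2.2 expert
    lora: bool              # whether the step-distill lora is applied
    cfg: float
    steps: tuple[int, int]  # [start, end) indices into the full schedule

TOTAL_STEPS = 7  # 2 + 2 + 3
plan = [
    Stage("high_noise", lora=False, cfg=3.5, steps=(0, 2)),  # establish motion
    Stage("high_noise", lora=True,  cfg=1.0, steps=(2, 4)),
    Stage("low_noise",  lora=True,  cfg=1.0, steps=(4, 7)),
]

sigmas = shifted_sigmas(TOTAL_STEPS)
for st in plan:
    seg = [round(s, 4) for s in sigmas[st.steps[0]: st.steps[1] + 1]]
    print(f"{st.model:10s} lora={st.lora!s:5s} cfg={st.cfg}: sigmas {seg}")
```

The TripleKSampler node linked above presumably automates this kind of split; the sketch is just to make the staging explicit.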
T2V:
Using their full 'dyno' model as your high-noise model seems best.
"On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."
2.1
7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. Example is 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one as well.
I'm also putting up the rank 128 versions extracted by Kijai; they are double the size but slightly better quality.
I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa
No need for a 2-sampler WF anymore, IMO. Just plug it into your normal WF with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement like before for image to video.
Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models
Old:
lightx2v made a 14B self-forcing model that is a massive improvement compared to CausVid/AccVid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; still playing with settings to see what works best.
Please don't send me buzz or anything, give the lightx2v team or kijai support if anyone.
Comments
Very cool news. Is the self forcing lora also suitable for i2v?
That's what I've been using it for; apparently it works for VACE as well, not sure about Phantom.
@Ada321 Phantom works with this lora too
What are the suggested steps for self-forcing 14B? Any need for 2 samplers? Thanks
It's in the description: "Example above was generated in about 35 seconds on a 4090 using 4 steps, lcm, 1 cfg, 8 shift."
@funscripter627 Yes, I can see it, but that's only an example; I want to know the recommended steps for this lora.
I don't get good results with i2v when using the Self-Forcing lora; everything is too noisy. As you said: LCM, 4 steps, 1 CFG, 8 shift. What could I possibly be doing wrong?
Still testing everything myself; those were the first settings people seemed to be getting good results with, and what I made the example image with. Kijai also seems to be having good results with dpm++/sde and custom sigmas 1.000, 0.9121, 0.7480, 0.0039 so far, but it's brand new and everyone is still testing things out. https://files.catbox.moe/sdu9eu.mp4
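If you want to try those custom sigmas, here's a minimal sketch of building them as a tensor (a trailing 0.0 is the conventional terminal sigma so the final step denoises fully; how you feed it in depends on which custom-sigma node you use):

```python
import torch

# Kijai's hand-tuned 4-step schedule quoted above, with 0.0 appended as the
# terminal sigma. Pass this to a custom-sampler node that accepts SIGMAS.
custom_sigmas = torch.tensor([1.0000, 0.9121, 0.7480, 0.0039, 0.0])
print(custom_sigmas)
```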
I'm having better results staying with uni_pc, 6 steps and 9 shift. lcm gets me some weird movement.
Yeah, it seems like Kijai's wrapper itself is spoiling all the fun. Running native WAN results in actually good outputs somehow.
@DigitalGarbage Really? I have not tried native with it so far, guess I need to.
@Ada321 Yeah, just native Comfy WanImageToVideo and that's it. 1 cfg, you can set shift or don't, I can't see any difference. Almost any sampler works.
And by the way, I am loading it through the Power Lora Loader from rgthree; maybe that somehow matters.
Getting some good results with LCM and native nodes now too. Shift seems to have a way bigger influence than I'm used to (or I just never realized, lol). I'm testing with the Phantom base gguf model, btw.
Also, I don't recommend using the reward lora, since it somehow replaces the original faces and bodies.
kindly please share workflow
@kumarkishank959811 https://files.catbox.moe/gufe6r.json
@DigitalGarbage Only if you use it at a high weight, it works well at 0.4-0.5 without that effect in my usage and it really does help with prompt following.
@Ada321 I'm using it with native WAN at 1.0 with no problems. Literally the only samplers causing trouble in generations are dpm-based ones, whether at 4, 6, or 8 steps. The simple scheduler is our god and savior.
Also, you could try out my set of nodes, there is a prompt enhancer with an ability to write system prompts and set top_k, top_p, temperature and max tokens: https://github.com/olivv-cs/ComfyUI-FunPack
Wow, I'm getting lost now. So what combination of Wan 2.1 model and what lora should I be using?
Fusion X Wan model with LightX2v lora?
@osakadon Just use the base WAN T2V/I2V model with a self-forcing LoRA; it's already better than CausVid+AccVid combined.
FusionX, while good, already has several speed loras merged in (CausVid, AccVid, MPS, and MoviiGen), so you don't want to stack a self-forcing lora on top; it's already too diluted with speed loras, imo.
@fronyax Do you have a recommended workflow to use with FusionX? I'm new to Wan and video generation.
@osakadon If you're okay with not using ComfyUI and you just want to try AI video gen right now without much hassle, maybe try WanGP by DeepBeepMeep (google it). It has most of the best video gen models out there, all in a simple Gradio interface. The easiest way to install it is via the Pinokio app. It's called "Wan 2.1" there.
@Choco7172 I'd ultimately like to get comfortable with ComfyUI, but I'll check your suggestion too.
So if I want to use FusionX, can I just find a simple gguf workflow that only needs the video model and no lora, and I should be good to go?
@osakadon You mean for ComfyUI or WanGP? If ComfyUI, I'm afraid I'm not the best person to answer that question, because I've never used it :X But WanGP itself added support for FusionX just a few days ago. It also supports AccVid, CausVid, and most LoRAs (maybe 99.99% of all LoRAs shared here on Civitai), all without the need to find or install any workflows: just install Pinokio, install WanGP, and you're good to go. (It will download the model automatically the first time you try to generate a video with a particular model, though, so don't be surprised if it looks like it's not running.) WanGP sadly doesn't support GGUF at the moment, but most models can run on as little as 6GB VRAM (the name WanGP refers to "Wan GPU Poor", so yeah, it's aimed at anyone with less than 12GB VRAM). The dev is very active; he updates a few times a week or so when new models or LoRAs are released.
@Choco7172 I tried to install WanGP (without Pinokio), but when I ran it, it gave me an error. I tried to install it again: same error. Got frustrated and gave up on it.
@DigitalGarbage Could you kindly share the links to the other loras you are using in the Power Lora Loader? The usual_lora_booster, Wan14B_RealismBoost, DetailEnhancerV1, usual_lora-v3, Wan2.1-Fun-14B-InP-MPS.
Thanks :)
I always run out of memory with those workflows. If anyone can make it work for 16gb let me know.
If you use Kijai's version, connect block_swap_args of the WanVideo BlockSwap node to the WanVideo Model Loader node and set blocks_to_swap on the WanVideo BlockSwap node to 20-40. A larger value requires less VRAM (but it requires enough main memory).
Or connect the WanVideo VRAM Management node to the Model Loader node instead.
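Conceptually, block swapping just parks the last N transformer blocks in system RAM and streams each one to the GPU only for its forward pass, which is why a larger blocks_to_swap value trades VRAM for PCIe traffic and main-memory use. A rough, hypothetical PyTorch sketch (not Kijai's actual implementation):

```python
import torch.nn as nn

class SwappedBlocks(nn.Module):
    """Keep the first blocks resident on GPU; swap the rest in per forward."""

    def __init__(self, blocks: nn.ModuleList, blocks_to_swap: int, device="cuda"):
        super().__init__()
        self.blocks = blocks
        self.device = device
        self.swap_from = len(blocks) - blocks_to_swap  # first swapped index
        for i, blk in enumerate(blocks):
            blk.to(device if i < self.swap_from else "cpu")

    def forward(self, x):
        for i, blk in enumerate(self.blocks):
            if i >= self.swap_from:
                blk.to(self.device)   # stream the block in for its forward pass
            x = blk(x)
            if i >= self.swap_from:
                blk.to("cpu")         # evict it again to free VRAM
        return x
```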
I have 16GB of VRAM too; try setting virtual_vram_gb to 0.0 in the latest workflow. To my surprise, this helped me avoid OOM.
Thx for keeping us updated on all these relevant new models/loras, Ada! This is cutting-edge stuff. I'm trying one sampler atm and getting inconsistent results: it's either burning the image or it's fuzzy. Trying different samplers and lora strengths. If anyone comes up with a good combo, please post it here. Thx
Update: Yeah, this has big motion issues (or is it CFG guidance issues?) like CausVid v1 had. Hopefully Kijai can update to a version 2 (like he did with CausVid), as this lora is brilliant for speed, much better than CausVid. For now I'm still testing with two samplers, but may end up going back to CausVid v2, which has much better motion.
I'd say a good place to start with one sampler is: lora strength 1, 8 steps, sampler euler_ancestral, scheduler beta.
I get somewhat okay results with these.
UPDATE: same settings as before, but using the dpmpp_2m sampler and the sgm_uniform scheduler. Starting to look much better!
Can you share your workflow?
@kumarkishank959811 i use a slightly self modified version of this runpod template: https://civitai.com/models/1317373/runpod-wan-21-img2video-template-comfyui
Any recommendations for using this on a 4070 Ti with 12GB VRAM? I was able to get all the prereqs installed, and I also linked in the block swapping. No errors, but renders won't finish.
I get to the render stage and I'm stuck at 0/4 steps. GPU usage is almost 100% and VRAM usage is almost 100% as well.
Do I need to adjust the block swapping?
Yes, with 12GB you need to block swap. I have a 4080 and I swap around 20 blocks.
Disable the TorchCompileModel node if you use the native WF.
I get stuck with it as well, idk why; both inductor and cudagraph get stuck.
Kijai's workflow seems to work better.
@funscripter627 I'm using block swap. Any recommended settings for the block swap? Are you using the default 10?
@blo01 I turned off that node - no difference :(
Cannot download workflows for Self-Forcing.
So Causvid/Accvid is in the past now... God, this evolves fast...
share your fast workflow gguf including loras as well
Hold on... so we're not using FusionX anymore already? Use this instead?
Thanks for sharing.
It also works for start and end frames with Kijai's wrapper using the block-swapping method @ 576x1024, 81 frames. It took less than 2 minutes with 16GB VRAM and 96GB DRAM.
Can you share the workflow with the end frame? Much appreciated.
Would love to see your workflow!
Apologies if this is a dumb question, but what base model should I use with this LoRA? My setup is an RTX 4090 with 96GB DDR5 RAM.
4090 24g, workflow: https://files.catbox.moe/nj8aid.json , Prompt executed in 419.59 seconds
I'm getting a KSampler Triton not found error even though ComfyUI's python says it's installed. Any ideas?
I have a glow effect on my video. Why is this happening? (((
Are they artifacts? If so, that usually happens at 6 steps. Also try the CausVid lora strength somewhere between 0.2-0.5, and you could try a higher CFG like 6 instead of 1.
Clearing VRAM after each generation helped me. Or use the new template, which has it integrated.
Anybody here with 6GB VRAM, kindly please share your workflow for this one.
Sorry, I have 4GB VRAM only 🤭
With torch.compile, 832x480 @ 5s only uses 6-7GB VRAM with the fp16 model (the 28.6GB one). I can run 720p @ 5s with 10GB VRAM.
With 64GB RAM there will be some offloading to SSD; it only affects the first step or so. 96GB is the new bar.
Native nodes.
@shab987 I generate 49-frame 480x720 videos with 4GB VRAM and 32GB RAM via ComfyUI. It's pretty slow, of course 🫠
Got an OOM when I was at 360/731 during "Loading model and applying LoRA weights" (!!! Exception during processing !!! / torch.OutOfMemoryError: Allocation on device). I have a decent rig with 32GB RAM and 16GB VRAM; is this normal? Any help?
Yes, you need to swap some blocks or lower the resolution and length. There should be a blockswap node near the model loader. I swap around 20 blocks with my 12G vram and 32GB RAM
Also, length (frame count) directly multiplies the amount of RAM needed:
you might be able to run it at 61-81 frames, but even with 80GB you can't run it at 400 frames :)
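Back-of-the-envelope numbers on why frame count dominates (assuming Wan-style 8x spatial / 4x temporal VAE compression and 2x2 patchification; the constants are approximations, not exact figures):

```python
def token_count(width: int, height: int, frames: int) -> int:
    lat_f = (frames - 1) // 4 + 1            # temporal compression (4x)
    lat_h, lat_w = height // 8, width // 8   # spatial compression (8x)
    return lat_f * (lat_h // 2) * (lat_w // 2)  # 2x2 patches -> tokens

for frames in (81, 400):
    n = token_count(832, 480, frames)
    print(f"{frames} frames @ 832x480 -> {n:,} tokens")
# 81 frames -> ~33k tokens; 400 frames -> ~156k tokens. Attention
# activations grow with (and partly quadratically in) this count,
# which is why 400 frames blows up even on large cards.
```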
I had the same issue until adding the blockswap node. HIGHLY RECOMMEND for no OOM. I set mine to 40. All you gotta do is add it between your model loader and lora loader. After doing so, I can now run the 720p q8 model (16GB VRAM btw) instead of the 480p q4. I can go all the way up to 720x1280 without OOM with 10-minute generations, but I like sticking to semi-low res and upscaling later, just for faster gen times. I'm also using other optimizations too.
@bhopping thanks ill try what you say!
@funscripter627 Thanks i'll try that.
@yorgashlol yeah i'll keep frames reasonable then
Can anyone explain to me why the latest lora can't utilize nsfw loras with 2D material?
I tried using the NSFW Fix lora, which doesn't work, with the latest attached workflow.
If the lora isn't trained for 2D that would make sense, but the example above is 2D xD
I add "Apply RifleXRoPE WanVideo" to increase the animation time, but there are various glitches in the movements. How else can I increase the time from 5 seconds to 10, for example?
Wow, this is developing so fast! Self-Forcing is amazing! I use this in SwarmUI with the WAN 2.1 base model: super easy, 5 steps, around 2 minutes generation time, great results. Movement also seems way better than with CausVid.
It's CRAZY good for i2v! Thank you so much!
Movement from the first frame, it's actually doing almost exactly what I say in the prompt, fast generation, decent quality... that's a game changer for me!
2 min? Which GPU?
Do you have a workflow for using base Wan 2.1 with this new Self-Forcing LoRA?
Hi, would you be so kind as to share a full list of what you're using? I'm getting great results from the FusionX model and lora in SwarmUI, but everything I try with Lightx2v is garbage: just awful quality, noise and artifacts everywhere. Let me know what base model you're using (quantised?), sampler and scheduler, steps, sigma shift, and any other settings I might be missing. I thought I had a good handle on SwarmUI, but Lightx2v just won't play ball with me. I'm on 16GB VRAM, and FusionX looks stunning on that. Any help hugely appreciated.
@amazingbeauty 4090
@Vyxen808 i don't use workflows, i use SwarmUI :)
@TheFunk
I use the wan2.1-i2v-14b-480p-Q4_K_M.gguf model
Sampler UniPC / Scheduler Simple
Steps 5
CFG 1
Sigma Shift: I don't even know what that is, or if I can set it in SwarmUI :D
The end result quality seems very dependent on the input image quality for me.
You forgot to add "purge vram" nodes to prevent video degradation.
How does that prevent video degradation? Just looking at the names, it looks more like preventing OOM exceptions.
@Sobsob_ If you generate many videos without restarting ComfyUI, the generated videos start to ignore the starting image and then just become full of green artifacts.
The "Clean VRAM Used" and "Purge VRAM" nodes prevent that from happening.
I've seen those nodes used in other workflows.
@dulburis oh ok thanks, i'll use that too then.
@dulburis May I ask where in the workflow you should place these nodes?
Complete nonsense, fix your shit... It probably has something to do with using the WanVideoWrapper, which is terrible. Just use --lowvram --reserve-vram 1.0 as startup options and use native nodes. WanVideoWrapper nodes are slow, and if they OOM they do not clear the VRAM, so you have to restart Comfy.
I tried self-forcing with other loras, but the movement is very stiff and lacking. It's very clear if you compare plain Wan+loras vs Wan+lightx2v+loras.
Is there a workaround? Lowering lightx2v below 0.7 seems to kill quality.
Try out the new 2-sampler WF I just posted. Still working on the best values; it might need its denoise tweaked or another step added at the end.
I tried your new workflow, but the key takeaway is that genning at 24fps gives much better results, though then we get a 3-second vid instead of 5 seconds. I suspect self-forcing was distilled at 24fps.
Gen speed is indeed better, but movement is worse. I don't know if self-forcing is a net gain.
I get way better results (anime i2v) with just normal euler/beta: 2 steps at CFG 3 and 3 steps at CFG 1, shift 8. Only the lightx lora at 0.8 strength (other content loras can be added, just not other accel loras).
The LCM sampler is okay, but euler is usually better. The FlowMatch scheduler is wonky. Also, I don't think adaptive guidance does anything if you have it set to CFG 1, since its purpose is to switch to CFG 1 at a certain threshold (which it doesn't seem to hit on WAN), but maybe you know something extra it does.
Can you share the workflow? I don't know how to do the magic "2 steps at cfg 3 and 3 steps at cfg 1".
@keyblade my current workflow: https://files.catbox.moe/ar35ny.json
example: https://files.catbox.moe/co4oqk.mp4
I'm currently testing NAG instead of CFG; the proper nodes just came out. There's a toggle button to use CFG instead.
I cleaned up the workflow in the json and removed some optional nodes you might not want installed (they need manual tweaking, or the newest version doesn't work): one is a faster upscaler using waifu2x, and the other just forces the "before upscale" video to actually output before the upscale, which randomly doesn't happen otherwise. If you want them, the workflow in the example video has them.
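For anyone wondering how to express "2 steps at CFG 3, then 3 steps at CFG 1" with native nodes: one common way is two chained KSamplerAdvanced nodes sharing a single schedule. The dicts below are just a readable summary of the relevant node settings (seed, conditioning, and latent inputs omitted; the linked workflow may do it differently):

```python
TOTAL_STEPS = 5  # 2 + 3

# First pass: add noise, run steps 0-2 at CFG 3, hand off a still-noisy latent.
pass_one = dict(add_noise="enable", steps=TOTAL_STEPS, cfg=3.0,
                sampler_name="euler", scheduler="beta",
                start_at_step=0, end_at_step=2,
                return_with_leftover_noise="enable")

# Second pass: no new noise, finish steps 2-5 at CFG 1 on the same schedule.
pass_two = dict(add_noise="disable", steps=TOTAL_STEPS, cfg=1.0,
                sampler_name="euler", scheduler="beta",
                start_at_step=2, end_at_step=TOTAL_STEPS,
                return_with_leftover_noise="disable")
```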
@wewewew Thank you very much! I will study it carefully
The latest workflows are dead slow and give ugly results.
The best workflow is still dy4s8g.
Could you provide the link?
@p1042779030337 Actually, that dy4s8g workflow is giving me out-of-memory errors. I have two cards installed and replaced a few nodes for a multi-GPU workflow: an RTX 4080 16GB and a GTX 1060 6GB. Maybe this workflow needs more than 24GB, or I am doing something wrong here.
Well, that sounds strange. I'm using it on a T4 with 15GB, and it's also pretty tweakable. And I see now that OP put it back on the front page.
@p1042779030337 I made further investigations and learned some new things I never imagined!! In my case, having this combo of two different generations of Nvidia cards is a situation PyTorch can't deal with well. The workflow you posted should work OK in other scenarios, but people like me need a few precautions: offload/shift lightweight tasks to the 2nd GPU (like CLIP and CLIPVision), and use the VAE Decode (Tiled) node instead of the standard VAE Decode node.
With that, the dy4s8g workflow came out responsive, working, and reasonably fast. So yeah, some workflows are made well to handle the load, some are just resource-hungry without full benefit, and some multi-GPU setups are not 100% OK with PyTorch if the cards are not from close generations/series.
@p1042779030337 Do you know where the git repository for the FlowMatchingSigmas node in this workflow is? ComfyUI Manager doesn't detect it in the missing-node search, and Google isn't helping either.
https://github.com/BigStationW/flowmatch_scheduler-comfyui
Just read the notes.
I recommend editing the node file and changing max shift to 20 or 30.
Good at 480p i2v, but not 720p. I used wan2.1_i2v_720p_14B_fp8_scaled. And it works very well with VACE at 720p.
Movement becomes really stiff to non-existent with higher frame counts. I tried your trick and set sigma_max to 6.0 with 100 shift, but this produced a very bright and washed-out video, although with much better movement. Any ideas?
Use the FusionX lora with it (or, if you want full control, the FusionX-ingredient loras, and change the strength of MPS or the other nodes).
@flo11ok874 interesting i'll try that
Use a 2-step workflow: the first part without LightX2v for movement, then upscale with LightX2v + WanFun controlnet.
@Sobsob_ My ability to make workflows is limited. Besides, I've found the "FusionX_Ingredients_Workflows" on civitai, which has great movement and is quite fast, with no OOM for me using the gguf version. So for now I'll wait and see if these self-forcing WFs actually go anywhere; speed isn't everything if the result isn't worth it.
@flo11ok874 It gave a bit more movement using Wan2.1_I2V_14B_FusionX_LoRA.safetensors, but still rather stiff; however, the image colors were very saturated, so not worth it imho.
@skyrimer3d That's why I told you to try FusionX Ingredients (the workflow has the 5 loras with download links; these 5 loras used to be merged into FusionX, and when you have each one on a single node you can change each strength for better results). You can also simply try the LightX2v lora @ 1.0 or 0.8 + the FusionX lora @ 0.3 or 0.4.
How do you get this working on a 24GB card?
I keep getting out-of-memory errors using the 14B t2v model with the self-forcing lora.
It seems that kijai's WanVideoWrapper cannot use the 2 sampler workflow.
this LORA is fucking amazing!!!
made my own WF ... ;)
Please share, because I keep getting errors with this one.
@osakadon I will create an article with it soon! Follow me to get it
Do you use it with FusionX Wan or just the original Wan 2.1?
@J1B OG WAN 2.1 Q-8
@osakadon Just use the standard ComfyUI wan + lora workflow with native nodes. Add Lightx2v as a lora and that's it. Sampler LCM, 4 steps, length 81.
@flo11ok874 Yes and no - the sampler setup from the description is VERY good!!! Do add it to your "regular" WF instead ;)
Can you use self forcing on I2V? Thanks
Did you test it for I2V? Does it work?
@Eternal yes it works
The workflow is damaged; flowmatch... is not available.
I don't get it. This workflow just has 1 sampler, or am I blind? https://files.catbox.moe/dy4s8g.json
Of course! The second sampler node is connected to nothing! I think the uploader has been trolling.
@2P2 I apparently uploaded the wrong one and have been busy with other stuff for a while. I've changed it.
@Ada321 Yes, this is the one I was using before. This is the better version.
I have replaced CausVid v2 with the self-forcing lora in my workflow. Movement and action are much better, but the realism of characters is worse.
Is it possible to get only the part that affects how actors behave?
Am I doing something wrong? I'm using the suggested workflow, and it works as far as generating videos, but the videos quickly get very strange, with random colors everywhere.
When using a different workflow, like Simple TXT to Video, with the settings recommended here, I get videos with lots of noise and bright colors.
I can't get this lora to do anything. I get the speedup, but in return I get almost no motion whatsoever with the recommended settings. If I use the txt2vid model with self-forcing, it works perfectly 🤔
For image to video it's better to use a 2-step workflow; try this one:
https://civitai.com/models/1719863/wan-21-i2v-nativeorgguf-self-forcinglightx2v-dual-sampling-for-better-motions
So basically it speeds up the generation time?
Yes, because of the 4 steps and 1 CFG.
I'm using VACE with a ref image and video. At a Lightx2v weight of 0.7, people are unrecognizable (i.e. the face changes to another person). To keep identity consistent, a Lightx2v weight of 1.0 is needed, but that results in over-saturated and noisy videos. For a saving of 2 steps (6 steps vs 8 steps), CausVid seems to handle the job better for VACE.
Mess with samplers and schedulers. Let's just say I discovered the 'recommended' ones were ruining video quality with accelerator loras. So much time wasted on more faulty 'common knowledge'. A ColorMatch node off your ref image should fix a lot of colour issues (though not the ones caused when the model is going crazy).
Details
Files
Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors