2.2
For I2V this motion helper node is extremely useful:
https://github.com/princepainter/ComfyUI-PainterI2V
10/30: The high-noise LoRA was further refined.
New I2V 1022 versions are out. They have by far the best prompt following / motion quality yet. (The LoRA key warning is fine; the file just contains extra modulation keys that ComfyUI does not use. It does not matter.)
https://github.com/VraethrDalkr/ComfyUI-TripleKSampler
T2V versions were just updated 09/28. It's probably still best to use a step or two with CFG and without the LoRA to establish motion with high noise, as usual, like:
2 steps high noise without the low-step lora at 3.5 CFG
2 steps high noise with lora and 1 CFG
2-4 steps low noise with lora and 1 CFG
It's definitely a big improvement either way; a rough sketch of this split is shown below.
T2V:
Using their full 'dyno' model as your high-noise model seems best.
"On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."
2.1
7/15 update: I added the new I2V LoRA; it seems to have much better motion than using the old text-to-video LoRA on an image-to-video model. The example is 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one also.
I'm also putting up the rank 128 versions extracted by Kijai; they are double the size but slightly better quality.
I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa
No need for a 2-sampler WF anymore IMO. Just plug it into your normal WF with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement like before for image-to-video.
Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models
Old:
lightx2v made a 14B self-forcing model that is a massive improvement compared to CausVid / AccVid. Kijai extracted it as a LoRA. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; still playing with settings to see what is best.
Please don't send me buzz or anything, give the lightx2v team or kijai support if anyone.
Comments
Just downloaded the updated I2V and I'm getting the following error: "Error while loading Loras: Lora 'loras_i2v\\Wan21_I2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors' contains non Lora keys '['blocks.0.diff_m', 'blocks.1.diff_m', 'blocks.10.diff_m', 'blocks.11.diff_m', 'blocks.12.diff_m', 'blocks.13.diff_m', 'blocks.14.diff_m', 'blocks.15.diff_m', 'blocks.16.diff_m', 'blocks.17.diff_m', '...'"
Are you using it with a 14B 480P image-to-video model? I tested with both Kijai's wrapper and native and it works.
@Ada321 Yup. Although I'm using Wan on Pinokio, so that might be the conflict. This is the first time I'm seeing this error; both I2V and T2V present the same error.
Maybe? Apparently the T2V one is having issues for some people: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v/discussions The I2V is working fine though.
Same issues with WanGP through Pinokio, I'm assuming anyone using WanGP is having the same problem?
I reuploaded it. Should be fixed.
@Ada321 Seems to be all good now, thanks for addressing it so quickly, you the goat
T2V with the new V2 model just gives noisy images. Any ideas?
Same workflow that works with the V1 T2V lora.
Seems to be an issue others are having atm. I'm gonna hide it for now:
https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v/discussions
Yeah, it's the same for me, seems to be broken for now.
For now the new I2V lora seems to work better than the old T2V lora did even for T2V.
Good news, looks like they responded to the thread on HF and fixed the issue with Lora keys. The fixed T2V is posted.
I reuploaded the fixed version.
Actually, looks like they did the same fix for the I2V loras as well, and just uploaded new versions for both.
Hmmm, it's still broken for me.
@TheQuacktastic Huh, both it and the old one somehow worked for me.
@Ada321 Strange, I'll just use the I2V one for now I guess, that one works fine.
Yeah spoke too soon. Still broken for me.
The I2V lora doesn't work either. The keys are not being loaded, so nothing from the lora is actually being patched into the model.
@Griphen116 The I2V lora is not working for you? Can you try a blank WF just to make sure? Because it is 100% working for me. I would not be getting the kind of results I have been getting with 4 steps otherwise. Also make sure your Comfy is updated.
@Griphen116 @Ada321 Kijai just extracted and dropped a couple of T2V loras with different ranks: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v
Reuploaded again with Kijai's versions.
@Ada321 And now Kijai added a lot of new I2V loras with different ranks ;]
I had no luck with the older ones but the new version works great! Speed and great movement!
Do you have a workflow?
@So6sson For me the basic workflow with a lora works fine. From the ComfyUI wiki: 'Wan2.1 Video LoRA Workflow'.
Nice.. I just used a typical WAN i2v workflow and added this Lora. Super quick and good quality.
I just used my normal workflow with this lora, changed the CFG and sampler to LCM...
How is AccVideo LoRA trained? Can you provide a link?
I like the speed and quality but, for me anyway, I'm having trouble getting it to work with Loras. Either it ignores the lora or the prompt or both. And more than one lora kills the quality. Steps = 4, CFG = 1.0, Lcm, simple, denoise = 1.0
Yeah, it depends on the loras. Try different strengths for this lora (for example 0.6 for Lightx2v) and/or for the other loras (0.9 and under). Sometimes it helps a lot.
I had great success with 0.35 on this one; CausVid stuff in my experience needs to be set weak compared to everything else.
This is the best i2v solution I've found so far. I'm getting Kling level quality which is something I've been trying to achieve for some time.
This is now the top dog once again, the motion is much improved! Thanks for keeping things updated 🤜🤛
@compo6628585 You are talking about the self-forcing I2V lora? The first option?
@CharlieBrown0115 That's the one!
Text to video generates noise, any ideas?
What does the r128 version do? Thanks for the work, by the way.
I feel like the Loras are too strong. If I put in any Loras, it feels like they're being applied 10 times stronger. The prompt itself also feels like it's being applied 10 times stronger. The motion is bigger, which is good, but it's not controllable.
Are you using any cfg? This is supposed to be used with 1.0 cfg.
Ada321 Yes, I'm using cfg=1, shift=8. I loaded the lora using the 'lora manager' node that I used in my existing workflow. I'll keep researching.
ravenerkr841 Lightx2v works differently with different other loras. Try this lora at a lower strength. I got good results with 0.6, but it depends on your other loras.
What is self-forcing 14B I2V r128?
Where did you get it?
It's the new version of the self-forcing lora from lightx2v, but with a higher rank, bigger file size, and maybe better quality than the one they released directly. I think it's from Kijai here: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v
There is also a comparison video between the different ranks in that repo I linked, but I'm not sure if the video is comparing the I2V lora or the T2V lora.
R128 has some solid improvements in motion handling indeed! Thanks for sharing.
Will it work on Wan VACE?
yes
For Wan Vace 14B
Excellent quality, excellent speed!
At least in my own workflow, however, setting CFG to 1 also means Comfy won't take a new base image. If, for instance, I run the workflow with base image a.png, then subsequently try with b.png, that second run will behave as if I'm using some garbled version of b.png. Anybody know what's going on? And/or, does anyone have a workflow in which this doesn't happen?
We wrote our comments simultaneously it seems. Sounds like we have the same problem. Something is breaking somewhere. I tried purging nodes and unloading ones, nothing working so far.
Bypass teacache and skip layer guidance if you're using them. They're being confounded by the sheer impudence of this LoRa. As would any sane node, one would imagine. They weren't designed for such tomfoolery. Anyway, that's what fixed it for me.
Ponder_Stibbons Well, that fixed it for me as well. Thanks, friend!
This is a strange one I can't figure out. This is friggin amazing, really amazing. But it breaks something in Comfy every time I use it and I have to restart it. First gen is perfect, and practically instantaneous. It will run a second time, but every gen after the first is a grainy mess, pretty much what a 4-step run would normally be. I can't see anything in the terminal to indicate a problem. Restarting Comfy lets me run it fine again, just once. Was wondering if anyone else had seen this. Using 14B I2V, with the config listed in the description.
I haven't seen anything like that before. Try using the default native wan WF to try and single out the issue.
Ada321 I just located the issue. TeaCache is resetting too late and trying to reuse old data. Why this would happen I don't... oh crap. Layer skipping. Why didn't I think of that. This makes sense. There's a whole lot of crap in my workflow that is obviated by the LoRa that I hadn't thought to bypass to begin with. Not needed.... ah that took five seconds...bypassing teacache and skip layer guidance resolves the problem. Shouldn't be a problem enabling on the second stage, as there is actually stuff for the nodes to do with the second sampler.
I'm surprised that more people didn't have this same problem immediately as well. Seems like it should happen to anyone who enables this LoRa in a memory-optimized workflow. I should have tried an additional VRAM purge first probably, that might have fixed it too, albeit leaving me with superfluous nodes. It seems like the LoRa does some major tinkering to the model that is contaminating the poor VRAM like virtual variola. In any event, ditching the caching and skipping worked, and keeping them on the second stage is fine for a low denoise t2v as long as a different model is used. Despite the description, I'm keeping my cleanup stages. Minor headscratchers aside, this thing really kicks ass. How restrictive it is remains to be seen... but still, wow.
Ah, yeah, don't use TeaCache with this. It would not really give any speedup when only using 4 steps anyway.
Works great for motion, but I find that the image quality is lower than the FusionX base models. It seems to work with FusionX + this lora at a lower lora setting; still trying to find a good balance of lora strength for motion vs. the base model for image quality.
Probably just the detail and MoviiGen loras that are included in FusionX; try using those with this.
18 minutes with sage attention and TeaCache. Now with this it's down to 4 minutes. And no quality drop! Totally crazy lol.
Wow, that's impressive, mind sharing your workflow? With the workflows I have, I'm seeing an improvement but only by a margin. Sounds like your improvement is vastly different from mine. Quite possibly the workflow, as I use the FusionX ones, which are a bit of a spaghetti monster with custom nodes.
Really impressive. Better quality and faster generation. A 5 second video "only" takes 4-5 minutes with my ancient 2080ti (11GB VRAM).
Thanks for sharing!
The best combination of Loras/Sampling settings so far!
Better prompt adherence than native flow, more motion than previous versions, fastest generation speed. Works great.
I was getting really static i2v results until i used this!! This is so good, the motion is unreal.
RTX 4070 - 3 minute gens with decent quality!!! This is super fast.
Very nice lora!
I use Lightx2v together with CausVid because Lightx2v creates a bit of noise. This achieves both speed and quality.
Can you specify which LoRA was used and what these two sets of weights are?
HolyShit the new I2V version is FUCKING amazing, now this really feels like free Kling at home!!
Hey - do you happen to have a good workflow for this? I'm really curious; I haven't tried any self-forcing before.
xuadamux373 https://civitai.com/images/89299184
just save my video, you will get the workflow
ColorWolve You’re the best man, gonna check it out when back from work
Appreciated
xuadamux373 Enjoy~
Yo this is wild, my gens went from 30 mins to 5
Thanks man
xuadamux373 Hahaha right, now this is what I call Kling at home~
ColorWolve Why can't I get your workflow in my ComfyUI?
itasky
https://civitai.com/images/89299184
Save this linked video. You can't get the workflow? That's impossible... maybe try updating your ComfyUI to the latest version?
ColorWolve Yep, I moved your video into the latest ComfyUI version and nothing happens. I also tried this external workflow app and it doesn't show any WF when I put your video into it: https://comfyui-embedded-workflow-editor.vercel.app/
itasky I have no idea. I tried the website you mentioned and none of the workflows work, even the workflows from the loras' samples... I am using portable ComfyUI.
itasky Maybe your setup can only read JSON, so try this: https://drive.google.com/file/d/1rdWzKxAsUdDe8nKF6dXNE8FDbsk7RABB/view?usp=sharing
ColorWolve Oh, now your video is working. I don't know how I fixed it, maybe by loading a JSON I downloaded in another comment. By the way, I am missing the FinalFrameSelector node :/
itasky I think ComfyUI Manager will show what's missing, so you can download it from there.
ColorWolve Yep, I am installing it from the Mediamixer pack. Thanks for your support :)
itasky you are welcome =)
ColorWolve Wow, I reinstalled ComfyUI on Ubuntu and with your workflow I get the video in 100%|█████████████████████████████████████████████| 6/6 [01:09<00:00, 11.52s/it] :) So much faster than Windows (without triton/sage).
itasky Hahaha, I see, but I am not a fan of Linux, so for me Windows speed is enough.
What scheduler would you recommend?
For Kijai's WanVideo Wrapper I would say lcm and lcm/beta are pretty good at 6-8 steps, and flowmatch_causvid is a good one at 6-8 steps. flowmatch_distill is limited to 4 steps, so it doesn't like any fast movement without being blurry. Test the various dpm++ schedulers and step counts; they seem to do great.
LightX2V brought the best quality/detail out of the quantized models when I compared Q6 with the FP8 I use, without a big hit to visuals. Remember that a GGUF of similar size to FP8 will take longer than FP8, but GGUF can be very useful if you have limited VRAM and use lower-quant models than Q6. Every quant / low-VRAM user just got a big upgrade in visual quality/detail/low-step speed with LightX2V.
Also, Kijai's WanVideo Wrapper is now compatible with GGUF! Another upgrade for quantized models. Have fun!
Which GGUF are you using, if you don't mind my asking?
Not the OP, but personally I just use City96's Q3_K_M on a 3080 and 4090 rig. Quality is absolutely fine, and the generations are fast. umt5_xxl_fp16.safetensors for the CLIP.
DJLegends It depends on what works best for your setup.
Q6, for instance, can be more resource intensive than a 14B FP8 model. If you can run Q6, then switch to FP8 with quantization and you will be better off in speed, stability, and VRAM usage.
GGUF models have to be unpacked as they generate, which causes them to take longer, but if the Q model is small enough you can negate the extra time it takes. I use FP8 on a 12GB VRAM card with no issues, I just can't do 720p at 81 frames. I can do 576p (1024x576) at 117 frames no problem, and the quality/speed with LightX2V is great (using Kijai's WanVideo Wrapper).
On I2V I notice that if you go to high frame counts around 117, Wan will use your image as the last frame in an attempt to loop another generation beyond its limit.
Do you use the regular Wan 2.1 model with this lora? I'm very new to I2V and it doesn't seem to work as well for me as it does for others.
Really great lora. Thank you for sharing. 45 minutes down to around 5 minutes!
Is there a way to avoid the constant talking and overall ruining of a character's face with the self-forcing lora?
Use NAG and it will follow negative prompts even with CFG 1 (in the negative prompt, write what you don't want in the video).
flo11ok874 do you have a workflow or a guide for using NAG?
BinaryBottleBake It's easy: add the 'WanVideoNAG' node with default settings. Wire the model output to the shift and sampler nodes, and the conditioning output to both prompt nodes. Or look at https://civitai.com/models/1736052?modelVersionId=1964792 where NAG is included. It's a great workflow btw; just replace the old Lightx2v lora with this new one.
You can also keep faces more stable using this Lora: https://civitai.com/models/1755105/wanfusionxfacenaturalizer
This works great, I definitely have more movement now!
But it's still not enough sometimes. If I increase the lora strength to 1.3 or more, the movement increases as well, but the quality takes a hit.
I'm thinking maybe use a high strength value for the first one or two steps and end with a strength of 1 for this lora, but I don't know how to do that or if it will help.
Instead try upweighting your prompt. You could also try using NAG to use negative prompts.
Ada321 Yeah, but NAG increases inference time, and I'm trying to reduce it as much as possible.
I actually just did something simpler: I put more weight on the parts of the prompt that describe movement, and it helps a lot.
Try the new Pusa lora in addition to this one at 1.4 strength. It helps with motion according to multiple people, and I can confirm that after a few initial tests.
marqs89 Where is the lora? Can you provide a link?
marqs89 I think it doesn't work.
I get this message in the console:
lora key not loaded: blocks.0.cross_attn.k.lora_A.default.weight
lora key not loaded: blocks.0.cross_attn.k.lora_B.default.weight
lora key not loaded: blocks.0.cross_attn.o.lora_A.default.weight
lora key not loaded: blocks.0.cross_attn.o.lora_B.default.weight
lora key not loaded: blocks.0.cross_attn.q.lora_A.default.weight
lora key not loaded: blocks.0.cross_attn.q.lora_B.default.weight
lora key not loaded: blocks.0.cross_attn.v.lora_A.default.weight
lora key not loaded: blocks.0.cross_attn.v.lora_B.default.weight
lora key not loaded: blocks.0.ffn.0.lora_A.default.weight
lora key not loaded: blocks.0.ffn.0.lora_B.default.weight
lora key not loaded: blocks.0.ffn.2.lora_A.default.weight
lora key not loaded: blocks.0.ffn.2.lora_B.default.weight
lora key not loaded: blocks.0.self_attn.k.lora_A.default.weight
lora key not loaded: blocks.0.self_attn.k.lora_B.default.weight
lora key not loaded: blocks.0.self_attn.o.lora_A.default.weight
lora key not loaded: blocks.0.self_attn.o.lora_B.default.weight
lora key not loaded: blocks.0.self_attn.q.lora_A.default.weight
lora key not loaded: blocks.0.self_attn.q.lora_B.default.weight
lora key not loaded: blocks.0.self_attn.v.lora_A.default.weight
lora key not loaded: blocks.0.self_attn.v.lora_B.default.weight
The complete message is too long; it goes from block 0 to 39, and the generations look the same, so I guess it's not loading at all.
I'm using Wan I2V 480p GGUF Q8.
Wow that's incredible. Works like a charm really
Can you share the new workflow you mentioned, without 2 samplers?
I keep getting outputs that try to form a loop. I.e., if I start with an image of someone sitting on a chair with the prompt "They stand up and walk away to the right", the result will be them standing up, stuttering a bit, and then sitting back down close to their original position.
Using Wan Q8 GGUFs, tried with 480p and 720p with only this lora. Otherwise it's pretty much the default I2V workflow.
I don't know who's behind this lora, but thank you so much. It works great. However, there is still a "slow motion" effect in some results, though some keywords or a different lora may help.
I am wondering, is there any chance of a 720p I2V version being released? It still works, but it doesn't follow the prompts like the 480p model does. At least not in my tests.
It's very fast; at 480x832 resolution a 4090 reaches 11 s/it (on Ubuntu with triton/sage), but I noticed that if I set the CFG value to anything other than 1, the speed halves. Why?
Because that's how the "accelerator" loras work: they achieve the speedup (well, most of them at least, e.g. CausVid, Lightx2v) by not needing CFG (i.e., it needs to be set to 1). Setting the CFG to 1 will instantly cut the gen time in half (and vice versa, hence you get double the gen time). There's one drawback to this though, which is that you can't use a negative prompt when CFG is 1, but there's a workaround: NAG (Normalized Attention Guidance). So there's basically no drawback to not using CFG now.
When CFG > 1, the negative prompt is applied. When it's 1, the negative prompt is ignored, so the speed is doubled. So how do you apply a negative prompt? Inject it into the model itself via the NAG node (like a lora). That's the magic.
FYI, my experience is that for AI, Linux is actually slower than Windows. Not much, but noticeable. I tested with Linux Mint, same version of the software, same CUDA version. It was like 5% slower.
Works like a charm; a good result is almost guaranteed <3
Has anyone else noticed it being hard to get motion on 2d images with this enabled? It works great for realistic/semi-realistic images, but I've had some difficulty getting results in more anime-style 2d scenes. But at the same time, I don't have a huge sample size and might be coming to the wrong conclusions based off some unlucky seeds.
No? All I mostly do is 2d style animations.
Ada321 I'll keep trying, then. Maybe it's another setting somewhere.
Try adding 'anime style' to your positive prompt.
Why is the T2V lora better at I2V? The I2V lora is bad.
I'm testing it on WanGP, but I keep getting videos with weird lighting that makes them totally unusable. Also, it takes quite a while to generate; I've got 64GB of RAM and a 3090, and it still takes like 8-9 minutes. Not sure if I'm messing something up. This is my config: 4 steps, 81 frames, CFG 1, shift scale 8, Wan 2.1 720p.
It's for the 480p Wan.
where to find workflow?
Download the video example, it contains the workflow.
Maybe you can add the Pusa lora to the next upload in this series to get all the essentials. Pusa is a great motion enhancer and works well with LightX2V. It will make all your motion loras shine.
The FastWan lora is also good at low steps, and nice for text-to-image too. FastWan is like a much better version of CausVid (Pausvid).
Mix FastWan at 0.2 strength with LightX2V at 1.0 and Pusa at 1.0 for good movement as well as increased quality at low steps (see the sketch below).
Also, you can add the TAEW2_1 safetensors file if you want a live preview of your generation as it runs. You can cancel a bad generation early if you don't like what you see halfway through, saving a lot of time; it's also fun to watch it progress with each step.
The Pusa lora is so large, 4GB just for a lora 😥😥
fronyax Hmm, I didn't realize it was that large, dang. You'll have to work with what you can. I always go over the VRAM limit and work backwards from there until the dang thing works; you don't know if you don't try. With blockswap I think a 16-17GB checkpoint model is about the limit for 4070 Super 12GB cards.
The only parts that take long are the FP8 model offload and the tile decode; the generation itself is "normal" speed. Sage makes it faster for sure.
I have to avoid the Pusa scheduler as it takes up too many resources and doesn't look as good. I use the flowmatch_causvid scheduler with good results at 6-10 steps if I like what I see at 4 steps.
I don't know how much 40 blocks equals in terms of memory, but that is the limit; more system RAM won't do any good for the model part. More RAM is great for offloading the other loras, encoders, etc., and for large upscaling.
The Nunchaku team may be nearing completion in the coming months with their Nunchaku Wan 2.1, scheduled for their 0.4 release roadmap. Could be interesting; we'll see how accurate it is with the Nunchaku optimization. I do enjoy Nunchaku Flux and Nunchaku Kontext + Turbo lora, so I'm excited about that. Even with Wan 2.2 out, having a blazing fast low-VRAM 2.1 is very good. Maybe the work done for 2.1 can easily apply to Wan 2.2.
Hi, I can't find any information on how to add TAEW2_1 and preview the generation. Can you help me?
Pusa + this lightning lora + Fusionx is great
vAnN47 https://huggingface.co/Kijai/WanVideo_comfy/blob/main/taew2_1.safetensors
For live playback of your generation: put this safetensors file in your "ComfyUI/models/vae_approx" folder.
In your ComfyUI settings, go to VHS and at the bottom toggle on "Display animated previews when sampling".
Hope this helps.
Hi,
Thanks for the amazing loras.
I don't know if you're the one who created the finetune or just released it, but do you plan to release a 2.2 self-forcing version?
In your newest Wan 2.2 WF, you say in the notes that the lora FastWan_T2V_14B_480p_lora_rank_128_bf16 is used, but it's not selected anywhere in the WF. Do I need it, and if yes, where do I have to use it?
When I use it and add a lora that introduces anatomical parts, they come out in a saturated color tending towards red. The only way to reduce the effect is to lower the lightx2v lora from 1 to around 0.6, but this causes the image to lose quality :(
I use Euler Beta and lightx2v strength 1.
I'm trying to run this on a 3080 Ti. I am using the Q4_K_S quantized WAN model because I only have 12GB VRAM. Trying to run this with the rank 128 Self-Forced LORA and Pusa enabled results in absurdly slow generation, like 11 minutes for a single step. I used the workflow from one of the videos you posted.
I am very new to this, but I have a similar GPU with 12GB VRAM. Q4 is too high, you should step it down to the Q3_K_M version, it works well for me.
tdougherty350505 It works with the native flow though. I ended up doing some research on this and found a GitHub issue where Kijai talks about it. By his own admission, the native nodes are much better at memory management, and they do it automatically. I'd rather not give up quality, so I'll continue using the native flow. I encountered another problem with this flow: increasing the block swap did get it to work, but the entire video is covered in what I can only describe as an orange filter. I decided it was not worth the effort to tinker with for now.
tdougherty350505 I have a 3060 12GB, and with Q4_K_S, 4 steps total = 120 sec for 480x320.
gambikules858 That's cool, but I want to generate at 6 and even 8 steps. It makes a huge difference in the quality of the motion.
It's a moot point now anyway. I've decided that if I'm going to spend a lot of time on this, I should buy a better GPU. I can see the potential.
Details
Files
lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16_.safetensors
Mirrors
lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.safetensors
lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16_.safetensors
2_t2v_lightx2v.safetensors
wan21-lightx2v-t2v-14b-cfg-step-distill-v2-rank64-bf16.safetensors
t2v_lightx2v_low_noise_model.safetensors