2.2
For I2V this motion helper node is extremely useful:
https://github.com/princepainter/ComfyUI-PainterI2V
10/30: The high-noise lora was further refined.
New I2V 1022 versions are out. They have by far the best prompt following / motion quality yet. (The lora key warning is fine; the file just contains extra modulation keys that ComfyUI does not use. It does not matter.)
https://github.com/VraethrDalkr/ComfyUI-TripleKSampler
T2V versions just got updated 09/28. It's probably still best to run a step or two with CFG / without the lora on high noise to establish motion as usual, like the split below (sketched in code after the list):
2 steps high noise without the low-step lora at 3.5 CFG
2 steps high noise with lora and 1 CFG
2-4 steps low noise with lora and 1 CFG
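A minimal sketch of that split, assuming three chained KSampler (Advanced) stages handing off via start/end steps (this is the pattern the TripleKSampler node linked above packages up; the model names and step counts here are illustrative):

```python
# Illustrative plan for the three-stage split, not actual ComfyUI node code.
# Stage 1 runs without the lora at real CFG to establish motion; stages 2-3
# hand off to the distill lora at CFG 1, finishing on the low-noise model.
# In ComfyUI each stage would be a KSampler (Advanced) with add_noise enabled
# only on the first and return_with_leftover_noise on all but the last.
stages = [
    # (model,      lora_on, cfg, start_step, end_step)
    ("high_noise", False,   3.5, 0, 2),
    ("high_noise", True,    1.0, 2, 4),
    ("low_noise",  True,    1.0, 4, 8),
]
for model, lora_on, cfg, start, end in stages:
    state = "on" if lora_on else "off"
    print(f"{model:<10} lora={state:<3} cfg={cfg} steps {start}->{end}")
```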
It's definitely a big improvement either way.
T2V:
Using their full 'dyno' model as your high-noise model seems best.
"On Sep 28, 2025, we released two models, Wan2.2-T2V-A14B-4steps-lora-250928 and Wan2.2-T2V-A14B-4steps-250928-dyno. The two models share the same low-noise weight. Wan2.2-T2V-A14B-4steps-250928-dyno delivers superior motion rendering and camera response, with object movement speeds that closely match those of the base model. For projects requiring highly dynamic visuals, we strongly recommend using Wan2.2-T2V-A14B-4steps-250928-dyno. Below, you will find some showcases for reference."
2.1
7/15 update: I added the new I2V lora; it seems to have much better motion than using the old text-to-video lora on an image-to-video model. The example is 4 steps, 1 CFG, LCM sampler, 8 shift. I uploaded the new version of the T2V one also.
I'm also putting up the rank 128 versions extracted by Kijai, they are double the size but are slightly better quality.
I suggest using it with the Pusa V1 lora as well, it seems to improve movement even more: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Pusa
No need for a 2-sampler WF anymore, IMO. Just plug it into your normal WF with 1 CFG and 4 steps or so. You could probably sharpen it with another pass if you wanted, but it no longer hurts the movement like before for image-to-video.
Full Image to video Lightx2V model: https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/distill_models
Old:
lightx2v made a 14B self-forcing model that is a massive improvement over CausVid / AccVid. Kijai extracted it as a lora. The example above was generated in about 35 seconds on a 4090 using 4 steps, LCM, 1 CFG, 8 shift; I'm still playing with settings to see what is best.
Please don't send me buzz or anything; support the lightx2v team or Kijai instead, if anyone.
Comments
Holy crap, this actually works. It turns a 20 minute creation process into 3 minutes. No visible quality loss. Incredible.
Which sampler and scheduler did you use?
@Hikarias flowmatch causvid for the scheduler, but I don't even see a setting for sampler.
Woah that's awesome! How many steps and did you use 720p or 480p model?
@Catz 480. I just used the workflow embedded in the mp4 in the examples in this lora page. Steps is kind of odd. After you push steps up beyond about 8, the time taken flattens out to about 4 minutes for me, and doesn't get longer. It also doesn't seem to have much effect, so maybe there's an upper limit of some sort.
OK, try switching your scheduler to unipc/beta. Large improvement in motion.
By finetunes, are you referring to vace / fun? I was unaware there were any community ones.
How's the lora combination support? Fast / lightning have always given me grief with those.
And the samples don't really seem to be doing much for wan vids, not the best presentation frankly; if you could near replicate some existing published prompts it might be more enticing.
There is:
Moviegen:
https://huggingface.co/ZuluVision/MoviiGen1.1
Wan fun control:
https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-14B-Control
SkyreelsV2:
https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P
VACE:
https://huggingface.co/Wan-AI/Wan2.1-VACE-14B
And hopefully soon this animation finetune:
https://huggingface.co/IndexTeam/Index-anisora
Works fine with loras.
And I just grabbed some random images to use for image to video real quick from civitai, any suggestions?
@Ada321 Hi. Do you know what I2V / image-to-video models exist?
Most of the ones I know are text-to-video only.
@Ada321 For some of the portrait / talking ones, are face detail and hands something to worry about when generating fast?
@Ada321 Hmm, hadn't heard about moviegen / animation. Too bad it doesn't seem to have booru tagging, natural language word vomit is tedious.
In that case, this guy made some awesome sfw previews (couldn't swing a dead cat around this place without hitting a dozen crazy nsfw samples), civ's metadata is lacking but the workflow appears to be attached in full to the vids: https://civitai.com/models/1525175/wan-i2v-skyreels-i2v-morphing-into-plushtoy-trained-on-sr-v2-i2v
Nice, thanks. It can work with SkyReels V2, I suppose?
Does anyone know if there are differences between the Kijai models here: https://huggingface.co/Kijai/WanVideo_comfy/tree/main
and the ones provided by Comfyui here?: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/diffusion_models
The causvid ones are the same.
This also works with native comfy nodes/gguf by using the beta scheduler. So you don't even need to use the wrapper. (originally said AYS, but beta is even better and works with 2 steps)
Which model type, SD1 / SDXL / SVD? Or are there new ones?
Where do you get the AlignYourSteps scheduler?
@crombobular Comfy core. Search alignyoursteps.
@firemanbrakeneck Found it, though it just produces garbled messes with my workflow. Do you have a working example?
Scratch that, beta scheduler is better for this. I got nearly the same quality results on 2 steps beta as I did on 4 steps AYS.
@firemanbrakeneck In KSampler? AlignYourSteps isn't in mine; I checked the samplers too just in case. I have karras, sgm, the usual.
@boz255 It's SamplerCustom. But you should use beta instead, it's even better, and beta is built into KSampler.
@mylo1337 Can you share your workflow? I keep trying but mine gens a jumbled mess of pixels and colors.
@ptrprkr I'm just using SwarmUI, and I'm doing I2V. 1 video CFG and 2 or 4 steps.
@crombobular I too am getting little but noise on ays / beta, with 0.5/1/1.5 causvid weight, 8 steps, 1 cfg, 6 shift.
@boz255 It's a different node called "AlignYourStepsScheduler", which outputs sigmas, like basicscheduler. But it's no good for me.
I would also love to see a workflow, pretty please? :)
For those with jumbled, messy generations on subsequent tries from the 2nd or 3rd I2V: try disabling the skip layer and TeaCache optimizations. That worked for me.
I still get super messy generations even with those off. I guess this is better for I2V than T2V. I'm also trying to use it with native nodes instead of Kijai's, so that probably doesn't help.
Definitely seems to speed up generation, but I can't seem to mix other loras in with it, and there's a noticeable drop in quality (I use GGUF though). It seems very likely my workflow just doesn't work well with it anyway.
How did you make it work with default nodes in the first place?
@TurboCoomer My workflows are on any of the more recent videos I've posted if you want to check for yourself, but I am using some custom nodes through ComfyUI Manager. All I did was add in the lora, lower the steps a little, and try both the normal (what I usually use) and beta schedulers.
Other loras seem to work fine with it; it works best with action / motion loras, I would say. If you lose too much motion, turn the weight of this lora down and increase steps by just a bit. If you use regular schedulers, turning shift up can also help.
And make sure not to use TeaCache or anything like that.
What is this? I am using Wan 2.1 I2V; is this going to be helpful for me?
It's the equivalent of a fast lora but for Wan (fast lora was for Hunyuan): you can reduce the number of steps significantly while still maintaining quite decent quality. So depending on how it works with your workflow, it can basically be a great time saver. And it seems to work decently with I2V, yes.
There's a discussion on reddit of people's test results and settings, good as a second reference:
https://www.reddit.com/r/StableDiffusion/comments/1knuafk/causvid_lora_massive_speedup_for_wan21_made_by/
I've seen noticeable degradation in subject's motion/movement quality with this LoRA. Other than that, it's a godsend.
For more motion, turn down this lora's weight a bit more; you might have to add another 2 or so steps to make up for it. Or use a lora with motion trained into it. It's for sure best used with action loras.
Good for 'live wallpaper' style (limited) motion so far, quite impressive in fact, and fast thanks to the vastly reduced step requirements. I will test more classical movement later, and I feel it will not be the same deal, but still, pretty good.
It for sure works best with action loras, which tend to override motions anyway. That said, for just Wan by itself you can turn down this lora's weight a bit and add another few steps in exchange for more movement. You can also use a different scheduler and increase shift, or play with turning CFG back on just a bit.
Hopefully this will be updated to help with the movement. If/when it does, it will be a massive game-changer IMO, as it literally cut gen times for me by easily 75% while keeping video quality at a high level. I can't use this as is, it's almost I2I for me 🤣
For movement, either use loras based on actions / movements, or decrease its weight a bit and increase steps a bit more. You could also use a regular scheduler and increase shift a bit, or play with turning CFG back on just a bit.
I had recommended 0.5, BUT that was with action loras that imparted their own movement; using it without movement from loras, I can see what people mean with the weight that high.
@Ada321 Aye mate, I read through all the other comments where you mentioned those things... I tried them all. Strength from 0.5 down to 0. I've tried a few different loras (sex themed... don't judge lol), tried ~7 different schedulers, even tried changing cfg, which was a bad idea! Oddly for me though, I gained movement with LESS steps, but when I got down to 4, the video started to degrade a lot. Thanks for this though buddy, I'm sure everyone appreciates it, and I'm sure some people's setups work better than others 👍
@compo6628585 "I gained movement with LESS steps" sounds like it's overcooking it / the weight is too high. Give me a bit, I'll try to find a better recommendation for using it without motion loras.
Edit: While using a motion / action lora still works best, injecting noise might also help. I was also playing with starting without the lora for ~3 steps and then doing the rest with it, which works well but cuts the speedup roughly in half.
OK, reduce its weight to 0.3 or so, switch to the unipc scheduler, and use 12-15 steps. (Note that the flowfield_causvid scheduler limits itself to 9 steps max.) This should fix the movement for when you don't have a lora with motion trained into it.
@Ada321 Thank you so much for taking the time to test, my friend! These new tweaks have helped so much. You've aced it!
I tested the LoRA, and it was possible to generate videos of the same quality as without it, with about half the number of generation steps.
If you can reduce the number of generation steps, not only can you shorten the generation time, you can also reduce the amount of VRAM used, so many people can benefit from the LoRA.
Some observations after short tests over 1 hour:
1. On the native ComfyUI Wan workflow it works with the unipc sampler / beta scheduler; it also seems to work with the gradient_estimation sampler.
2. It can be used with other loras! Splendid!
3. Noticeable speedup on fp8 safetensors for both 480p and 720p. For 480p, a 30-50 frame video can be generated in under 1 minute. For 720p it takes a bit longer, but the effect is still huge compared to 10 minutes per generation without the lora.
4. It does not work quite as well with GGUF, at least in my tests.
5. Quality loss is acceptable, but it depends on the subject, I guess. For videos with mostly static objects / limited subtle motion / very linear motion / barely changing objects, the quality loss is not that obvious. But for larger motions it does have more impact.
6. I really see great potential in this lora and technique... This can bring video generation speed to image generation levels.
Getting bad results with I2V, probably my workflow. Is it possible to get a link to a good I2V workflow?
@Ada321 thanks!
@Ada321 I can't download it
@Ada321 Thanks!
this changes everything for me
Tested around and it works great with 25 frames (1 sec) and is very fast (43 sec for generation), but as soon as I use 97 frames it's very slow (running more than 40 minutes).
Triton is also activated and I have a 4090.
Just sounds like you're overflowing from VRAM to RAM.
@Ada321 Thanks, not sure what to do now but I will look into this topic!
Some numbers using an RTX 5070ti, 720x480, 8 second video duration.
Using this workflow https://civitai.com/articles/13328 (the low-VRAM GGUF version, 3b),
Models: WAN2.1 i2v Q5_K_M.gguf, Clip umt5-xxl-encoder-Q5_K_M.gguf, with automatic prompt of florence2 enabled.
Enabled optimizations: speed regulation, CFGZeroStar, temporal attention, skip Layer, TeaCache (0.19), long video patch (5s+), TorchCompile, sageattention (native, without the node in the workflow, using "--use-sage-attention" launcher flag. Using triton-for-windows (haven't got triton native windows to run yet))
Settings without causvid lora: no lora, CFG 4, steps 20
Duration total: 490.67 seconds; second run 480.99s
Causvid:
Disabled: skip layer (will create very noisy output if enabled together with the causvid lora)
Settings with lora: lora weight 0.3, CFG 1, steps 15
Duration total: 271.68 seconds
I only did a few test runs, so I can't conclude there's a reduction in motion yet, as it might as well be seed variation. I did use 2 loras in my testing for motion as well, so keep that in mind; you might render even faster without loras and notice motion reduction. This comment is mostly about the speed comparison :)
care to share the workflow? If you still have it.
@please2000 Basically this workflow for the most part: https://civitai.com/articles/13328
I don't know if my conclusion in the original comment is still relevant. The Light2x lora at 4 steps works really well, lora weight 1, rest as in a regular Wan 2.1 workflow. Set TeaCache to disabled and weight 0.01, otherwise it might bug out on every generation after the first.
Can it run with a GGUF model? :)
Absolute game changer. SageAttention + CausVid got me 5 second videos made in 240 seconds.
Setup was the following:
RTX 4090
Wan 2.1 480 14b FP16 (comfyOrg)
SageAttention
CausVid 0.3 strength
CFG 1
Steps 14
3 Other Loras
(No Teacache, it fucks quality up bad for me)
So how exactly does this lora work?
How is this even possible?
It barely even has quality loss.
I use causvid at 0.5 strength with beta scheduler, 4 steps, 2 cfg. Other lora(s) like normal. It has nearly the same quality as 20 steps without the lora. It made local gens much more capable lol.
It is just a speedup lora, like lcm, hyper, turbo, etc. But for videos, it was distilled to retain quality at low step counts. I've gotten good videos on 2 steps, great videos on 4.
The reason teacache doesn't work well with the lora is that teacache is used to skip steps, but causvid completely changes how steps are paced, so teacache will almost always skip the wrong steps. Even then, causvid works with 4 steps already, so teacache would be pretty much useless there anyway.
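A toy sketch of why, assuming a simplified cache-and-skip rule (this is not TeaCache's actual algorithm, just the general idea): the cache only pays off when consecutive step inputs are close together, which a re-paced 4-step schedule never provides.

```python
# Toy cache-style step skipper (NOT TeaCache's real algorithm): reuse the
# previous output whenever the step input barely moved since the last call.
def run(inputs, model, threshold=0.05):
    cached_in = cached_out = None
    outputs = []
    for x in inputs:
        if cached_in is not None and abs(x - cached_in) < threshold:
            outputs.append(cached_out)           # skip the expensive call
        else:
            cached_in, cached_out = x, model(x)  # real call, refresh cache
            outputs.append(cached_out)
    return outputs

calls = 0
def model(x):
    global calls
    calls += 1
    return 2 * x

run([i / 40 for i in range(20)], model)  # 20 finely spaced steps
print(calls)                             # 10 calls: half the steps skipped
calls = 0
run([0.0, 0.33, 0.66, 1.0], model)       # 4 widely spaced causvid-style steps
print(calls)                             # 4 calls: nothing safely skippable
```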
The causvid 14b lora works better than the causvid 1.3b lora when inferencing Wan Fun 2.1 v1 or v1.1. It makes gens a lot more temporally consistent and enhances motion a ton: 0.3 strength, unipc scheduler, 15 steps.
@mylo1337 It's improved all gens, both high and low step, for me. It's quite good.
@mylo1337 Thank you so much for the detailed explanation. Would you mind explaining how the settings affect generation with the lora?
Like, what does high strength vs low strength do for the causvid lora?
Does higher = faster, or more quality motion?
How does CFG affect quality and speed?
I'm just curious how I can tweak it if I ever find myself having trouble with a gen, like if I wanted to sacrifice some speed for quality.
How do you combine Causvid with the other Lora? rgthree PowerLoraLoader? Is the Causvid at 1st position or does it not matter?
@gman_umscht The position of the lora doesn't really matter. It's like comparing "1 + 2 + 3" with "1 + 3 + 2", you get the same result.
You just gotta have causvid loaded onto the model somewhere, doesn't matter exactly when, as long as it's before the sampler node
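A quick numpy toy of that claim; the matrices are random stand-ins for weights and lora deltas, not real model tensors:

```python
# LoRA load order is commutative because each lora just adds its scaled
# delta onto the base weights, and addition doesn't care about order.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # stand-in base weight matrix
d_causvid = rng.normal(size=(4, 4))  # stand-in causvid lora delta
d_other = rng.normal(size=(4, 4))    # stand-in second lora delta

order_a = W + 0.5 * d_causvid + 0.8 * d_other
order_b = W + 0.8 * d_other + 0.5 * d_causvid
print(np.allclose(order_a, order_b))  # True
```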
@LatteLeopard In my testing, setting the lora's strength too low causes blurriness at low steps (similar to no lora), and setting it too high can kind of fry the videos (similar to high cfg).
The sweet spots depend on the sampler, if you want something fast, 4 steps with beta scheduler (and a sampler like euler) can get good results at 0.5x strength.
I've seen people use unipc simple for 12 steps as well.
I use a cfg of 2, I don't know how much it affects movement though, I believe 2 at least seemed better in my tests than 1 for prompt adherence.
Oh, also: too few steps can cause shakiness. I usually get a little shakiness at 2 steps and a lot at 1.
@gman_umscht Have you had success using other loras? I tried several and they don't seem to work...
It leads to an increase in sharpness and saturation. How do I solve this problem?
I get better / more natural movements and better prompt adherence with higher causvid values, ~0.7.
0.1 absolutely destroys movement. 0.3 is mainly lora-induced movement, but 0.7 is the sweet spot for me.
Seems weird, since others report 0.3 as a good value.
I've been testing a ton as well; I wanted to really hone in on a good set of settings before updating, but yeah, I've been getting 0.7 to work as a really nice sweet spot lately: 90% of the motion quality, still good speed at 9 steps. I'm playing with the clown sampler to find the best one for the job (injecting just a little noise through the process gives it even more movement back). I'll try to update soon-ish.
Also, most people use loras that have actions trained in, which in those cases don't really suffer from loss of motion quality / prompt following, since the lora contains that. I'm testing complicated prompts / motions without lora assistance to find a good set of settings.
0.6-0.7 is also my sweet spot.
At 0.3 I get only lora-induced movements.
@Ajaxdiffusion What does lora-induced movement mean?
Fantastic work!
Do you think it would be possible to somehow merge this LoRA into the Wan base model?
Or am I totally oversimplifying things here? 😄
The lora was extracted from a base model.
https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid
Merging it back would just reverse the extraction, losing a bit of quality in the process, since the lora is only rank 32.
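For the curious, a rough sketch of what that extraction looks like, assuming the usual truncated-SVD approach (the shapes and helper name here are illustrative):

```python
# Sketch of rank-limited lora extraction: keep only the top-k singular
# components of the weight delta. Whatever falls outside rank 32 is the
# quality you can't get back by merging the lora into a base model.
import numpy as np

def extract_lora(w_base, w_tuned, rank=32):
    delta = w_tuned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    up = u[:, :rank] * s[:rank]   # "lora_up" factor
    down = vt[:rank, :]           # "lora_down" factor
    return up, down

rng = np.random.default_rng(0)
w_base = rng.normal(size=(128, 128))
w_tuned = w_base + rng.normal(scale=0.01, size=(128, 128))
up, down = extract_lora(w_base, w_tuned)
residual = np.linalg.norm((w_tuned - w_base) - up @ down)
print(residual)  # the part of the finetune the rank-32 lora drops
```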
Workflow for native too?
Just add this lora, that's all.
God, 100 sec for 65 frames at 960x544 on a 3060 lol (1.3B, 0.5 lora, 6 steps, 1 CFG). Native workflow.
I found that low CFG washes out liquid projectiles; the only way to reduce that is to increase CFG to 2, which doubles generation time (61 frames go from ~1:30 to ~3:30 with the parameters below; see the cost sketch after the list).
For me optimal parameters for 14B:
- lora weight = 0.3-0.5
- Steps = 8-10 (later I'm doing a second pass with a 1.3B model)
- Cfg = 2
- Sampler = uni_pc
- Scheduler = normal/simple (normal is, I think, slightly better)
P.S. Additionally, interesting schedulers for cfg = 1:
- beta: creative
- ddim_uniform: even more creative
(Not usable with high cfg)
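On the note above that raising CFG to 2 doubles generation time: every step at cfg > 1 needs both a conditional and an unconditional forward pass, while cfg 1 can skip the uncond pass. A minimal sketch of the arithmetic:

```python
# Why cfg > 1 roughly doubles generation time: each step runs a conditional
# and an unconditional model pass, while cfg == 1 skips the uncond pass.
def model_passes(steps: int, cfg: float) -> int:
    return steps * (2 if cfg > 1.0 else 1)

print(model_passes(10, 1.0))  # 10 passes
print(model_passes(10, 2.0))  # 20 passes, hence roughly 2x the wall time
```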
Try 0.7 weight, 1.0 cfg, 1 shift (this makes it blurry otherwise), 9 steps, euler, beta. This is the best combo I have found after messing with it for hours.
@Ada321 Yeah, it starts working better for me with your combo, but with 10 steps and 4 shift (otherwise liquid is still washed out). Hard to interpolate to another source scene; I need to test further. Anyway, thanks!
Hmm, I'm currently having an issue where I get flashes at the beginning of my generated videos, and I don't know how to remove that correctly.
video with embedded workflow here: https://files.catbox.moe/kkdzs0.mp4
Huh, looks like your newest example also has that "flashing" issue at the beginning of the video.
Remove TeaCache; gonna update it myself.
I occasionally run into more severe flickering; in my case it was caused by the CausVid lora weight being too high (>0.5). So the first possible fix is to reduce the CausVid lora weight below 0.5; alternatively, you can keep using a high weight but use the Get Image or Mask Range From Batch node from KJNodes to cut off the first few frames.
@Ada321 I'm actually not running TeaCache, only Sage Attention.
I think that happens when you choose a non-supported resolution on i2v. (anything other than 480x832 or 832x480)
The following is the best 3-step setup from my testing:
steps = 3
sampler = uni_pc
scheduler = kl_optimal
lora weight = 0.7
checkpoint = Vace_14b_Q4_K_S.gguf
Note: use KJNodes' Get Image or Mask Range From Batch to drop the first 10 frames (when lora weight > 0.5 the first few frames flicker). Even with 10 frames wasted, this setup is still plenty fast!
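For reference, the trim that node performs boils down to slicing the frame batch; a minimal torch equivalent with dummy shapes (assuming a frames-first layout):

```python
# Minimal equivalent of KJNodes' "Get Image or Mask Range From Batch" for
# dropping flickery lead-in frames: slice the batch along the frame axis.
import torch

frames = torch.rand(81, 64, 64, 3)  # dummy (frames, H, W, C) video batch
trimmed = frames[10:]               # drop the first 10 flickering frames
print(trimmed.shape)                # torch.Size([71, 64, 64, 3])
```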
3 steps did not work well for me but this works great at 15 steps
@Ada321 Maybe the key is using the VACE model? I just posted two videos made with VACE 14B + CausVid, with the full parameters included, so you can reference the quality and settings: https://civitai.com/posts/17273283, https://civitai.com/posts/17274126
Try the latest workflow I posted: 0.7 weight, uni_pc, kl_optimal, 1.0 cfg, 15 steps, 1 shift. The woman shooting a gun also contains the WF; it uses a custom node, though, and you can just reroute stuff if you don't want it.
@Ada321 Good, but slower than unipc simple, 12 steps, 5 shift on the other workflow, with roughly the same results.
Do I save your custom node as a .json file? I tried that and dropped it in custom_nodes but it doesn't detect anything. does it need to be in a subfolder in custom_nodes?
It needs to be a .py file
I did realize that, sorry haha. I'm still not sure where to save it in custom nodes though... ComfyUi can't seem to detect it.
@kurtast88942 How odd; maybe try adding a blank __init__.py file into the custom nodes folder.
That did it! Thanks so much! I appreciate all the work!!!
I'm testing the brand-new workflow and it works fine, but I'm seeing very slow speeds for having a mobile 5090. With Sage Attention enabled, it takes about 4.5 minutes to render 5 seconds of video using the workflow's default settings, other than me changing the resolution to 416x608 (half the image size). Without Sage Attention it's over 5 minutes. I know the mobile 5090 is quite a bit slower than the desktop version, but another workflow without causvid but with TeaCache runs at roughly the same speed, and to my eyes the quality looks better. Anyone else facing this? I feel like I'm missing something.
Upon googling it, the mobile 5090 seems about 25% slower than a 4090, which takes me a bit under 3 minutes to gen 15 steps at 640x640, 81 frames, with fp8 fast and torch compile, so that sounds about right.
@Ada321 Ok, that's good to know. I wonder why my teacache enabled workflow without causvid is about the same speed but with better quality. Thanks for your work here though. I've been learning a lot.
@poondoggle Even if you use the same number of steps, this should be ~2x as fast due to not needing CFG. And 15 steps is about half or less of what you normally need without it for decent-looking gens, so it's more like a 4x speedup. Compare at the same resolution / frame count.
So I really like this, I've been using it in T2V 14b and after a while I realized it is actually doing something that I can't figure out how to reduce.
So when you run TeaCache into SkipLayerGuidance and turn that skip up high, it creates a kind of stronger outline / contrast around things. Now I have the model loader running straight into the shift node with all of that bypassed, and this lora causes the exact same thing, except more severe. I've noticed it here too with people's generations. A lot of my videos also seem to put the subject on top of the environment, as in they feel like two separate things: it generates the background environment and then the person, and they don't really fit together, so I'm not sure how to remedy this. I've tried shift 1 / 3 / 5, cfg is 1 of course, different samplers, multiple generations at different step counts, but it seems to happen no matter what.
I just thought id report my experience if it may help you in some way. Still think its really cool!
It's also making a lot of my subjects stay still sometimes too. Strange.
Don't use teacache with it, I noticed a huge degradation.
@Ada321 Yeah, there's no teacache being used at all once I realized that. The problem is it also keeps generating the exact same person even with randomized seeds, due to cfg being 1; it just pulls from the dataset the lora was trained on. I think for t2v it might need some tweaking or a way around that.
I've been trying to troubleshoot this on my own, but I'm honestly out of ideas at this point.
I'm using a Vace14b + Wan T2V setup, following a workflow (strongly similar to this https://www.youtube.com/watch?v=3tu-sTY0k6M) that involves masking objects with SAM and using a reference image along with some VACE magic to replace those objects. While the overall workflow is pretty solid, and the causvid speed gains are incredible, I'm running into a recurring issue when using it. I sometimes need higher causvid strength to "help" in replacing the object; however, higher strength causes some issues.
Specifically, the video quality drops noticeably, to the point where faces, both in the background and sometimes even in the foreground, get heavily distorted. Details like teeth just become a white blur, and the overall video looks like it's been hit with a bad compression filter. I don't run into this problem when using something like 20 steps with DPM++; the non-masked part of the video looks exactly like the original. But any CausVid attempt, regardless of sampler or step count, consistently introduces these artifacts.
Has anyone else experienced this issue with CausVid? Is this just an inherent limitation of using it, or is there a workaround (maybe some scheduler/steps combination I didn't try yet) to preserve quality, especially in non-masked areas of the video?
There was no issue when I used this LoRA at 0.3, but when I use it at 0.5 to 0.75, the video shows strong contrast. I used the usual I2V workflow; could something be wrong?
Did you use my workflow?
@Ada321 Is this workflow: https://files.catbox.moe/rz55fd.json the one with the 2 samplers? I can't figure out how to link 2 samplers together.
Very nice lora! It even improved my own trained lora.
But for some reason, 1-2 frames will blur for a while.
I used your workflow and everything works well. However, when I try to add other loras, it seems they just don't work, no matter how I adjust the steps and lora weight. Is there anything I'm missing?
I've been using the I2V model: 3-4 steps, causvid 0.3-0.4, cfg 6-7, unipc simple, and then 2 steps with causvid 0.4-0.5, cfg 1. It preserves lora motion and sharpness.
kl_optimal doesn't work for me; I didn't use ModelSamplingSD3.
You're right. That two step workflow is the best I've seen so far.
Would you mind sharing your workflow?
@tywho The ComfyUI example Wan I2V workflow, and then two KSamplers, just like the SDXL refiner. You can check the ComfyUI SDXL refiner example to see how to use two samplers. I'm using default fp16 on a 3080 10GB; it can run up to 480p wide 5s or 720p wide 3s.
V2V like inpainting or controlnet doesn't require two samplers; just cfg 1.0 and 3-6 steps is sufficient.
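For anyone wiring that up, a sketch of how the refiner-style split maps onto two KSampler (Advanced) nodes; the field names follow ComfyUI's advanced sampler, but the step/cfg values are illustrative and this is settings data, not a runnable workflow:

```python
# Rough mapping of the two-sampler split onto "KSampler (Advanced)" settings.
# The first sampler runs the early steps at real CFG and hands its still-noisy
# latent to the second, which finishes at CFG 1 with the speed lora active.
first_pass = {
    "add_noise": "enable",
    "steps": 6, "cfg": 6.0,
    "start_at_step": 0, "end_at_step": 3,
    "return_with_leftover_noise": "enable",   # pass the partial latent onward
}
second_pass = {
    "add_noise": "disable",                   # latent is already noisy
    "steps": 6, "cfg": 1.0,
    "start_at_step": 3, "end_at_step": 6,
    "return_with_leftover_noise": "disable",  # fully denoise at the end
}
for name, settings in (("first", first_pass), ("second", second_pass)):
    print(name, settings)
```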
Can't find CausVidControl.
ComfyUI-WanVideoWrapper is already on nightly :/
You need to add the custom node mentioned in the "New edit" part of the description of this model. It's this one: https://files.catbox.moe/1ff7xc.py
@nrocka I didn't think that was it. Thanks.
In theory this is great; in practice, not so much. It kills the video part of the video lol. I'll keep trying it with different loras and settings, but I'm not impressed. I don't want "videos" with little to no motion. That's a picture.
Use it at a lower weight with CFG; then you can have the exact same motion with about half the steps, and the quality loss is near imperceptible. A 2-step workflow also fixes the issue with even fewer steps: do about 4 steps with full CFG at 0.3 weight or so, then do a 2nd pass at 0.5-0.7 denoise for another 4 steps at 1 CFG.
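On how that 2nd-pass denoise knob behaves: to my understanding of ComfyUI's KSampler, denoise < 1 stretches the sigma schedule and runs only its tail, which is why the first pass's motion and composition survive. A sketch of the arithmetic:

```python
# With denoise d and N steps, ComfyUI's KSampler (as I understand it) builds
# an int(N / d)-step schedule and executes only the last N steps of it, so
# the early, structure-setting steps are inherited from the first pass.
def second_pass_schedule(steps: int, denoise: float):
    total = int(steps / denoise)  # length of the stretched schedule
    kept = total - steps          # early steps inherited from pass 1
    return total, kept

print(second_pass_schedule(4, 0.6))  # (6, 2): 2 of 6 steps kept from pass 1
```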
@Ada321 Can you please share the workflow for us people who aren't good at comfy? It's just a drag and drop for you, but a lot more work for us. Thanks
@stringieee968 It's the workflow link; try 0.6-0.8 denoise on the 2nd sampler.
Is there any 2-sampler workflow example with the wrapper, not native nodes?
Just add a second one yourself. Connect it exactly like the existing one, but connect the latent_image of the second one to the LATENT output of the first one.
@marqs89 I can't get this to work, and I've tried the advanced sampler. Loads of burn-in or corruption.
Am I crazy or does the workflow below contain only one sampler instead of 2 like the text above indicates?
you are not crazy, it does only contain one
Just add a second one yourself. Connect it exactly like the existing one, but connect the latent_image of the second one to the LATENT output of the first one.
@marqs89 Could you send me the workflow? I can't figure it out, sorry.
@Yourmomd https://pastebin.com/iG6jLdD4
@funscripter627 The link below is incorrect (outdated), use this https://files.catbox.moe/rz55fd.json
What a mess. How about a workflow without all the Python coding nonsense that doesn't work anyway?
How about the official workflow + this lora?
"Python mess"? You literally just drop the file into a folder. But you don't need the node; it's just there to put all the settings in one place instead of having to change image size / seed / steps in several nodes every time. I updated the posted workflow back to Kijai's, which has custom nodes for that purpose, if you don't want to do that.
Files
Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors