Go here for Wan2.2 WF ≈➤🍥 Wan 2.2 (GGUF) [i2v / FFLF] + [t2v] Workflow
Thanks to @definitelynotadog for the Dual KSampler Workflow.
Thanks to @Ada321 for uploading Self-Forcing(Lightx2v) Lora here on CivitAI.
*25/7 Updated 🟨 Dual Sampler Workflow to V3 (updated this in case anyone wants to try it; it's slower and slightly more complex - nothing changed in the base structure)
*24/7 Added 🟩 Single Sampler Workflow V3
Added Video Preview for all WF
Added Post Processing Section for Expanded & Compacted WF
Shifted some inputs to Post Processing Section for Expanded & Compacted WF
Seed node in Expanded & Compacted WF is set to "New Fixed Random" by default.
Swapped VRAM Clean-Up to the Easy-Use custom node's Clean VRAM
Added icons, adjusted some visuals, and edited/added some notes
Minor visual adjustment in Interpolation+Upscaler WF
Post Processing Section (Expanded & Compacted):
When using "New Fixed Random" in the seed node, after a video is generated, we can change/edit inputs or choose another selections of option and generate again while skipping Sampling process/steps.
Example: Let's say you generated a video by clicking on the ComfyUI RUN button and you forgot to set the Interpolation Multiplier or choose Interpolate + Upscale Options. Once the video have finished generating, you can change and set the Interpolation Multiplier and/or select Interpolate + Upscale option and click ComfyUI RUN button again which will skip the Sampling process/steps because you are using the same seed number and Not change any inputs that is not in the Post Processing Section.
To generate a new seeded video, click on "New Fixed Random" in the seed node and click on the ComfyUI RUN button.
This method also speed up drafting for videos, so you don't waste time interpolating or upscale the video you might not want.
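If you are wondering why the second run can skip sampling: ComfyUI caches each node's output and only re-executes nodes whose inputs changed. A minimal sketch of that idea in plain Python (illustration only, not ComfyUI's actual code):

cache = {}

def run_node(name, **inputs):
    key = (name, tuple(sorted(inputs.items())))
    if key not in cache:                 # an input changed -> recompute
        print(f"running {name}...")      # stand-in for the expensive work
        cache[key] = f"{name} output for {inputs}"
    return cache[key]

latent = run_node("KSampler", seed=42, steps=4)         # slow on the first run
video = run_node("Interpolate", latent=latent, mult=4)
# Re-run with the same fixed seed: the KSampler is a cache hit, so only the
# changed post-processing node (new multiplier) actually executes.
latent = run_node("KSampler", seed=42, steps=4)
video = run_node("Interpolate", latent=latent, mult=2)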
Small written guide in WF.
17/7 - Added 🟩 Single Sampler Workflows V2.1 to use with the new Self-Forcing(Lightx2v) Lora.
The Single Sampler download contains the following WF:
Expanded (Shows all connections - mainly for learning + exploration)
Compacted (Only show essentials and hides everything else)
Simplified ("Standardized" WF for those who prefer to have more control over inputs)
Interpolation + 2x Upscaler (useful when you want to generate a lot of videos until you get the desired one, and interpolate + upscale later)
Joiner/Merger (Joins 2 videos together)
This Single Sampler workflow includes:
GGUF Loaders for Diffuser Model + Clip (Best to use GGUF Model + Clip together)
Block Swap for memory management
Sage Attention (speeds up generation time) - I forgot how I managed to install this.
Torch Compile (speeds up generation time) - enable only when your system supports it.
NAG (Normalised Attention Guidance) - Adjusts negative prompt influence intelligently when using CFG 1.
Stack Lora Loader
Scale by Width for Image Dimension Adjustment
Video Speed Control
Frame Interpolation for Smoothing
Auto calculation of frame count and frame rate from the video length, speed and interpolation multiplier inputs (see the sketch after this list)
Save Last Frame (For sequencing video)
Color Match (useful for sequencing videos or keeping color uniform throughout the video)
VRAM - Clean-up
Upscaler (up to 2x)
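For reference, here is roughly what that auto-calculation works out to, in plain Python (my own sketch of the idea, not the actual node math; the 16 fps base rate and the 4n+1 frame-count constraint are Wan 2.1 conventions I'm assuming):

BASE_FPS = 16  # Wan 2.1's native generation rate (assumed)

def auto_calc(length_sec, speed=1.0, interp_mult=1):
    # Wan expects frame counts of the form 4n+1, e.g. 81 frames for ~5 s at 16 fps
    frames = 4 * round(length_sec * BASE_FPS * speed / 4) + 1
    # interpolation multiplies playback fps for smoothness, not the length
    out_fps = BASE_FPS * speed * interp_mult
    return frames, out_fps

print(auto_calc(5))          # (81, 16)  plain 5 s clip
print(auto_calc(5, 1.0, 4))  # (81, 64)  4x interpolation -> 64 fps playback
print(auto_calc(5, 1.5))     # (121, 24) 1.5x speed: 24 fps playback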
5min 30sec generation time on my 3090 Ti 24 GB:
720x960 Image - 4 steps - 81 frames (5secs) - 4x Interpolation - Upscaler - GGUF - Torch Compile - Sage Attention
Videos posted above are without speed adjustment.
Includes Embedded Workflow. (Download the vid, drag it into ComfyUI.)
Links to models/lora files in workflow.
Always download the files from this page as there might be minor updates.
(use Videos posted for settings examples)
I only tested with a few loras and they seem good. Try alternative loras if the ones you use are not working to your liking, and you can always fall back on the Dual Sampler if needed (it is still functional but takes longer than the single). You can also try lowering the Self-Forcing (lightx2v) strength.
🟨 Dual Samplers Section:
*V2.1 Minor Update.
Switched VAE Decode (Tiled) to the normal VAE Decode; the tiled version was causing "sudden flashing" when videos are longer than 5 seconds or 81 frames.
Updated the CausVid v2 link in all WF, or get it Here. (thx @01hessiangranola851)
*If you are already using the WF:
If you experience flashing/sudden brightness when generating videos longer than 5 sec, try setting the temporal_size to 64 in the VAE Decode (Tiled) node, or switch to the normal "untiled" VAE Decode.
If the first few frames are greyed out, reduce the CausVid strength or just set it to 0.3.
*V2 Update
GGUF Loaders for GGUF Diffusion Model & Clip with download links. (Choose either Native or GGUF. Disable/delete the ones you are not using.)
"Fixed" Torch Compile and add fp16 accumulation options.
Color Match (useful for sequencing videos or keeping color uniform throughout the video).
Template for External Video Merger/Joiner. (in expanded version)
CausVid Strength ( Range: 0.3 - 1 )
Minor visual and notes adjustments.
The i2v workflow is built with these intentions in mind:
Built for use with the Self-Forcing (lightx2v) Lora (but not limited to it)
For learning and exploration
Experimental purposes
Modular Sections (add, build upon, swap or extract part of the workflow)
Exploded View - to see all connections (In expanded version)
This Dual Sampler workflow includes:
Block Swap for memory management
Sage Attention (switch on if you have it installed)
Torch Compile
Stack Lora Loader
Scale by Width for Image Dimension Adjustment
Video Speed Control
Frame Interpolation for Smoothing
Auto calculation of frame count and frame rate from the video length, speed and interpolation multiplier inputs
Save Last Frame
Previews for both First KSampler Latent and End Video Images
Dual Sampling using 2 KSamplers
VRAM - Clean-up
Upscaler (up to 2x)
Template for external Frame Interpolation (in expanded version).
Models to use:
The workflow can use either the wan2.1 14B 480p or 720p (i2v) model.
The 720p model and higher-resolution images are recommended, as they give better quality, especially for eyes and teeth during motion.
Examples:
5 first steps / 3 last steps / 81 frames
480p model - 480x640 Image
480p model - 720x960 Image
720p model - 480x640 Image
720p model - 720x960 Image
Generation speed on my 3090 Ti 24GB with Sage Attention, no GGUF - 5 first steps & 3 last steps (8 total steps), 81 frames and 4x frame interpolation multiplier:
720 x 960 Image: ~750 secs (12-13 mins est.)
480 x 640 Image: ~350 secs (5-6 mins est.)
Note:
Some Loras may distort faces or the character. Either reduce the lora strength or use an alternative lora.
Sometimes you may need to generate a few times to get a seed with better motion. Be patient.
I did not test every lora, so you will need to test and figure it out yourself.
(480p/720p Model, Image Dimension, Lora Strength, Start CFG)
*If you find the other loras you are using with this workflow too aggressive (too much motion, color change, sudden exposure), lower the Start CFG. Alternate between 3 & 5 to see which is better.
Some videos I posted above used a lower CFG because the other loras are too aggressive at high CFG levels.
Drafting for motion with other Loras:
Use smaller image dimensions for faster generation to see if the lora you use has any motion.
Once satisfied with the lora and prompt, proceed with your desired image dimensions.
Other tips:
You can also clean up distortion/blur by using another V2V workflow.
Like this one:
https://civarchive.com/models/1714513/video-upscale-or-enhancer-using-wan-fusionx-ingredients
and/or use face swap to clear face distortions.
🟨Dual KSampler
Recommended Steps:
5 start steps / 3 end steps (what I used most for testing)
4 start steps / 3 end steps
The old T2V Self-Forcing (lightx2v) may sometimes hinder, slow down, or produce less motion with some loras.
To get more motion with some loras, you need a CFG level higher than 1, but when using Self-Forcing (lightx2v) you need to set the CFG to 1.
This is where the Dual KSampler is utilized.
The 1st KSampler uses a high CFG level of 3-5 to create a better "motion latent", along with the CausVid Lora to increase motion with fewer steps.
The 2nd KSampler uses a low CFG of 1 to finish the video in 3 steps using the Self-Forcing (lightx2v) lora for fast generation. A higher step count would let the Self-Forcing (lightx2v) lora influence the video more and reduce motion again.
To pass the "motion latent info" to the 2nd sampler, the 1st step count has to be more than half of the total steps.
Examples:
5 first steps / 3 last steps / 8 total steps
4 first steps / 3 last steps / 7 total steps
When it is configured this way, you can see the image start to form in the latent preview:
That is when it can be passed to the 2nd KSampler with the "motion latent info" to finish it off without heavily influencing it in the remaining low steps.
(If the 1st step count is half or less than half of the total steps, you will see a very noisy image that does not resemble anything.)
Basically, a 7-8 step generation is split across 2 KSamplers.
The 2nd KSampler continues the generation process from where the 1st KSampler left off.
(Using 2 normal samplers will not produce the same results, as the 2nd sampler will not know at which step to continue from. It will take the product of the 1st KSampler, ignore what it has produced, and start from step 0.)
With the initial KSampler generating at 3-5 CFG, it's slower. But the trade-off is more motion when using it with other loras. Compared to a 20-30 step generation with no CausVid or Self-Forcing (lightx2v), it's way faster.
Unfortunately, a KSampler with start/end steps is only available for Native and not the WanWrapper.
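A conceptual sketch of the handoff, in plain Python (illustration only, not real ComfyUI code; in ComfyUI this corresponds to two KSampler (Advanced) nodes sharing one total step count, the first returning leftover noise and the second adding none):

TOTAL_STEPS = 8
FIRST_STEPS = 5  # must be more than half of TOTAL_STEPS to form the motion latent
assert FIRST_STEPS > TOTAL_STEPS / 2, "handoff latent would still be pure noise"

def denoise(latent, start, end, cfg, lora):
    for step in range(start, end):  # stand-in for the real diffusion loop
        latent = f"{latent} -> {lora}@cfg{cfg} step {step}"
    return latent

# Stage 1: high CFG + CausVid builds the "motion latent" (steps 0-4)
motion_latent = denoise("noise", 0, FIRST_STEPS, cfg=5, lora="CausVid")
# Stage 2: CFG 1 + Self-Forcing (lightx2v) finishes fast (steps 5-7).
# It resumes at step 5 instead of restarting at step 0, which is exactly
# what two plain KSamplers cannot do.
final_latent = denoise(motion_latent, FIRST_STEPS, TOTAL_STEPS, cfg=1, lora="Self-Forcing")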
Tooooo many GET and SET nodes....
!!! Only available when ComfyUI-Easy-Use custom nodes are installed.
You can utilize the Nodes Map Search (Shift+m) function.
In your ComfyUI interface panel (usually on the left), look for an icon with 1 small square on top and 3 small squares below it. It's called the Nodes Map.

Let's say you see a "Set_FrameNum" node and you want to know where the matching "Get_FrameNum" is.
Enter in the search bar:
Get_FrameN... --! Case sensitive !--
And you will see the list filtered down.
Double-click on the result and it will bring you to the node.
The same works in reverse: from a "Get_FrameNum" node, search for "Set_FrameNum" (again case sensitive), then double-click the filtered result.
Custom Nodes
ComfyUI-Custom-Scripts
rgthree-comfy
ComfyUI-KJNodes
ComfyUI-Frame-Interpolation
ComfyUI-mxToolkit
ComfyUI-MemoryCleanup
ComfyUI-wanBlockswap
MediaMixer
ComfyUI-Easy-Use (Install manually in your Custom Node Manager)
After notes:
You may build upon, use part of, edit, merge and publish without crediting me.
The reason why I don't use GGUF is because it keeps bricking my ComfyUI every time I try installing it.
I do not have more in-depth level of understanding beyond this point.
Comments (52)
the new v3 single sampling is incredible; it seems faster, but mostly prompt adherence is through the roof. v2 dual sampling camera motions were, well, pathetic; here it's just another level. congrats for this.
All the work is done via LightX2V- the vastly improved go-faster LoRA (or the model with the same functionality baked in). Previous go-faster methods were hit and miss on motion, prompt adherence and ability to work with other LoRAs.
Anyway, hate to burst your bubble, but Wan2.2 is almost here, so the entire fine-tuning cycle is about to begin all over again. Except this is good news, for it will be a matter of one step back and then soon two steps forward.
Lol, if you like this stuff the next big thing is always around the corner, but that doesn't change that nowadays this is the best wan workflow imho.
i'm confused, why is there no indication on where to paste those files?
By files, what do you mean? Model, clip, lora?
Lannfield How do you choose models? Is there any instruction at all? All the inputs are converted into some kind of inputs.
I am also confused about the embedded workflow contained within the images or videos. I would really like to try these things out without having to build them from scratch. I tried dragging an image from a different workflow of yours, and then dragging a video from this workflow's zipped folder into ComfyUI to load the workflows (as opposed to just being given the .json file), but my version of Comfy does nothing but cross out the image or video in transit, while being dragged, and does not allow it to be loaded into anything or dropped anywhere. Since I am new to this and the common Windows drag-and-drop procedures seem to be blocked everywhere, I am at a loss as to how this works. I am also using the Windows-installed ComfyUI, as that is how it was presented to me on their site. So I'm not sure if the Git version has different features or abilities that don't translate over, but the Windows version seems limited to Nvidia, late to get updates, and might have security features that are overkill, who knows.
I have not yet been able to get any of your recent workflows to produce anything, or even run.
However, both recent workflows from definitelynotadog work flawlessly. Also, in terms of "compacted" and "simplified", his are. These... are far from either of those words.
Simplified, or compacted, or both, would be: drop an image in it and run it, like his, and get great results with zero headaches.
I've tried to figure this mess out to give it a fair chance before making this comment, but nothing works, so here I am.
did you get an error, or can you not generate again?
i recently set the seed node option to "New Fixed Random".
To generate a new seeded video, click on "New Fixed Random" in the seed node and click on the ComfyUI RUN button.
If you prefer the traditional just-click-run-and-generate, set it to "Randomize Each Time" in the seed node.
sorry, i probably should have worded those wf a little differently. Compacted and Simplified wf are derived from Expanded. Compacted just hides everything and only shows the essential stuff, with less dragging around the wf (which i personally prefer to use), while the "Simplified" wf strips away a lot of stuff to make it more of a "raw" wf for those who prefer to have more control over most inputs; it does not mean a simple workflow.
Lannfield it is basically 100% inoperable. Both of these recent dual sampler and single sampler workflows.
I know my hardware isn't an issue because I run both of definitelynotadog's workflows frequently in their stock form with BF16 Wan.
I've studied yours to try and find something I'm missing, without changing anything, and they just don't run. They get to the KSampler and it presents a memory issue.
And that's only trying to make a 5 second long, 848x560 video, from 3648x2496 images.
Not finding the magic button to solve the issue anywhere. His workflows are the prime definition of simple, no hidden magic buttons like these. I just don't get it.
jb8892 if hardware is not a prob, the only thing i can think of is removing/disabling the blockswap node. not sure if you are using gguf; i got stuck at the ksampler if i don't use gguf model + gguf clip together. the last thing i can think of is updating comfyui and the custom nodes.
Also to clarify, i did not say this is a simple workflow.
Lannfield I can try that; if it works and I get good results, I would recommend making that its own separate workflow, because the majority of people don't have high-end PCs anyway. Not when 5090's still cost over 2,700 USD.
I will say though: upload the workflow file into Grok and tell it to optimize it for use with (insert your hardware specs), and seconds later it will have a JSON for you to use and test.
jb8892 if it doesn’t work then I have no clue. Sorry about that.
@jb8892 Does definitelynotadog's workflow auto-scale that 4k image down before running it? I know if I use any image larger than 1024x1024 on just about anything, it will crash from running out of memory in seconds, but I learned early on to add image downscaling inline before the image gets taken in, or to rescale it myself; otherwise that is the image size it's trying to render the whole time, until it gets to some node that does scaling as a built-in feature. At least that is how it seemed to me, so I'm curious.
@FemBro yes, his is literally one click and done in a couple of minutes. All of my videos are made with that workflow. Maybe not commercial-grade quality, BUT... good enough to be enjoyed. The images I feed into it are always around 4k, and it downscales them automatically.
I was getting fuzzy, blurry results with the single sampler until I changed the KSampler to Euler and Beta. I'm not sure what was causing that. It wasn't any LoRAs.
Great info! One person had told me the same thing; I ran the same setup and prompt as his, but his was blurry. So I guess this could be a solution. Thanks for finding this out.
hey guys!!! look at this https://blog.comfy.org/p/wan22-day-0-support-in-comfyui WAN2.2 WAS JUST RELEASED, now what?
I've gotten some pretty exciting results with the WAN 2.2 workflow i was able to cobble together. Definitely hoping this workflow sees a 2.2 variant.
sharp3 probably not publishing a wf, passing the torch. too new, too much work, too many questions i may not be able to answer.
i do recommend any workflow that uses Kijai's WanWrapper, so you don't need to duplicate 2 copies of the lora stack/loaders for the 2 high/low model connections in the future when more compatible loras come out.
Lannfield do you think we can convert your existing wan 2.1 workflow to wan 2.2? just add some support for the high/low noise samplers and leave the rest as is?
because your workflow is very, very good, with a lot of features and speed and quality... i just can't switch to wan 2.2 as long as your wf exists 😂
vAnN47 yes you can, but it will require a bit of work, and i hope it's not too complicated to do. i will write out the instructions and post them here later.
Lannfield That is very much appreciated, thanks Lannfield - you rock
vAnN47 or anyone interested in trying. I can't guarantee quality here.
Use Dual Sampling (Expanded) workflow:
GGUF models link.
*Hold ctrl+left click and drag to select nodes together
Part 1:
1. In "ModelRoute 3" blue Box (bottom left), copy the top "Shift"(Purple) node and its "Get_Shift"(Green).
2. Paste it at the end of "ModelRoute 2". Connect them between "modelPass"(Grey) and "Set_Model 2"(Blue) (you need to expand them to see the connectors).
*modelpass->Shift(with get_shift)->Set_Model 2.
3. Delete everything in "ModelRoute 3" and its blue box.
Part 2:
1. In the "Loaders" blue box, copy Model Loaders(native or gguf, which ever you're using), Block Swap, modelpass, Set_Model - which should be connected. (you can delete Block swap or disable it (ctrl+b) if you are not using)
2. Paste them some where that doesn't overlap any boxes.
3. You can set up your low noise model with this copy while the original set it to high noise model.
*note that you will need to input the same number of blocks in both BlockSwap nodes.
Part 3:
1. Copy everything in "ModelRoute2" and paste it somewhere that doesn't overlap any boxes. (Make sure you copy everything.)
2. Expand the "Get_Model" on the top of this copy and select "Model_0".
3. You can delete the "Get_Clip" node in this copy.
4. You might have to manually enable torch compile in this copy if you are using it. Select all 3 torch compile nodes (black) in this copy, then "ctrl+b" to enable them if they are in purple.
5. Remember to set Sage Attention in both of these setups if you are using it.
Part 4:
1. Go to the "Dual Sampling" box (top, middle ).
2. Expand the "Get_Model3Caus"(Blue) node which is on the top left of the 1st kSampler and select "Model 2".
3. Expand the "Get_Model3SF"(Blue) node which is on the top left of the 2nd kSampler and select "Model 2_0".
Remember to set the "Start CFG" to 1 in the "Shift/CFG ..." blue box (right of "Input Image") if you are using the Self-Forcing(Lightx2v) lora.
Set Start Step 3 and End Step 3 as a default, which equates to 6 total steps.
You can change the "sampler_name" in the 1st KSampler to "lcm" or any other.
Without Self-Forcing(Lightx2v) lora(Dev default): CFG 3.5 for Start CFG & End CFG. 10 start steps/ 10 end steps. sampler euler/simple for both Ksamplers.
You can delete "CausVid Strength" node and its "Set_CausVidStr" in the "Shift/CFG ..." blue box (right of "Input Image").
Clip Vision is not needed for wan2.2. While it's connected, it also affects nothing.
If you want to get rid of them:
In the first model loader box, delete "Load Clip Vision" and its "Set_Clip V"
In the "Middle Setup Segment" blue box(top, left), delete "Get_Clip_V" & "Clip Vision Encode" (Sunflower) that is connected to "WanImageToVideo".
Lastly, you will need to figure out the start/end steps, speed lora, samplers & scheduler on your own, as I'm still experimenting and unable to advise at this moment.
And 1 more thing to note: Wan2.2 sometimes seems to generate at a default 24fps, making it look slow. To counteract this, you can set the speed to 1.5, which is exactly 24fps, but it will reduce the video length. 7.5 sec at 16fps is approximately 5 sec at 24fps (1.5 speed).
And default the shift to 6, range 4-8 still. Forgot about this.
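the arithmetic behind that speed note, if you want to check it (plain Python, my own worked example):

frames = 121                 # a ~7.5 s clip at the 16 fps base rate
print(frames / 16)           # 7.5625 -> ~7.5 s at 16 fps
print(frames / (16 * 1.5))   # ~5.04  -> ~5 s at 24 fps (speed 1.5)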
Lannfield Thank you so much for the detailed instructions! they were very clear. only one issue: the toggle button broke because of duplication, but i think i'll figure out how to set the toggle for both copies of the nodes :) soon i'll upload some generations from your workflow! also noted that you put in a video combiner + upscale (so you can create batches on the main workflow). you truly know your craft, man. thank you so much. also i had the best luck and good output with lcm+simple; tried dpmpp_2m+sgm_uniform but got weird nipples lol. anyway, if you could make what you wrote here in the comments official, it would be very appreciated because it will attract a lot of users in the long run. this is a really different level: even the raw output looks so smooth, 0 eye artifacts, 0 gray noise in the first frames... this is a different league man... thank you again, hope to see more content from you! :)
vAnN47 you're welcome. no, i won't be publishing more content related to wf; it's hindering my creative output. you can also try 4/4 steps; there are some differences vs 3/3 steps. lcm/simple works well for me too. i'm still experiencing bad eyes when motion is too fast, and the video seems to try to revert back to the original input image like a loop. you can also try setting lightx2v's lora strength to 2-3 in the high noise model for more motion with other loras. there is also: 3 start steps / 7 end steps, 3.5 CFG in the 1st sampler, no lightx2v lora in the high noise lora stack, while the low noise lightx2v lora strength is at 1. this one gives a lot more motion. but the best is to wait for the lightx2v team to make the 2.2 version.
Lannfield You wrote such a detailed instruction, please, if you have time, make a "beta" version, you have the best workflow I've seen 😻
sergeynikonov943 hey thx, it's not that i don't have the time, but it's very time consuming as things move very fast and i have to do constant testing and updating. it sort of limits my time to explore more things on my own. if anyone wants to make and publish that workflow i would not mind either, even without crediting me :). but for now, i'm sorry, i won't be making one.
i love your dual sampler wf, just wondering, what is it doing differently than the single sampler? also, are you planning on releasing a version for 2.2?
the dual wf was made during a time when there wasn't a proper self-forcing accelerated lora for i2v; it was used to create more motion where the single could not produce it properly with that lora. with the new self-forcing lora, you don't really need a dual sampler, but the dual wf still remains functional, just slightly more complex to use.
unfortunately, i won't be publishing a 2.2 wf, passing the torch. too new, too much work, too many questions i may not be able to answer.
Lannfield This saddens me! As a new comfy user, you have by far the best workflow for WAN! :(
Lannfield thanks for the response! sorry to hear you wont be tackling 2.2, maybe youll change your mind in the future when some time has passed, your workflow is the best ive seen so far for i2v wan
Hey great workflow, thanks a lot!
I do have a big issue though: there seems to be a memory leak somewhere in system RAM (NOT VRAM). I have 64GB and I see that after each generation it gets more and more full, up to the point of crashing the system (Ubuntu 24 in WSL2, Win11).
I think it is linked with something in post processing, maybe RIFE? Not sure, I have not found the source of the problem.
One way to handle that is to use the "flush everything" feature in comfyui (unload models/node cache from system RAM), but it's not a good solution because you have to handle it manually and babysit the generations.
Any ideas?
you can try downloading Comfyui-Memory_Cleanup in the custom node manager, use the node "🎈RAM-Cleanup" and connect it to the top "🗑️Clean VRAM" output (🗑️Clean VRAM output ---> anything -> 🎈RAM-Cleanup). You don't have to connect anything to the 🎈RAM-Cleanup's output. see if it helps.
Lannfield thanks, it could do the trick indeed, even if not ideal. I thought it could be the previews or something, but it seems it's not. I'm going to try to find the culprit and use your suggestion as a temporary fix.
By the way, on an unrelated note, how would you handle automatic sequencing of last frames as inputs, say N times in a row (i2v -> last frame -> i2v etc., making an Nx7s sequenced output)? I'm new to Comfy, but I'm a dev, so I will surely find a way, but if you have any hints I'll take them!
SketchyTone yes you can. Within the Easy-Use custom node there are 2 nodes that allow you to do that, "For Loop Start" & "For Loop End"; then merge the videos on every loop within the same workflow. You might want to check out some workflows that use For Loop to extend a video to see how it is connected; there are quite a few workflows here on CivitAI. You can also use the "index switch" node to switch between different prompts during each loop. More advanced: you can also use "index switch" to switch between different loras during each loop.
personally i don't really like doing this because i prefer to choose the outcome of each individual segment.
I hope my workflow lays the groundwork for you to understand. as you are a dev, i think you will have a lot of fun making your own (see the sketch below for the basic idea). if you find some nodes with very little to no documentation, i always ask Gemini (though not very accurate at times lol), or you can use others like Grok or ChatGPT.
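roughly, the idea in plain Python (just a sketch of the loop; generate_i2v is a hypothetical stand-in for one workflow run, not a real ComfyUI API):

def chain_segments(first_image, prompts, n, generate_i2v):
    # i2v -> last frame -> i2v, repeated n times, then merged in order
    clips, image = [], first_image
    for i in range(n):
        frames = generate_i2v(image, prompts[i % len(prompts)])  # one ~5 s clip
        clips.append(frames)
        image = frames[-1]  # the saved last frame seeds the next segment
    return [f for clip in clips for f in clip]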
finally found someone who has the same problem as mine. looks like it's not about this wf; I tried using other wf too. if I connect upscale + interpolate together, it will stack system memory usage without releasing it until I get an OOM. using ram clean might temporarily show the ram falling back, but after a few runs it will still OOM. this only happens on my rtx5xxx system; on my rtx4xxx system it won't happen. I think it's because of different cuda/python versions. for me the temporary solution is to only upscale the video, and do the interpolation separately.
jackietop515100 if you find a perfect solution let me know. by the way, my gpu is an RTX 3090. and indeed this issue has been happening with other workflows too, and even outside comfyui (wangp). Intel CPU, if that matters.
SketchyTone what is your PyTorch version? i think it's about the "python garbage not collecting" thing.
jackietop515100 Total VRAM 24576 MB, total RAM 60270 MB
pytorch version: 2.7.1+cu128
xformers version: 0.0.31.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using xformers attention (sage attention 2.2 afterward)
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0]
ComfyUI version: 0.3.49
ComfyUI frontend version: 1.24.4
@jackietop515100 I'm mostly upvoting because this is the first time seeing oom not be for mana, and it made me laugh =P.
@SketchyTone use my workflow instead if you have a chance to try it out. I'm still having issues with that Interpolate node, but this one adds another pass between upscale and interpolate. at least I don't need to babysit it until OOM.
@jackietop515100 thx, will take a look. I found a safe fix too: add a 128GB swap file (on WSL2 I previously had 0GB swap, so it would crash instantly). With the swap file it sometimes gets slow, but it never crashes. ALSO: I noticed there are fewer issues if you have a continuous task list. Problems arise when you stop completely, then start a new run.
Yay, I figured it out and can finally say I made something that looks good that I didn't have to pay some site to make for me. Thank you. I went to give you all my buzz, but I only had 2 apparently... I thought the 615 in the corner meant something. That must be pyrite coloring, so I only have fool's buzz.
Totally saved my life, because my PC is an RTX 4060 with 8GB VRAM and 16GB RAM and can't use wan2.2 at all.
I'm new to wan. I read everything and understood 0 of what this is exactly doing.
Just at the top of the page, write something like:
this workflow makes prompts adhere better, or gives better quality, faster generation, etc.
You made a technical explanation of everything and forgot the most important thing.
I'M GOING TO TEST this, it looks very interesting.
Hi Lannfield, just a big thank you for sharing the Interpolation_Upscaler, which works super well. I tried other workflows without success. Again, good job, you are one of my favorites :)
I'm new to ComfyUI and I genuinely have no idea how to do any of this. well, I know how to install and download stuff, and where to put things, but when it comes to nodes I genuinely have 0 idea. if someone knows a good guide out there, a video guide or documentation, i'd be very happy.