
    Daxamur's Wan 2.2 Workflows

    If you'd like to support me, check out my Patreon!
    DM to inquire about custom projects.


    -NEWS-

    Responses are delayed as I'm heads down working on getting my next release ready for you all - once released, responses will go back to normal!

    v1.2.1 Out Now! - Update to DaxNodes via ComfyUI manager required

    • FLF2V added with GGUF support - no new models required

• Fixed ability to independently disable / enable upscaling and interpolation

    • Dedicated resolution picker nodes, added auto-resizing functionality from v1.3.1 to I2V and FLF2V


    DaxNodes now available via ComfyUI Manager, no more git clone required!


    Current Tracked Bugs:

• KJNodes Get / Set nodes reporting a missing-node error for some users. If this happens, ensure you download the latest version of DaxNodes from ComfyUI Manager and re-import the workflow! - In progress


    If you see a "FileNotFoundError ([WinError 2] The system cannot find the file specified.)" from VideoSave or other video-related nodes, FFmpeg is missing or not in your system PATH.

    • Setup (Full Version Required):

    • Download the full FFmpeg build

    • Extract it to a stable location (e.g., C:\ffmpeg).

    • Add C:\ffmpeg\bin to your system PATH:

    • Open Edit the system environment variables -> Environment Variables....

    • Under System variables, select Path -> Edit....

    • Click New and add C:\ffmpeg\bin.

    • Save and exit.

    Restart ComfyUI (and your terminal/command prompt).

    After this, everything should work!
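To sanity-check the PATH change, this generic snippet (not part of the workflow) shows whether Python can see FFmpeg:

import shutil
import subprocess

path = shutil.which("ffmpeg")   # None means ffmpeg is still not on PATH
print("ffmpeg found at:", path)
if path:
    subprocess.run(["ffmpeg", "-version"], check=True)   # prints the build info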


    v1.3.1 Features

    Segment-Based Prompting

• Persistent Positive Prompt: Keeps consistent details across the entire video (e.g. “A woman with green eyes and brown hair in her warmly lit bedroom”).

• Segment Positive Prompts: Separated with +, one per segment (e.g. “She is writing in a journal + She closes the journal and stands up + She walks away”) - see the sketch after this list.

• Gives you far more control in long-form videos and helps reduce WAN’s tendency to render weird camera movements or judders at the I2V start.
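As a rough illustration of how the segment prompting combines (a minimal Python sketch - the exact joining the DaxNodes code uses is an assumption here):

persistent = "A woman with green eyes and brown hair in her warmly lit bedroom"
segments_raw = "She is writing in a journal + She closes the journal and stands up + She walks away"

# One positive prompt per segment: the persistent details plus that segment's action.
segment_prompts = [f"{persistent}, {seg.strip()}" for seg in segments_raw.split("+")]

for i, prompt in enumerate(segment_prompts, start=1):
    print(f"Segment {i}: {prompt}")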

    Endless-Style Looping

    • Segments can chain "infinitely" (I capped the node at 9999), creating effectively endless loops.

• The Video Execution ID manages overwrites and stitching - just increment the ID as you generate new sequences (sketched below).
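Conceptually, the execution ID just namespaces the temp segments so they can be collected and stitched in order. A simplified sketch (the <execution_id>_<index>.mp4 naming is inferred from the workflow's own log output, not confirmed from the node code):

from pathlib import Path

def collect_segments(output_dir: str, execution_id: int) -> list[Path]:
    """Gather the temp segments saved under one execution ID, in playback order."""
    seg_dir = Path(output_dir) / ".tmp" / str(execution_id)
    # Zero-padded indices mean a plain lexical sort is already playback order.
    return sorted(seg_dir.glob(f"{execution_id}_*.mp4"))

# Re-running with the same ID adds to (or overwrites in) the same folder;
# incrementing the ID starts a clean, independent sequence.
print(collect_segments("ComfyUI/output", 422856580461754))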

    Streaming RIFE VFI + Upscaling

• Tweaked RIFE VFI and upscaling now stream frames instead of holding entire sequences in VRAM/RAM (the pattern is sketched below).

    • Allows much longer videos, smoother interpolation, and sharper upscales without OOM errors.
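The streaming pattern, reduced to a toy Python sketch (structure only - the real nodes run RIFE and an upscale model where the stand-in blend is): frames flow through as a generator, so peak memory is a frame or two rather than the whole clip.

def blend(a, b):
    # Stand-in for the RIFE midpoint prediction.
    return [(x + y) / 2 for x, y in zip(a, b)]

def interpolate_stream(frames):
    """Yield each frame plus an interpolated midpoint between neighbors (2x FPS)."""
    prev = None
    for frame in frames:
        if prev is not None:
            yield blend(prev, frame)
        yield frame
        prev = frame

frames = ([float(i)] for i in range(4))   # stand-in for decoded frames
for out in interpolate_stream(frames):
    print(out)                            # encode/write immediately instead of accumulating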

    Face Detection & Drift Correction

• Intelligent Mediapipe face frame detection locks focus on characters (a sketch of the scan follows this list).

    • Drift correction ensures the final video runs at least as long as requested - but instead of cutting mid-generation, it will add full extra segments until the target framecount is met or exceeded.

    • This way, no generated frames are wasted, and you always end up with smooth, complete segments.

    • Fully toggleable, with adjustable frame look-back settings.
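A minimal sketch of the backward face scan (assuming the standard Mediapipe face-detection API; the workflow's real logic also checks for open eyes, which this toy version does not):

import mediapipe as mp

def best_face_frame(frames_rgb, max_lookback=16, min_conf=0.5):
    """Scan backwards from the last frame for one with a confidently detected face.
    frames_rgb: list of HxWx3 uint8 RGB arrays. Returns (frame_index, frames_lost)."""
    with mp.solutions.face_detection.FaceDetection(
            model_selection=1, min_detection_confidence=min_conf) as detector:
        last = len(frames_rgb) - 1
        for back in range(min(max_lookback, len(frames_rgb))):
            idx = last - back
            if detector.process(frames_rgb[idx]).detections:
                # 'back' frames are discarded; drift correction then generates
                # whole extra segments until the target frame count is met again.
                return idx, back
    return None, None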

    Resolution Handling

    • T2V: Standard WAN resolution presets with optional overrides.

• I2V: Input image scales to WAN-native resolutions, preserving aspect ratio; “Native” passthrough supported (snapping sketched below).
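The aspect-preserving snap can be pictured like this (a sketch only - the preset table below is illustrative, not the node's exact list):

def snap_to_preset(w, h, presets=((832, 480), (480, 832), (1280, 720), (720, 1280))):
    """Pick the WAN-native preset whose aspect ratio is closest to the input's."""
    ar = w / h
    return min(presets, key=lambda p: abs(p[0] / p[1] - ar))

print(snap_to_preset(1080, 1920))  # portrait input -> (720, 1280)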

    QoL & Management

    • Toggle upscaling/interpolation independently.

• Temp file output organized by execution ID - clear /output/.tmp/ periodically to save space (a small helper is sketched below).
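A small helper for that cleanup (generic Python, not part of the workflow - only run it when nothing is mid-generation, since in-progress segments live there too):

import shutil
from pathlib import Path

def clear_tmp(output_dir="ComfyUI/output"):
    """Delete the per-execution temp segment folders under output/.tmp/."""
    tmp = Path(output_dir) / ".tmp"
    if not tmp.exists():
        return
    for exec_dir in tmp.iterdir():
        shutil.rmtree(exec_dir, ignore_errors=True)

clear_tmp()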

    Looking Ahead

This workflow is still experimental; future versions will expand on segment control, smarter handling of motion/camera behavior, more adaptive face tracking, and even integration of audio/video for cinematic sequences. Big things are coming!


    Notes

    I've done my best to place most nodes that you'd want to configure at the lower portion of the flow (roughly) sequentially, while most of the operational / backend stuff sits at the top. Nodes have been labeled according to their function as clearly as possible.

Beyond that:

    • NAG Attention is in use, so it is recommended to leave the CFG set to 1.

• The sampler and scheduler are set to uni_pc // simple by default, as I find this is the best balance of speed and quality. (v1.1+ only) If you don't mind waiting (a lot, in my experience) longer for slightly better results, then I'd recommend res_3s // bong_tangent from the RES4LYF custom node.

• I have set the default number of steps to 8 (4 steps per sampler) as opposed to 4, as this is where I see the most significant quality / time tradeoff - but this is really up to your preference.

    • This flow will save finished videos to ComfyUI/output/WAN/<T2V|T2I|I2V>/ by default.

    I2V

• The custom node flow2-wan-video conflicts with the Wan image-to-video node and must be removed for the flow to work. I have found that this node does not get completely removed from the custom_nodes folder when removed via the ComfyUI manager, so it must be deleted manually.

    GGUF

• All models used with the GGUF versions of the flows are the same, with the exception of the base high and low noise models. You will need to determine which GGUF quant best fits your system, and then set the correct model in each respective Load WAN 2.2 GGUF node accordingly. As a rule of thumb, your GGUF model should ideally fit within your VRAM with a few GB to spare (see the quick check after this list).

    • The examples for the GGUF flows were created using the Q6_K quant of WAN 2.2 I2V and T2V.

• The WAN 2.2 GGUF quants tested with this flow come from the following locations on Hugging Face:
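For picking a quant, the rule of thumb above can be checked quickly in Python (the file name here is a made-up example - substitute your actual quant):

import os

def gguf_fits(gguf_path, vram_bytes, headroom_gb=2):
    """File size plus a few GB of headroom should not exceed total VRAM."""
    return os.path.getsize(gguf_path) + headroom_gb * 1024**3 <= vram_bytes

# e.g. on a 24GB card:
# print(gguf_fits("wan2.2_i2v_high_noise_Q6_K.gguf", 24 * 1024**3))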

    MMAUDIO

    • To set up MMAUDIO, you must download the MMAUDIO models below, create an "mmaudio" folder in your models directory (ComfyUI/models/mmaudio), and place every mmaudio model downloaded into this folder (even apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors).
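The folder creation itself is just (paths per the instruction above - adjust to your ComfyUI root):

from pathlib import Path

mmaudio_dir = Path("ComfyUI/models/mmaudio")
mmaudio_dir.mkdir(parents=True, exist_ok=True)
print("Place every downloaded MMAudio model here:", mmaudio_dir.resolve())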

    Block Swap Flows

    • Being discontinued as I have found that the native ComfyUI memory swapping conserves more memory and slows down the process less in my testing. If you receive OOM with the base v1.2 flows, I'd recommend trying out the GGUF versions!

    Triton and SageAttention Issues

    • The most frequent issues I see users encounter are related to the installation of Triton and SageAttention - and while I'm happy to help out as much as I can, I am but one man and can't always get to everyone in a reasonable time. Luckily, @CRAZYAI4U has pointed me to Stability Matrix which can auto-deploy ComfyUI and has a dedicated script for installing Triton and SageAttention.

    • You will first need to download Stability Matrix from their repository, and download ComfyUI via their hub. Once ComfyUI has been deployed via the hub, click the three horizontal dots to the top left of the ComfyUI instance's entry, select "Package Commands" and then "Install Triton and SageAttention". Once complete, you should be able to import the flow, install any missing dependencies via ComfyUI manager, drop in your models and start generating!

    • Will spin up a dedicated article with screenshots on this soon.

    Models Used

    T2V (Text to Video)

    I2V (Image to Video)

    MMAUDIO

    Non-Native Custom_Nodes Used

    Description

    • Major Update - Infinite Generation, Smart Best Frame Detection, etc...

      • Fixed metadata saving

      • Fixed duplicate segment saving

    FAQ

    Comments (120)

    ResistAiAug 23, 2025· 1 reaction

    HOLYCRAP! This is amazing Daxamur

    ResistAiAug 23, 2025· 1 reaction

    This workflow is so well-optimised that I could use Photoshop while creating a video!

    CRAZYAI4UAug 23, 2025· 1 reaction

    @ResistAi I agree it's not lagging my whole system anymore like in 1.2. Give this guy @Daxamur a medal

    Daxamur
    Author
    Aug 23, 2025· 1 reaction

    I'm really glad to hear you both like it!

    ZepesAug 23, 2025· 1 reaction

I2V v1.3 GGUF works fine, but it's a little slow -
making a 15-sec 720x1280 video took 36 min (4090).

I noticed two problems. When opening the workflow, ComfyUI gives a warning:

    "KJ Get/Set node input undefined.. mostly likely you're missing custom nodes"

After that I tested the I2V v1.3 workflow and changed the total video length to 5 sec (segment_length=5).
Oops, the output video is 15 sec long!!
After 5 sec the video is messed up - there are pieces from the first run at the end.

    NyaganoAug 23, 2025· 1 reaction

    I also get the message "KJ Get/Set node input undefined.. mostly likely you're missing custom nodes", but for some reason it runs.

    ResistAiAug 23, 2025· 1 reaction

    Errors are likely due to the nodes not being added to the ComfyUI manager yet

    Daxamur
    Author
    Aug 23, 2025· 2 reactions

    This can also occur if you happen to have custom nodes that conflict with the set and get nodes, happy to take a look if you're able to send me a list of what you've got installed!

    ResistAiAug 23, 2025· 1 reaction

    @Daxamur Like it.. I love it, its an optimised beast of a wf. Well done and thanks again!

    Daxamur
    Author
    Aug 24, 2025· 1 reaction

    @ResistAi Thank you as well, and anytime! Can't wait to see what everyone makes

    ResistAiAug 24, 2025

    @Daxamur Do the vids save with Civitai metadata for uploading pal?

    Daxamur
    Author
    Aug 24, 2025

@ResistAi They should be - I did throw that into the custom node quickly though, so I may have messed it up; I'll check!

    joyy114Aug 23, 2025· 4 reactions

    If you're getting a FileNotFoundError (like [WinError 2] The system cannot find the file specified.) from a VideoSave or other video-related node, it's most likely because FFmpeg is not installed or the system can't find it.

    Here’s a quick guide to fix it:

    Download FFmpeg: Get the latest version from the official website. Unzip the file and place it in a stable location, like C:\ffmpeg.

    Add FFmpeg to your system's Path variable:

    Open "Edit the system environment variables" from the Windows search bar.

    Click on "Environment Variables...".

    In the System variables section, find and select the existing Path variable.

    Click "Edit...".

    Click "New" and add the path to your FFmpeg bin folder (e.g., C:\ffmpeg\bin).

    Click "OK" to close all windows.

    Restart ComfyUI: It's crucial to completely close and re-open ComfyUI (and the command prompt/terminal you're using) so the changes take effect.

    Once you've done this, the video nodes should work perfectly. Hope this helps!

    CRAZYAI4UAug 23, 2025· 2 reactions

    FYI: you need the full version (https://www.gyan.dev/ffmpeg/builds/ffmpeg-git-full.7z) and I had to restart windows too to take effect.

    Daxamur
    Author
    Aug 23, 2025

    Thanks for the clarification both, I'll update the documentation to be more clear!

    Daxamur
    Author
    Aug 23, 2025· 1 reaction

    Updated the model info on the post accordingly, I'll also work on adding a better directed error message for that scenario - thanks again!

    cgsthrasher726Aug 23, 2025· 1 reaction

    with t2v 1.3 I get a reconnecting error when it gets to KSampler Low which is a shame, I really love your work.

    Daxamur
    Author
    Aug 24, 2025

    Yeah, v1.3 had a few issues that slipped by me initially, but are now resolved - just let me know if you still have any issues with the latest version and I'd be happy to take a look!

    Mastermike8800Aug 24, 2025· 2 reactions

    Every time I run the work flow it creates the first segment and then stops no matter how long I set the length to. Every video is 5 seconds long. What am I doing wrong?

    Daxamur
    Author
    Aug 24, 2025· 1 reaction

    This was actually a bug that got by me before upload due to some last minute changes, it has been resolved in the latest upload - just let me know if you continue to have any issues!

    Mastermike8800Aug 24, 2025· 1 reaction

    @Daxamur Lol now I am having the opposite issue. The workflow ran for five cycles with no end in sight when I had the segment length set to 5 and the total to 15. That should be 3 cycles right? I tried to make as few adjustments as possible to your workflow, just a couple loras and setting the correct path to the models as well as the git pull on your new nodes.

    Daxamur
    Author
    Aug 24, 2025

    @Mastermike8800 That's right, it should only execute for 3 cycles there! My first question would be, do you have the Intelligent Face Detection & Drift Correction toggled on?

    Mastermike8800Aug 24, 2025· 1 reaction

    @Daxamur yes I do

    Daxamur
    Author
    Aug 24, 2025· 2 reactions

    @Mastermike8800 Nice, that would probably be why! When that setting is on, it will always take your last segment and go backwards through each frame until it finds a good face with open eyes to continue the video from. When it finds it, the flow remembers how many frames back it was, and generates an additional segment to compensate for the lost frames until you meet or exceed your target time.

    So, if the best face frame is far back enough, you can create a loop where segments will get generated until you get lucky enough for it to end on a good frame.

    You can mitigate the chance of it looping like that by reducing the number of frames back it can search ("Frames to Check for Face (from Last Frame)") to a lower number, or disabling the setting entirely depending on your preference!

    Mastermike8800Aug 24, 2025· 1 reaction

    @Daxamur Alright I'll give that a try!

    rivverswimmer2572Aug 25, 2025

    I am using your I2V latest version and still getting the same issue with the workflow stopping after the first segment.

    favas644Sep 18, 2025· 1 reaction

@rivverswimmer2572 Check the terminal - in my case it was caused by mediapipe not being found. Mediapipe won't install along with DaxNodes as it's supposed to, due to incompatibility with the embedded Python (>3.12).

    Symbiot78Aug 24, 2025· 1 reaction

    looking good. looking real good.
    But ... am i missing where you put the image upload node? I cannot seem to spot it..

    Symbiot78Aug 24, 2025· 1 reaction

    oh.. I see there are 2 workflows to download. Sorry..

    Daxamur
    Author
    Aug 24, 2025

    No worries, just let me know if you have any issues!

    dirtysemAug 24, 2025· 1 reaction

    Unfortunately, it makes the first segment and stops. I also noticed this https://iimg.su/i/0RnHK0

    I have long wanted to get a workflow with a script, namely text in a video. Please help me figure it out.

    Daxamur
    Author
    Aug 24, 2025· 1 reaction

    It should be fixed now!

    favas644Sep 18, 2025

Did you solve it? I had a similar issue, and it turned out to be due to a failed mediapipe installation. Mediapipe won't install with Python >3.12, which is embedded into the Comfy virtual environment.

    Symbiot78Aug 24, 2025· 1 reaction

So I've now tested the I2V as well. It works perfectly for me (to my knowledge, at least).
I had to think a bit to understand the persistent / segment positive prompting - I didn't understand it from the description.
You've made a very fine WF.
I wish it would run faster - but I am guessing you've already optimized it. I've disabled interpolation and upscaling while testing.

    Symbiot78Aug 24, 2025· 1 reaction

lol. Not sure how I did it.. but before leaving the house I changed my prompt. I came back to a 1:07 video. I can say.. the drifting is quiiiiiiiiiiiite extravagant :-) :-)
    Funny enough.. I changed the prompt a bit and it ran normally.
    No extra segments or anything else in the prompt that should cause this.

    Daxamur
    Author
    Aug 24, 2025

    @Symbiot78 Yeah haha, until we get something like VACE 2.2 it's still kind of a lottery on the I2V portion - I would say just make sure you have the latest version as it has some continuation bug fixes!

    iPiKoAug 24, 2025· 3 reactions

Hey, when I open the workflow it tells me:
KJ Get/Set
node input undefined.. Most likely you're missing custom nodes
This means that there are many broken Get and Set nodes.

    Daxamur
    Author
    Aug 24, 2025

    Thanks for the report! Yeah it seems like a few people are getting this message - I've yet to isolate which missing node or custom node conflict it is coming from. Could you provide me your custom node list?

    iPiKoAug 24, 2025· 1 reaction

@Daxamur I have disabled all the nodes and only used the nodes in the workflow; at first I thought that was the issue, but it is not.

    Daxamur
    Author
    Aug 24, 2025

@iPiKo Got it, did you disable or fully uninstall - there are some instances where even having them in the custom_nodes folder at all can cause conflicts (you can temporarily move every custom node folder out to test). Otherwise, I would make sure your ComfyUI is fully up to date, and that you run a "git pull" from "ComfyUI/custom_nodes/DaxNodes" to ensure you have the latest version of the WF's nodes!

    rapidphase88622Aug 25, 2025· 1 reaction

@Daxamur I just deleted my entire custom_nodes folder, reinstalled just the nodes for this WF, and pulled the DaxNodes folder, and I am still getting "KJ Get/Set node input undefined.. Most likely you're missing custom nodes".

    Any other Diag I can do to help you pinpoint this let me know, thanks.

    Daxamur
    Author
    Aug 25, 2025

    @rapidphase88622 Absolutely, when this occurs do you see it complain about any specific node names or IDs in the terminal?

    rapidphase88622Aug 25, 2025

    @Daxamur

    Failed to validate prompt for output 1490:

    * String to Float 139:

    - Required input is missing: String

    Output will be ignored

    Failed to validate prompt for output 1454:

    Output will be ignored

    Failed to validate prompt for output 1322:

    Output will be ignored

    Failed to validate prompt for output 1637:

    Output will be ignored

    Failed to validate prompt for output 1500:

    Output will be ignored

    "Upscaled Video" is 1490, I have the prompts filled in with a few in segment positive separated by +

    iPiKoAug 27, 2025

@Daxamur I told Claude AI to write me a script that detects orphaned SetNodes and GetNodes; after deleting those nodes the alert messages dropped from 7 to 5, and this was the result for Daxamur_WAN22_T2V_131.json:
    ORPHAN NODES REPORT

    ==================================================

    ORPHAN SETNODES (Variables set but never retrieved):

    --------------------------------------------------

    Variable: CORRECTED_LASTFRAMES_SW

    Node ID: 1548, Title: Set_CORRECTED_LASTFRAMES_SW, Position: [7570, -3910]

    Variable: FINAL_BASE_VIDEO

    Node ID: 1606, Title: Set_FINAL_BASE_VIDEO, Position: [7570, -3130]

    Variable: FLOW_ADJUSTED_TRIM

    Node ID: 1205, Title: Set_FLOW_ADJUSTED_TRIM, Position: [2910, -3590]

    Variable: FLOW_PACKED_FILES

    Node ID: 582, Title: Set_FLOW_PACKED_FILES, Position: [2910, -3710]

    Variable: INDEX_START

    Node ID: 248, Title: Set_INDEX_START, Position: [1920, -3720]

    Variable: SEG_ADJUSTED_TRIM

    Node ID: 1203, Title: Set_SEG_ADJUSTED_TRIM, Position: [7230, -3630]

    Variable: SEG_INUSE_EXECID

    Node ID: 1313, Title: Set_SEG_INUSE_EXECID, Position: [7230, -3550]

    ORPHAN GETNODES (Variables retrieved but never set):

    --------------------------------------------------

    None found.

I hope this helps - I can give you the detection script if you need it.
In case it is working 100% for you, maybe try a fresh ComfyUI install.

    Daxamur
    Author
    Aug 27, 2025

@iPiKo They are connected for me at least - granted, it may look like that to Comfy because of the looping; the nodes that get executed inside of a loop can sometimes appear to have no inputs / outputs, since Comfy takes into account whether it thinks they get executed! Can't necessarily speak to the script on that one though

    iPiKoAug 27, 2025

@Daxamur The script is working, because a manual search for, e.g.,
Get_FINAL_BASE_VIDEO shows nothing - no such node exists in the workflow.

    iPiKoAug 27, 2025

    @Daxamur https://imgur.com/a/etPDuiw
    also this is a list of the nodes activated in my comfyui:

    Import times for custom nodes:

    0.0 seconds: C:\local-ai\comfyui\custom_nodes\websocket_image_save.py

    0.0 seconds: C:\local-ai\comfyui\custom_nodes\comfyui-unsafe-torch

    0.0 seconds: C:\local-ai\comfyui\custom_nodes\ComfyLiterals

    0.0 seconds: C:\local-ai\comfyui\custom_nodes\cg-image-filter

    0.0 seconds: C:\local-ai\comfyui\custom_nodes\comfyui-custom-scripts

    0.0 seconds: C:\local-ai\comfyui\custom_nodes\comfyui-kjnodes

    0.0 seconds: C:\local-ai\comfyui\custom_nodes\comfyui_essentials

    0.1 seconds: C:\local-ai\comfyui\custom_nodes\ComfyUI-Impact-Pack

    0.1 seconds: C:\local-ai\comfyui\custom_nodes\RES4LYF

    0.2 seconds: C:\local-ai\comfyui\custom_nodes\rgthree-comfy

    0.4 seconds: C:\local-ai\comfyui\custom_nodes\ComfyUI-Manager

    0.7 seconds: C:\local-ai\comfyui\custom_nodes\DaxNodes

    1.9 seconds: C:\local-ai\comfyui\custom_nodes\ComfyUI-Easy-Use

    Daxamur
    Author
    Aug 27, 2025

@iPiKo https://imgur.com/a/nMew86B Here are some screenshots of a couple of the reported nodes being connected - I think there may be some sort of flaw in its logic

    Daxamur
    Author
    Aug 27, 2025· 1 reaction

    @iPiKo Ahh I misunderstood! I see what you mean now - let me take a look, could just be version control and accidentally leaving stuff in

    GuinquisitionAug 24, 2025· 1 reaction

    Where do I get the TextConcatenate node? Manager isn't auto grabbing it, so I tried grabbing Nikosis and EasyAF but I guess they don't match up. And if they do, then I don't know how to hook them up to the right spot.

    Daxamur
    Author
    Aug 24, 2025· 1 reaction

    The TextConcatenate node I use is from the RES4LYF custom nodes pack!

    cleonthethirdAug 25, 2025· 1 reaction

    I cloned RES4LYF and am still getting a missing node for TextConcatenate.. any suggestions? Deleted and re-cloned to be sure

    Daxamur
    Author
    Aug 25, 2025· 1 reaction

    @cleonthethird Hmmm, in that case it sounds to me like there may be a conflicting node - could you provide your custom node list?

    cleonthethirdAug 25, 2025

    @Daxamur cg-image-filter, ComfyLiterals, comfyui_essentials, ComfyUI-Custom-Scripts, comfyui-ease-use, ComfyUI-Florence2, comfyui-frame-interpolation, comfyui-impact-pack, comfyui-int-and-float, comfyui-kjnodes, ComfyUI-Manager, comfyui-videohelpersuite, DaxNodes, RES4LYF, rgthree-comfy

    meowmeow12345Aug 25, 2025· 1 reaction

Hmm... very weird - no matter what, it seems to want to process 6 segments, so 3 "+"s ends up being 30s. I'll dig around; I did have the KJ nodes error, not sure if that's fixed completely or not. But I did try multiple variations and it always read 6 segments processing/found, I think.

    Daxamur
    Author
    Aug 25, 2025

    Hmmm, is this on the latest version of the flows (v1.3.1)? + Do you have the latest version of DaxNodes from ComfyUI manager?

    meowmeow12345Aug 25, 2025· 2 reactions

    Yea I upgraded everything that was available to upgrade and using the latest 1.3.1.

    I even fixed the KJ nodes error which I thought must be the culprit. The other weird thing is even if I turn off upscale/interpolation, it still runs those.

    Perhaps there's some secret node or something that's not showing up as a requirement? I upgraded everything I could think of in manager, and terminal, updated all nodes. I guess the next step is to try disabling other nodes and see if that helps or maybe try the 1.3.0 version🤔

    meowmeow12345Aug 26, 2025· 1 reaction

    @Daxamur It's so weird...it's pulling from videos I made yesterday:
    Saving segment: Exec=422856580461754, Loop=2
    Segment saved: 422856580461754_00002.mp4 (1.49MB)
    Combining segments for execution 422856580461754
    Found 6 segments to combine
    WARNING: Resolution mismatch! Expected 784x1168
    WARNING: Resolution mismatch! Expected 784x1168
    WARNING: Resolution mismatch! Expected 784x1168

    It should be 3 segments, but grabs vids from yesterday and merges them with the current one. I even went into the I2V output folder and cleaned everything before starting just in case it was like...a naming issue and grabbing stuff from the base directory.

    Anyway, look forward to a 1.3.2 or 1.4 : )
    It's still awesome tho. If you need anything from like some logs or something let me know.

    Daxamur
    Author
    Aug 26, 2025

    @meowmeow12345 Just to triple check, are you changing the video execution ID in-between generations? The actual segments themselves are saved under output/.tmp/<execution_id> for combining, and if the execution ID already exists in that folder, it will combine all clips present. I think this is probably what is happening as it will throw this error if the clips in the directory don't have matching resolutions!

    meowmeow12345Aug 27, 2025

    @Daxamur I did not change them o.o;;

    So it seems to work now. I was gonna joke that it was too much reading but...I read the whole thing😅

Maybe since it said changing it was "optional", I didn't pay much attention to that. Is the idea that you don't change it and keep generating 10s clips so they stitch with less degradation?

It's a lot to keep track of, between the video ID and stuff. I wish there was a way to count the total segments tho - that's the hardest part, keeping track of how many +'s you have XD

    Well I'll keep playing with this now ty

    videoggapp835Aug 25, 2025· 1 reaction

I'm getting a weird 20 mins per generation (5-sec video) on a 4090... What is an OK gen time for others?

    meowmeow12345Aug 25, 2025· 1 reaction

    Also on 4090, 1024px atm just happened to be testing:
    high: 26.59s/it low: 14.45s/it
    So about 3 minutes per 5s...granted I have other problems XD

    videoggapp835Aug 25, 2025· 1 reaction

@meowmeow12345 did u change any settings to get these stats?

    mrazvanalexAug 25, 2025

    3090ti I'm hitting the hour mark... the generation is not done yet :( still waiting for upscale+interpolate

    Daxamur
    Author
    Aug 25, 2025

    @videoggapp835 How much RAM do you have? It can take longer if the model has to be fully unloaded as opposed to offloaded to RAM - additionally, make sure you've disabled the "Toggle Intelligent Face Detection & Drift Correction", or lower the "Frames to Check for Face (from Last Frame)" to ensure you're not getting into a correction loop.

    Daxamur
    Author
    Aug 25, 2025

    @mrazvanalex What size / resolution of video are you trying to generate - and do you have "Toggle Intelligent Face Detection & Drift Correction" on?

    meowmeow12345Aug 25, 2025

    @videoggapp835 I lowered the resolution to 1024 from the original but I can check the full speed later. Probably most important thing is make sure sage attention is installed correctly. It was a huge pain in the ass, cause all the apps n shit need to match like the 🪐 align once in 1000 years😂

    I ended up having to compile everything from source with the help of chatgpt, nothing I downloaded like "prebuilt" worked for me


    mrazvanalexAug 25, 2025

@Daxamur I need to check tomorrow. I'm testing some other models now, but I'll get back to your WF when I switch back to video gen. I don't think it was too high of a resolution, but I'll take a better look at all the configs next time.

    The wf is getting quite big now on v1.3 so it's getting harder to follow some stuff in my opinion, but I'm confident they're great and I just need to get the hang of it.

    Daxamur
    Author
    Aug 25, 2025· 1 reaction

    @meowmeow12345 I'm glad you ended up getting it to work! Sageattention is definitely a problem for a lot of people, so you're not alone haha

    Daxamur
    Author
    Aug 25, 2025· 1 reaction

    @mrazvanalex No problem! + Yeah, it's not just you - achieving truly infinite generation in ComfyUI was basically near impossible without custom node code, and even then I just don't know that it was built to handle distribution of flows this large. I am working on something else though that should solve all of this! 

    mrazvanalexAug 25, 2025

    @Daxamur Maybe I'm just gonna wait then, haha!

    videoggapp835Aug 26, 2025

For two days I was reinstalling ComfyUI and blew away everything including torch and CUDA - now it works... I hate Comfy

    next1Aug 25, 2025· 1 reaction

    Some Nodes Are Missing

    When loading the graph, the following node types were not found

    TextConcatenate

    ImpactSwitch

Install All Missing Nodes

    Open Manager

    Daxamur
    Author
    Aug 25, 2025

    Hi, is this after attempting to install any missing custom nodes via the ComfyUI manager?

    next1Aug 26, 2025

    @Daxamur Yes, even after I installed impact and res4style, this missing issue still occurs. Could it be that there’s a conflict among my nodes? All the plugins are installed in their latest versions.

    next1Aug 26, 2025

    @Daxamur I mean that version 1.2 runs fine, but with version 1.3.1 the node missing issue appears. Also, I have one more request: could you please make a first-and-last-frame version? This really is the best video upscaling workflow I’ve ever used.

    next1Aug 26, 2025

    @Daxamur I have one more small question: is it possible to remove i2v or t2v and upload my own video (within 10 seconds) for upscaling? I just happened to think of this.

    commuting183Sep 4, 2025

    @next1 did you find a solution? I have the same issue...

    cleonthethirdAug 26, 2025· 2 reactions

Your 1.2 workflows are the best I've tried yet! They've become my go-to I2V workflow. The simplicity, organization, and reliability are 10/10. Huge thank you for posting these. I'm looking forward to following your newer workflows when they become more stable!

    Daxamur
    Author
    Aug 26, 2025· 1 reaction

    I appreciate the kind words, and I'm really glad you're enjoying the flow! I'm working on something that I hope everyone will like even more next

    aneebartist683Aug 26, 2025· 5 reactions

People are showing off results, especially @joyy114, without giving any detail on how they achieved them. I feel sad that people here aren't helping others, just throwing things in our faces. For beginners that's so sad.

    joyy114Aug 29, 2025· 4 reactions

    The video I posted today was made with the workflow provided by Daxamur and my prompt, and I even included the prompt. The only adjustment I made to the workflow was changing the seed to random. For the 1.3 version, I didn't even load a Lora. I've done all this, and you're still calling me a show-off?

    It's been less than two months since I started using ComfyUI to make WAN videos. I get my learning materials from Civitai or from ChatGPT and Grok. When I run into a problem, I go and find a solution, and I share that information with everyone right away. I'm not like you—lazy and just complaining about others.

    I suggest that instead of wasting your free time complaining here, you spend more time learning. Also, if your hardware isn't good enough, I recommend you save up and buy a better machine. It will save you from a lot of potential errors that can happen during execution. After all, in this field, good hardware is key to success. If your hardware is really insufficient, you might spend a long time on a computation only for it to fail at the end, wasting all your time and getting nothing in return.

    As the saying goes, "A craftsman who wishes to do his work well must first sharpen his tools." I hope you focus on improving your own hardware, rather than just running your mouth.

    Light6969Aug 30, 2025

    @joyy114 Hey man good comment, I like your videos.

    Light6969Aug 30, 2025· 1 reaction

@aneebartist683 you realise you can drag and drop any of Joy's videos into comfyui and it shows the entire workflow right? lol

    aneebartist683Aug 31, 2025

@joyy114 Nah, you didn't mention the prompt and settings like CFG, steps, etc. Actually I really loved your results, and I am trying my best to learn everything - that's why I am pushing myself to get better results. And somehow I achieved something workable, even on my low-spec system. And showing off means displaying results. Don't mind, chill.

    aneebartist683Aug 31, 2025· 1 reaction

@Light2020462 Ok I will try, but I can't use this workflow - it requires a high-spec PC.

    aneebartist683Aug 26, 2025

People who are struggling to get the FFmpeg build and scratching their heads because there is no bin folder: go to the gyan.dev website and download the "ffmpeg release full" build.

    chazz1meAug 26, 2025

    Thank you for this amazing workflow! It produces some really high quality renders - even without the most powerful rig. How hard would it be to add FFLF2V capabilities to this?

    aneebartist683Aug 26, 2025

What are your computer specs?

    CRAZYAI4UAug 27, 2025

    I would also love to see FFLF2V <3

    Daxamur
    Author
    Aug 27, 2025

FLF2V and S2V are both on my list here for sure! I am hoping to tie them into the application I am creating, but if it feels like that will take too long I will do workflows first!

    chazz1meAug 28, 2025

@aneebartist683 Not great... actually using a VM (Shadow PC w/ Nvidia RTX A4500 w/ 20GB VRAM), and the PC only has 28GB RAM.

    aneebartist683Aug 26, 2025

Which is best for an RTX 3060 Ti, I2V or T2V? Has anyone tested it? If yes, please guide me - I am struggling.
I have 32GB RAM, an i7 1200k, and 8GB VRAM.

    aneebartist683Aug 26, 2025

    Patching comfy attention to use sageattn

    50%|█████████████████████████████████████████ | 1/2 [16:58<16:58, 1018.49s/it]


What's the point of Sage and Triton if you're getting image-to-video with this much struggle? They're built for 4090+ cards and claimed the highest lightning-flash speed, lol - this is stupid. I installed Triton and Sage with too much struggle, only to get this. I followed a simple workflow from YouTube and was getting a 720x928 I2V 5-second video in 5 minutes, and these Triton and Sage claimed they are the fastest. This is so weird and BS, lol. And about this workflow: I can bet it was never tested on low-end cards like the 30xx series.

    aneebartist683Aug 26, 2025

    Lol :D
    KSamplerAdvanced

    Allocation on device This error means you ran out of memory on your GPU. TIPS: If the workflow worked before you might have accidentally set the batch_size to a large number.

    aneebartist683Aug 26, 2025· 1 reaction

Attempting to release mmap (401)
Patching comfy attention to use sageattn
100%|██████████| 2/2 [32:32<00:00, 976.12s/it]
Restoring initial comfy attention
Requested to load WAN21
0 models unloaded.
loaded partially 128.0 125.5274658203125 0
Attempting to release mmap (401)
Patching comfy attention to use sageattn
0%| | 0/4 [00:00<?, ?it/s]
Restoring initial comfy attention
!!! Exception during processing !!! Allocation on device
Traceback (most recent call last):
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\execution.py", line 496, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\execution.py", line 315, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\execution.py", line 289, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\execution.py", line 277, in process_inputs
    result = f(**inputs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\nodes.py", line 1555, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\nodes.py", line 1488, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\sample.py", line 45, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample
    return orig_fn(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 1161, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 1051, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 1036, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 1004, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 987, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 759, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 868, in sample_unipc
    x = uni_pc.sample(noise, timesteps=timesteps, skip_type="time_uniform", method="multistep", order=order, lower_order_final=True, callback=callback, disable_pbar=disable)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 715, in sample
    model_prev_list = [self.model_fn(x, vec_t)]
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 410, in model_fn
    return self.data_prediction_fn(x, t)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 394, in data_prediction_fn
    noise = self.noise_prediction_fn(x, t)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 388, in noise_prediction_fn
    return self.model(x, t)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 329, in model_fn
    return noise_pred_fn(x, t_continuous)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 297, in noise_pred_fn
    output = model(x, t_input, **model_kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 859, in <lambda>
    lambda input, sigma, **kwargs: predict_eps_sigma(model, input, sigma, **kwargs),
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\extra_samplers\uni_pc.py", line 843, in predict_eps_sigma
    return (input - model(input, sigma_in, **kwargs)) / sigma
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 408, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 960, in __call__
    return self.outer_predict_noise(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 967, in outer_predict_noise
    ).execute(x, timestep, model_options, seed)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 970, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 388, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 206, in calc_cond_batch
    return _calc_cond_batch_outer(model, conds, x_in, timestep, model_options)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 214, in _calc_cond_batch_outer
    return executor.execute(model, conds, x_in, timestep, model_options)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 333, in _calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\model_base.py", line 155, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\model_base.py", line 194, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 577, in forward
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 607, in _forward
    return self.forward_orig(x, timestep, context, clip_fea=clip_fea, freqs=freqs, transformer_options=transformer_options, **kwargs)[:, :, :t, :h, :w]
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 564, in forward_orig
    x = block(x, e=e0, freqs=freqs, context=context, context_img_len=context_img_len)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 222, in forward
    y = self.self_attn(
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 71, in forward
    q, k = apply_rope(q, k, freqs)
  File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\flux\math.py", line 42, in apply_rope
    xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]
torch.OutOfMemoryError: Allocation on device

Got an OOM, unloading all loaded models.

Prompt executed in 00:33:19

got prompt

(The next prompt run hit the identical Allocation on device traceback.)

    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 960, in call

    return self.outer_predict_noise(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 967, in outer_predict_noise

    ).execute(x, timestep, model_options, seed)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute

    return self.original(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 970, in predict_noise

    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 388, in sampling_function

    out = calc_cond_batch(model, conds, x, timestep, model_options)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 206, in calc_cond_batch

    return calccond_batch_outer(model, conds, x_in, timestep, model_options)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 214, in calccond_batch_outer

    return executor.execute(model, conds, x_in, timestep, model_options)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute

    return self.original(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\samplers.py", line 333, in calccond_batch

    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\model_base.py", line 155, in apply_model

    return comfy.patcher_extension.WrapperExecutor.new_class_executor(

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute

    return self.original(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\model_base.py", line 194, in applymodel

    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in wrappedcall_impl

    return self._call_impl(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in callimpl

    return forward_call(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 577, in forward

    return comfy.patcher_extension.WrapperExecutor.new_class_executor(

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\patcher_extension.py", line 112, in execute

    return self.original(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 607, in _forward

    return self.forward_orig(x, timestep, context, clip_fea=clip_fea, freqs=freqs, transformer_options=transformer_options, **kwargs)[:, :, :t, :h, :w]

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 564, in forward_orig

    x = block(x, e=e0, freqs=freqs, context=context, context_img_len=context_img_len)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in wrappedcall_impl

    return self._call_impl(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in callimpl

    return forward_call(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 222, in forward

    y = self.self_attn(

    ^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in wrappedcall_impl

    return self._call_impl(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in callimpl

    return forward_call(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\wan\model.py", line 71, in forward

    q, k = apply_rope(q, k, freqs)

    ^^^^^^^^^^^^^^^^^^^^^^^

    File "C:\ComfyUI_Triton_3.11.9\ComfyUI-Nunchaku\ComfyUI-Easy-Install\ComfyUI\comfy\ldm\flux\math.py", line 42, in apply_rope

    xq_out = freqs_cis[..., 0] xq_[..., 0] + freqs_cis[..., 1] xq_[..., 1]

    ~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~

    torch.OutOfMemoryError: Allocation on device

CharlieBrown0115 · Aug 27, 2025 · 4 reactions

    @aneebartist683 bro chill!!! wtf?

CRAZYAI4U · Aug 27, 2025 · 2 reactions

    @aneebartist683 If you struggle with Triton, just use Stability Matrix... or try the --normalvram or --lowvram settings and use the GGUF version of the workflow with a model your card can handle. I would try Q4 and go up from there if it works: https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF/tree/main
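
    [Editor's note: for anyone hitting the same OutOfMemoryError, a minimal sketch of the suggested flags at launch - these are standard ComfyUI launch options, though your launch script and path may differ:

        python main.py --normalvram
        # or, for cards with very little VRAM:
        python main.py --lowvram

    The GGUF route just means placing a quantized model file from the link above (e.g. a Q4 variant) in ComfyUI's models/unet folder and selecting it in the workflow's GGUF loader node.]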

dirtysem · Aug 27, 2025 · CivitAI

    Thanks again for the workflows. Everything is working fine - the KJ error is present, but it does not affect the workflow. I would like a truly separate prompt for each segment: when you start from text, once the workflow switches to I2V, the T2V prompt loses its relevance and greatly interferes with I2V. In other words, I want to suggest a separate prompt for T2V that is not carried further into the workflow. Sorry, English is not my native language; I hope I explained it clearly.

    P.S. How can I set an exact value for total_length? I need 15, but it jumps between 14 and 16. And how do I make a segment longer than 5 seconds?

Daxamur (Author) · Aug 27, 2025

    For prompting, the "Persistent Positive Prompt" is for static items like characters, locations, etc. - things you do not want to change over the video - while the segment prompts are for what happens in each segment. For example, Persistent: "A man in a pool." Segments: "He dives underwater. + He jumps out of the water like a dolphin. + He puts on a swim cap." will result in:

    Prompt 1 (first segment - T2V): A man in a pool. He dives underwater.

    Prompt 2: A man in a pool. He jumps out of the water like a dolphin.

    Prompt 3: A man in a pool. He puts on a swim cap.
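
    [Editor's note: a minimal Python sketch of how the two prompt fields combine, per the description above - illustrative only, not the node's actual code:

        persistent = "A man in a pool."
        segments = "He dives underwater. + He jumps out of the water like a dolphin. + He puts on a swim cap."
        # one combined prompt per segment; the first drives the T2V start,
        # the rest drive the I2V continuations
        prompts = [f"{persistent} {s.strip()}" for s in segments.split("+")]
    ]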

For the total length, the widget is just a bit funky because of how hacky I have to be with the dynamic increment updates - if you slide it down to 1 and then back up to 5, it should work fine!

dirtysem · Aug 27, 2025 · 1 reaction

    @Daxamur I understand what you're saying about the prompts. But in T2V you have to describe everything needed for the first segment - camera movement, character movement, etc. When the workflow passes into I2V, it starts repeating all of the camera and character movement that is no longer needed in I2V. For example, I create a girl in a thong who poses sexually and smiles at the camera, using a LoRA that is only suitable for T2V. And when the next phase runs, where the girl, for example, has to suck a penis, I2V starts picking up the unneeded camera and character movements from the T2V prompt. Sorry for the detailed description, but I hope you understand what I'm getting at.

Daxamur (Author) · Aug 27, 2025

    @dirtysem I think that is the misunderstanding - the first portion of the segment prompt IS used for the initial T2V segment. If your desired prompt for the first video is "a girl in a thong who poses sexually and smiles at the camera", then your prompt should be:

    Persistent: a girl in a thong
    Segment: who poses sexually and smiles at the camera <- (the second-half prompt for your first video) + she starts praying (lmao) <- (the second-half prompt for your second video), etc.

dirtysem · Aug 27, 2025

    @Daxamur But all segments use I2V? I need to create a full scene from T2V using the LoRA - that's what I'm talking about.

Daxamur (Author) · Aug 27, 2025

    @dirtysem The first segment uses T2V; all continuing segments use I2V, since T2V has no way to continue a video - does that answer your question? As is, there is currently no way to generate a segment with T2V and continue it with T2V; the longest a T2V video can possibly be is 5 seconds (WAN itself breaks down after 5).

    If you want to make multiple separate (not continued) T2V videos, just set it to one video at 5 seconds and run it once for each T2V video you want!
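
    [Editor's note: tying this back to the earlier total_length question, a quick sketch of the segment arithmetic, assuming the default 5-second segments described above:

        import math

        total_seconds = 15
        segment_seconds = 5
        # one T2V starting segment plus I2V continuations
        n_segments = math.ceil(total_seconds / segment_seconds)  # -> 3
    ]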

dirtysem · Aug 27, 2025

    @Daxamur Understood. Is there any way to disable the Persistent Positive Prompt? I need it to apply the first time and then turn off. Thank you.

Daxamur (Author) · Aug 27, 2025

    @dirtysem You can just leave it empty if you don't want to use it - then the first (and every other) segment will just use its own portion of the segment prompt!

chazz1me · Aug 28, 2025 · CivitAI

    Hi! I really love this workflow and how optimized it is! It was running pretty well, but all of a sudden it's glitching and running the second segment multiple times before performing the upscale and interpolation. I have the segment length set to 5 and the overall length to 10, but I'm ending up with an 18-second video. Any ideas on how to troubleshoot?

Daxamur (Author) · Aug 28, 2025

    Hi - this sounds like it's coming from the intelligent face-frame detection and drift correction. If you disable that toggle, it will output strictly the length you specify!
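
    [Editor's note: a rough illustration of why drift correction can overshoot - it adds whole segments until the requested frame count is met or exceeded. The 81-frame / 16 fps numbers below are assumptions for the sketch, not the workflow's actual constants:

        segment_frames = 81        # assumed frames per ~5 s WAN segment at 16 fps
        target_frames = 10 * 16    # 10 requested seconds at 16 fps
        generated = 0
        while generated < target_frames:
            generated += segment_frames  # only whole segments are added, never partial ones
        # 'generated' always ends at or above 'target_frames', so with drift
        # correction enabled the final video can run longer than requested
    ]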

meowmeow12345 · Aug 29, 2025 · 2 reactions · CivitAI

    Maybe consider adding the MoE sampler?

    I'm no PhD, so I don't know how much of a difference it makes, but it calculates the optimal high/low step handoff. Essentially, I think it might save time and increase quality, since splitting high/low at 50/50 is supposedly not the optimal method. It should also simplify the workflow as a bonus.

    https://github.com/stduhpf/ComfyUI-WanMoeKSampler/tree/master

    https://www.reddit.com/r/StableDiffusion/comments/1mkv9c6/wan22_schedulers_steps_shift_and_noise/?show=original

    In the meantime, I'm going to try to stick it in myself and likely break everything XD
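
    [Editor's note: the core idea behind that sampler, sketched in a few lines of Python - the boundary value here is illustrative; see the linked repo for the real implementation:

        def moe_split_step(sigmas, boundary=0.875):
            """Hand off from Wan 2.2's high-noise expert to its low-noise expert
            at the first step whose sigma drops below the boundary, rather than
            at a fixed 50/50 step split."""
            for i, sigma in enumerate(sigmas):
                if sigma < boundary:
                    return i  # steps [0, i) -> high-noise model, [i, ...) -> low-noise model
            return len(sigmas)
    ]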

meowmeow12345 · Aug 30, 2025

    Well, I tried it, and for some reason the processing speed and quality dropped a bunch. I think it's better to use the concept but not the sampler itself - it might be better to figure out how to turn the sigmas into the correct step split and feed that into your default setup.

kreegunlord015 · Sep 3, 2025

    @meowmeow12345 I'm not totally sure it's working as intended, but I made a bunch of examples and checked their quality side by side; the results look pretty good.
    https://ibb.co/fzJXWxwb

CRAZYAI4U · Sep 5, 2025

    @kreegunlord015 Can you share the examples and what the difference in speed/quality was?

AlG80 · Aug 30, 2025 · CivitAI

    Hi. Which file should go in Load lightx2v (High) and which in Load lightx2v (Low)? I did not find separate High and Low files; there is only lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16. And should these files differ between I2V and T2V?

Daxamur (Author) · Aug 30, 2025

    The same LoRA is actually used for both High and Low in these flows - though there is a different LoRA for I2V and T2V in the latest versions.

AlG80 · Aug 31, 2025

    @Daxamur OK

siiiiiixth · Aug 30, 2025 · 3 reactions · CivitAI

    Can you expand on how to do the Endless Looping, please? I'm a beginner ComfyUI user.

meowmeow12345 · Aug 31, 2025

    I'm curious too, also with the new First Frame / Last Frame. When using 1.3, what's the best way to go about it? If you increment, are you supposed to extract the last frame and use it as the first? I feel like, at least for img2vid, 10s is the absolute max before the identity of the image starts to seriously degrade...

Psi_Clone · Aug 30, 2025 · CivitAI

    It would be really awesome if we got a First Frame / Last Frame workflow!

Daxamur (Author) · Aug 30, 2025

    Up now actually, haha!

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    776
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/23/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    daxamursWAN22WorkflowsV13TrueEndless_i2vV131GGUF.zip

    daxamursWAN22WorkflowsV13Endless_BETAI2VV13GGUF.zip

    daxamursWAN22WorkflowsV121FLF2VT2V_ExpI2VV131GGUF.zip