
    Daxamur's Wan 2.2 Workflows

    If you'd like to support me, check out my Patreon!
    DM to inquire about custom projects.


    -NEWS-

    Responses are delayed as I'm heads down working on getting my next release ready for you all - once released, responses will go back to normal!

    v1.2.1 Out Now! - Update to DaxNodes via ComfyUI Manager required

    • FLF2V added with GGUF support - no new models required

    • Fixed ability to independently disable / enable upscaling and interpolation

    • Dedicated resolution picker nodes; auto-resizing functionality from v1.3.1 added to I2V and FLF2V


    DaxNodes now available via ComfyUI Manager, no more git clone required!


    Current Tracked Bugs:

    • KJNodes Get / Set nodes reporting a missing-node error for some users; if this happens, ensure you download the latest version of DaxNodes from ComfyUI Manager and re-import the workflow! - In progress


    If you see a "FileNotFoundError ([WinError 2] The system cannot find the file specified.)" from VideoSave or other video-related nodes, FFmpeg is missing or not in your system PATH.

    • Setup (Full Version Required):

    • Download the full FFmpeg build

    • Extract it to a stable location (e.g., C:\ffmpeg).

    • Add C:\ffmpeg\bin to your system PATH:

    • Open "Edit the system environment variables" -> Environment Variables....

    • Under System variables, select Path -> Edit....

    • Click New and add C:\ffmpeg\bin.

    • Save and exit.

    Restart ComfyUI (and your terminal/command prompt).

    After this, everything should work!
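
    If you want to confirm FFmpeg is actually visible before relaunching, here's a minimal Python sanity check you can run from the same terminal you start ComfyUI from (just a sketch, not part of the workflow):

    ```python
    # Check that FFmpeg is on PATH and runs (sanity check only).
    import shutil
    import subprocess

    ffmpeg = shutil.which("ffmpeg")
    if ffmpeg is None:
        print("ffmpeg not found on PATH - re-check the steps above and restart your terminal.")
    else:
        print(f"ffmpeg found at: {ffmpeg}")
        # The first line of `ffmpeg -version` confirms the binary actually executes.
        result = subprocess.run([ffmpeg, "-version"], capture_output=True, text=True)
        print(result.stdout.splitlines()[0])
    ```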


    v1.3.1 Features

    Segment-Based Prompting

    • Persistent Positive Prompt: Keeps consistent details across the entire video (e.g. “A woman with green eyes and brown hair in her warmly lit bedroom”).

    • Segment Positive Prompts: Separated with +, one per segment (e.g. “She is writing in a journal + She closes the journal and stands up + She walks away”) - see the sketch after this list.

    • Gives you far more control in long-form videos and helps reduce WAN’s tendency to render weird camera movements or jitters on I2V start.
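
    To make the segment syntax concrete, here's a minimal Python sketch of the idea (the function name and combination format are hypothetical, not the actual DaxNodes implementation):

    ```python
    # Hypothetical illustration: the persistent prompt is combined with each
    # "+"-separated segment prompt, giving one conditioning string per segment.
    persistent = "A woman with green eyes and brown hair in her warmly lit bedroom"
    segments = ("She is writing in a journal + She closes the journal and stands up"
                " + She walks away")

    def build_segment_prompts(persistent: str, segments: str) -> list[str]:
        parts = [p.strip() for p in segments.split("+") if p.strip()]
        return [f"{persistent}. {part}" for part in parts]

    for i, prompt in enumerate(build_segment_prompts(persistent, segments)):
        print(f"segment {i}: {prompt}")
    ```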

    Endless-Style Looping

    • Segments can chain "infinitely" (I capped the node at 9999), creating effectively endless loops.

    • The Video Execution ID manages overwrites and stitching - just increment the ID as you generate new sequences.
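
    As a rough illustration of how an execution ID can keep runs separate (the naming scheme below is made up for illustration - it is not the workflow's actual on-disk layout):

    ```python
    # Each run writes its segments under its own execution ID, so incrementing
    # the ID starts a fresh sequence instead of overwriting the previous one.
    from pathlib import Path

    def segment_path(output_root: str, execution_id: int, segment_index: int) -> Path:
        return Path(output_root) / ".tmp" / f"exec_{execution_id:04d}" / f"segment_{segment_index:03d}.mp4"

    print(segment_path("output", 42, 3))  # output/.tmp/exec_0042/segment_003.mp4
    ```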

    Streaming RIFE VFI + Upscaling

    • Tweaked RIFE VFI and upscaling now stream frames instead of holding entire sequences in VRAM/RAM.

    • Allows much longer videos, smoother interpolation, and sharper upscales without OOM errors.
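
    Conceptually, streaming here means pushing frames through a generator instead of materializing the whole sequence first. A toy Python sketch of the pattern (stand-in functions, not the real RIFE / upscaler code):

    ```python
    # Frames flow through one at a time; only the previous frame is retained,
    # so peak memory is ~one frame instead of the full sequence.
    from typing import Iterable, Iterator

    def upscale(frame):          # stand-in for the real upscaler
        return frame

    def interpolate(prev, cur):  # stand-in for RIFE's in-between frame
        return (prev + cur) / 2

    def stream_process(frames: Iterable[float]) -> Iterator[float]:
        prev = None
        for frame in frames:
            frame = upscale(frame)
            if prev is not None:
                yield interpolate(prev, frame)  # emit the in-between frame first
            yield frame
            prev = frame

    # Frames can be written to disk as they are yielded, not collected in a list.
    print(list(stream_process([0.0, 1.0, 2.0])))  # [0.0, 0.5, 1.0, 1.5, 2.0]
    ```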

    Face Detection & Drift Correction

    • Intelligent Mediapipe face frame detection locks focus on characters.

    • Drift correction ensures the final video runs at least as long as requested - but instead of cutting mid-generation, it will add full extra segments until the target frame count is met or exceeded (see the sketch after this list).

    • This way, no generated frames are wasted, and you always end up with smooth, complete segments.

    • Fully toggleable, with adjustable frame look-back settings.
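
    The arithmetic behind the drift-correction behavior is simple; a hedged sketch (parameter names are made up):

    ```python
    # Whole segments are added until the requested frame count is met or
    # exceeded - nothing is cut mid-segment.
    import math

    def segments_needed(target_frames: int, frames_per_segment: int) -> int:
        return math.ceil(target_frames / frames_per_segment)

    target, per_segment = 200, 81
    n = segments_needed(target, per_segment)
    print(n, "segments ->", n * per_segment, "frames (>= target)")  # 3 segments -> 243 frames
    ```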

    Resolution Handling

    • T2V: Standard WAN resolution presets with optional overrides.

    • I2V: Input image scales to WAN-native resolutions, preserving aspect ratio. “Native” passthrough supported.
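
    A rough sketch of what aspect-preserving resolution snapping looks like (the preset list is illustrative, not the workflow's full set):

    ```python
    # Pick the WAN-native preset whose aspect ratio is closest to the input image's.
    WAN_PRESETS = [(1280, 720), (720, 1280), (960, 960), (1088, 832), (832, 1088)]

    def pick_resolution(width: int, height: int) -> tuple[int, int]:
        aspect = width / height
        return min(WAN_PRESETS, key=lambda wh: abs(wh[0] / wh[1] - aspect))

    print(pick_resolution(3000, 2000))  # 3:2 landscape input -> (1088, 832)
    ```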

    QoL & Management

    • Toggle upscaling/interpolation independently.

    • Temp file output organized by execution ID - clear /output/.tmp/ periodically to save space.
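
    If you want to script the periodic cleanup, something like this works (adjust the path to your install; it's written as a dry run, so nothing is deleted until you uncomment the last line):

    ```python
    # List (and optionally delete) everything under ComfyUI/output/.tmp/.
    import shutil
    from pathlib import Path

    tmp_dir = Path("ComfyUI/output/.tmp")  # adjust to your install location
    if tmp_dir.is_dir():
        for entry in tmp_dir.iterdir():
            print("would remove:", entry)
            # shutil.rmtree(entry) if entry.is_dir() else entry.unlink()
    ```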

    Looking Ahead

    This workflow is still experimental; future versions will expand on segment control, smarter handling of motion/camera behavior, more adaptive face tracking, and even integration of audio/video for cinematic sequences. Big things are coming!


    Notes

    I've done my best to place most nodes that you'd want to configure in the lower portion of the flow, (roughly) sequentially, while most of the operational / backend stuff sits at the top. Nodes have been labeled according to their function as clearly as possible.

    Beyond that:

    • NAG Attention is in use, so it is recommended to leave the CFG set to 1.

    • The sampler and scheduler are set to uni_pc // simple by default, as I find this is the best balance of speed and quality. (v1.1 only) If you don't mind waiting (a lot, in my experience) longer for some slightly better results, then I'd recommend res_3s // bong_tangent from the RES4LYF custom node.

    • I have set the default number of steps to 8 (4 steps per sampler) as opposed to 4, as this is where I see the most significant quality / time tradeoff - but this is really up to your preference.

    • This flow will save finished videos to ComfyUI/output/WAN/<T2V|T2I|I2V>/ by default.

    I2V

    • The custom node flow2-wan-video conflicts with the Wan image-to-video node and must be removed for the flow to work. I have found that this node does not get completely removed from the custom_nodes folder when uninstalled via the ComfyUI Manager, so it must be deleted manually.

    GGUF

    • All models used with the GGUF versions of the flows are the same, with the exception of the base high- and low-noise models. You will need to determine which GGUF quant best fits your system, and then set the correct model in each respective Load WAN 2.2 GGUF node accordingly. As a rule of thumb, your GGUF model should ideally fit within your VRAM with a few GB to spare (a rough check is sketched after this list).

    • The examples for the GGUF flows were created using the Q6_K quant of WAN 2.2 I2V and T2V.

    • The WAN 2.2 GGUF quants tested with this flow come from the following locations on Hugging Face:
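
    For the VRAM rule of thumb above, here's a rough check (assumes PyTorch with CUDA; the headroom value is a guess, not a hard requirement):

    ```python
    # Compare the GGUF file size against currently free VRAM, leaving headroom.
    import os
    import torch

    def fits_in_vram(gguf_path: str, headroom_gb: float = 3.0) -> bool:
        free_bytes, _total = torch.cuda.mem_get_info()
        return os.path.getsize(gguf_path) + headroom_gb * 1024**3 <= free_bytes

    # Example (hypothetical path):
    # print(fits_in_vram("ComfyUI/models/unet/wan2.2_i2v_high_noise_Q6_K.gguf"))
    ```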

    MMAUDIO

    • To set up MMAUDIO, you must download the MMAUDIO models below, create an "mmaudio" folder in your models directory (ComfyUI/models/mmaudio), and place every mmaudio model downloaded into this folder (even apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors).
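
    The folder creation itself is trivial; from the directory containing your ComfyUI install (path per the note above):

    ```python
    # Create the expected MMAudio model folder if it doesn't exist yet.
    from pathlib import Path

    Path("ComfyUI/models/mmaudio").mkdir(parents=True, exist_ok=True)
    ```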

    Block Swap Flows

    • Being discontinued, as I have found that native ComfyUI memory swapping conserves more memory and slows down the process less in my testing. If you receive OOM errors with the base v1.2 flows, I'd recommend trying out the GGUF versions!

    Triton and SageAttention Issues

    • The most frequent issues I see users encounter are related to the installation of Triton and SageAttention - and while I'm happy to help out as much as I can, I am but one man and can't always get to everyone in a reasonable time. Luckily, @CRAZYAI4U has pointed me to Stability Matrix, which can auto-deploy ComfyUI and has a dedicated script for installing Triton and SageAttention.

    • You will first need to download Stability Matrix from their repository, and download ComfyUI via their hub. Once ComfyUI has been deployed via the hub, click the three horizontal dots at the top left of the ComfyUI instance's entry, select "Package Commands" and then "Install Triton and SageAttention". Once complete, you should be able to import the flow, install any missing dependencies via ComfyUI Manager, drop in your models and start generating!

    • Will spin up a dedicated article with screenshots on this soon.

    Models Used

    T2V (Text to Video)

    I2V (Image to Video)

    MMAUDIO

    Non-Native Custom_Nodes Used

    Description

    Triple sampling for enhanced quality, prompt adherence and motion.

    FAQ

    Comments (29)

    Stirya · Aug 14, 2025 · 2 reactions

    Really nice and clean workflow, the best one at the moment tbh.
    A few questions though:
    Is NAG attention actually working when I set CFG to 1? Because Lightx2v needs CFG 1, and with that the negative prompt will be ignored.
    Also, do you think it's doable to add a Wan 2.2 5B upscaler section to the workflow, like what we did with Wan 2.1 1.3B?

    Daxamur (Author) · Aug 14, 2025 · 1 reaction

    Thanks, I appreciate it!

    That's right, NAG attention is actually designed to provide negative guidance while keeping the CFG set to 1. + I'll have to look into it, but I'm sure it's possible!

    meowmeow12345 · Aug 14, 2025 · 2 reactions

    Yea, thanks again, this is awesome! Btw, is there an I2V looping workflow in the works perchance? o.o;

    I uploaded a vid made with your workflow too, but no idea when or if it will show. Seems really slow with "analyzing"...

    meowmeow12345 · Aug 14, 2025 · 1 reaction

    PS, using KJNodes' "color match" really helps with a flickering issue I was having. Idk, but maybe something to consider adding. I stuck it before the interpolation video save... but I'm not knowledgeable enough to know without experimenting if it makes sense to stick one before the first save as well.

    Daxamur (Author) · Aug 14, 2025 · 1 reaction

    There is indeed one in the works currently, it's next on my list after the triple sampler motion flow - I'll drop a response to this comment when they're up!

    + Yeah, I feel your pain on that, it's taken hours before with a few of mine haha

    meowmeow12345 · Aug 14, 2025 · 1 reaction

    Also, I stuck a VRAM Debug node between the high and low pass and put all the options on... maybe it speeds things up, or not... idk for sure, but it seems to give some more memory than it would otherwise... no time to test tbh, I gotta make waifus >.>;;;

    VRAMdebug: free memory before: 19,824,128,018

    VRAMdebug: free memory after: 23,915,003,848

    VRAMdebug: freed memory: 4,090,875,830

    Stirya · Aug 14, 2025 · 1 reaction

    meowmeow12345 Putting the color match node right after the VAE decode is a better idea tbh, as the upscaling and interpolating sections don't affect colors (at least in my tests).
    Plus, color matching a high-res interpolated 48 fps video is a no-no for the poor CPU.

    hydragyrum2 · Aug 15, 2025 · 3 reactions

    Would love to try the T2V and T2V GGUF workflows but I keep getting prompt execution failed errors: Prompt outputs failed validation:
    String to Float:
    - Required input is missing: String
    String to Float:
    - Required input is missing: String

    Cannot find where the error is occurring in the workflow. :(

    Daxamur (Author) · Aug 16, 2025

    This sounds like you may either be missing the ComfyLiterals or comfyui_essentials custom nodes, or potentially have another custom node that conflicts with one of them.

    hydragyrum2 · Aug 16, 2025

    Seems like that worked but now I'm getting the following error:

    "Loading aborted due to error reloading workflow data

    TypeError: (intermediate value).removeWidget is not a function"

    I got the workflow to work once and then this error started occurring. Sorry for the trouble!

    cgsthrasher726 · Aug 23, 2025

    @hydragyrum2 same!

    vellane · Aug 26, 2025

    I have ComfyLiterals and comfyui_essentials installed, and neither has any conflict with an installed node...

    Stirya · Aug 15, 2025 · 2 reactions

    The I2V 1.2 does have better prompt adherence and motion by a lot, really nice. But the quality is worse, no idea why; maybe my images aren't high quality enough, maybe they have too much detail and I expected far above what my PC is capable of.
    Hope the Lightx2v team finds a solution for the slow-mo soon.
    Also, my gen time is longer by 2-8 mins now, which is too random to be normal tbh.

    Daxamur (Author) · Aug 15, 2025

    That is definitely not normal haha - Would you mind passing me some of the images you tried so that I can see if I experience the same degradation?

    Daxamur (Author) · Aug 15, 2025

    I am starting to see this behavior in prompts with more dramatic motion, I'll dig into this

    Stirya · Aug 16, 2025 · 1 reaction

    Daxamur After like 5 hours of testing, I think I found two main culprits: memory usage, and me... (I'm dumb)
    About the quality, I tested more styles and I need to say, I might have over-exaggerated a bit when I said the quality is worse; I think WAN isn't familiar with the style I tested it with before (same style as the video I put in the gallery). If that is the case, I'm so sorry.

    Talking about the long gen time, I found out that when the RAM is almost 100% used, the first step of the 1st and 3rd samplers takes double, if not triple or more, the average time for me. Which is weird, because v1.1 didn't max out my RAM.
    (In v1.1, the average time for 1 step is 1 min; if RAM is maxed in v1.2, it'll be 2-4 min for the 1st step.)

    (Note: for some unknown reason, the steps in the first sampler take double the time (2 min) of the other samplers (1 min), even if the RAM wasn't 100% used.)

    As for the randomness of gen time, it was because I was comparing the time of the first gen after launching Comfy (didn't max out the RAM) and the rest of the gens after (maxed the RAM).
    (FYI: models I used: Q4_K_M. Specs: RTX 3060 12GB, 32 GB RAM)

    TL;DR: 1 - memory optimization issues. 2 - I didn't test properly before.

    Stirya · Aug 16, 2025 · 1 reaction

    Daxamur Also, you might wanna check out MagCache if you don't know about it. They say it is a better TeaCache, which is interesting. No idea how to implement it, so the credit is yours :)

    Daxamur (Author) · Aug 16, 2025

    Stirya Thanks for the follow-up! Glad to hear it. I was banging my head against some very specific examples of issues that were brought up to me, and I think at this point the problems I was hitting have to be either a limitation of WAN itself or of the lightx2v and adjacent loras I tested in their current state. Overall, I find I prefer the results in the latest version the majority of the time, for sure.

    CharlieBrown0115 · Aug 16, 2025 · 4 reactions

    Hi, thanks for your hard work on these workflows. I have a question — in the i2v1.2 workflow, is there any way to automatically scale the images and choose the video resolution? Since I only work with i2v, all my images have different sizes, and I have to find which aspect ratio fits the image I’m working with, which often causes the final video to be stretched or squashed (or the opposite) in any direction.

    And the other question — what is mmaudio? This is the first time I’ve heard of it.

    Hi, I also wanted to tell you that I work a lot with first-frame/last-frame workflows. I have one that gives me good results, but it’s a bit slow. I’d love to see one of your first-frame/last-frame workflows — just from looking at the i2v1.2 workflow, I know you’d make something spectacular. Of course, only if you want to and have the time.

    Daxamur (Author) · Aug 16, 2025 · 1 reaction

    The automatic scaling is definitely on my list as it has been killing me as well haha, and MMAUDIO is audio synthesis that I've tied in to the workflow's video generation - the results range from decent to interesting, but it's a lot of fun to use!

    First / Last Frame and looping flows are definitely in the works as well

    CharlieBrown0115 · Aug 27, 2025

    @Daxamur What is the reason you use 3 KSamplers instead of 2? Is there a way to use only 2 without compromising the quality of the generated clips? I think that using 2 KSamplers would increase the video generation speed, right? Btw, I just tried your latest I2V workflow and I'm getting all the bugs; couldn't fix them, so I guess I'll wait for the next update of this workflow with all the bugs fixed.

    kreegunlord015 · Aug 16, 2025 · 2 reactions

    Not sure if I'm messing something up, but I tried using res_xx/bong in the three samplers, and the result was all distorted. However, Euler/beta57 works great for me.

    Daxamur (Author) · Aug 16, 2025 · 1 reaction

    There's a good chance you didn't mess anything up - that res_x // bong_tangent recommendation comes from earlier versions of the flow; I actually haven't tested with those on v1.2 myself. I'll update the model page to specify the version. Glad to hear euler // beta is working well for you!

    Lxm · Aug 16, 2025 · 5 reactions

    One of the best workflows, thank you!!

    Daxamur (Author) · Aug 18, 2025

    Thanks - I'm glad you like it!

    meowmeow12345 · Aug 16, 2025 · 3 reactions

    The MagCache guys have a ComfyUI node already, but limited to 2.1 for now. I gave it a shot anyway:
    https://freeimage.host/i/FbhIxZF

    But uh... I'm not really sure why, but the first instance of the node bugs out. Maybe there need to be three separate pipelines vs only 1 low and 1 high? Not sure... I skipped the first sampler and was able to get it on for the 2nd and 3rd. Well, worth checking out probably, for sure when the 2.2 version is out, pretty soon I think.

    Daxamur (Author) · Aug 16, 2025

    Yeah, unfortunately the steps are too low in this flow for TeaCache / MagCache to provide any benefit (or avoid breaking entirely). From a high level, they need a certain number of steps to build a sufficient cache, and then additional steps to provide any benefit from using that cache - too few steps and you'll get artifacts, degradation, or an outright break.

    In this flow, with each sampler using such a low step count, combined with the fact that the model is different in each sampler, tea / mag will likely just end up destroying the latent.

    meowmeow12345 · Aug 16, 2025 · 1 reaction

    Oh I see... hmm... may I ask, when you do your runs, what kind of it/s are you getting? I realized my SageAttention is all kinds of f'd up and I've been working with ChatGPT diligently for hours now LOL

    Daxamur (Author) · Aug 16, 2025 · 1 reaction

    meowmeow12345 I'll have to check on exact numbers again here, but I currently have an RTX 5090 + 128GB RAM - from memory, the two initial steps take 40-50ish s/it each, and the remaining steps 20-30ish s/it each. This flow takes me roughly 6-8 minutes to execute in full for a 5 second video (81 frames pre-interpolation - the base video takes something like 1 minute per second of runtime to generate, maybe slightly over) with the included configuration, depending on what else I have running.

    + You're definitely not alone when it comes to Triton / Sage issues - I've updated the WF description here with a deployment method that automates this, which @CRAZYAI4U pointed me to, in case you and ChatGPT are unable to figure it out!

    Workflows

    Wan Video 2.2 T2V-A14B

    Details

    Downloads: 571
    Platform: CivitAI
    Platform Status: Available
    Created: 8/15/2025
    Updated: 5/13/2026
    Deleted: -