CivArchive

    Daxamur's Wan 2.2 Workflows

    If you'd like to support me, check out my Patreon!
    DM to inquire about custom projects.


    -NEWS-

    Responses are delayed as I'm heads down working on getting my next release ready for you all - once released, responses will go back to normal!

    v1.2.1 Out Now! - Update to DaxNodes via ComfyUI manager required

    • FLF2V added with GGUF support - no new models required

• Fixed ability to independently disable / enable upscaling and interpolation

• Dedicated resolution picker nodes; auto-resizing functionality from v1.3.1 added to I2V and FLF2V


    DaxNodes now available via ComfyUI Manager, no more git clone required!


    Current Tracked Bugs:

• KJNodes Get / Set reporting a missing-node error for some users. If this happens, make sure you download the latest version of DaxNodes from ComfyUI Manager and re-import the workflow! - In progress


    If you see a "FileNotFoundError ([WinError 2] The system cannot find the file specified.)" from VideoSave or other video-related nodes, FFmpeg is missing or not in your system PATH.

    • Setup (Full Version Required):

    • Download the full FFmpeg build

    • Extract it to a stable location (e.g., C:\ffmpeg).

    • Add C:\ffmpeg\bin to your system PATH:

• Open "Edit the system environment variables" -> "Environment Variables...".

    • Under System variables, select Path -> Edit....

    • Click New and add C:\ffmpeg\bin.

    • Save and exit.

    Restart ComfyUI (and your terminal/command prompt).

    After this, everything should work!
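If you want to confirm FFmpeg is actually visible before relaunching, a quick stdlib-only check (assuming a standard PATH-based install as described above) might look like:

```python
import shutil

def ffmpeg_on_path() -> bool:
    """Return True if the ffmpeg executable is discoverable on PATH."""
    return shutil.which("ffmpeg") is not None

if not ffmpeg_on_path():
    # Matches the FileNotFoundError symptom described above
    print("FFmpeg not found - add C:\\ffmpeg\\bin to your PATH and restart")
```

Run this from the same environment you launch ComfyUI from, since PATH changes only apply to newly opened terminals.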


    v1.3.1 Features

    Segment-Based Prompting

• Persistent Positive Prompt: Keeps consistent details across the entire video (e.g. “A woman with green eyes and brown hair in her warmly lit bedroom”).

• Segment Positive Prompts: Separated with +, one per segment length (e.g. “She is writing in a journal + She closes the journal and stands up + She walks away”).

• Gives you far more control in long-form videos and helps reduce WAN’s tendency to render weird camera movements or jitters on I2V start.
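As a rough illustration of the scheme (not the actual DaxNodes implementation), splitting on + and pairing each segment with the persistent prompt could look like:

```python
def build_segment_prompts(persistent: str, segments: str) -> list[str]:
    """Pair the persistent prompt with each '+'-separated segment prompt."""
    parts = [part.strip() for part in segments.split("+") if part.strip()]
    return [f"{persistent}. {part}" for part in parts]

prompts = build_segment_prompts(
    "A woman with green eyes and brown hair in her warmly lit bedroom",
    "She is writing in a journal + She closes the journal and stands up + She walks away",
)
# Three combined prompts, one per segment
```

Each resulting prompt carries the persistent details, so character and setting stay consistent from segment to segment.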

    Endless-Style Looping

    • Segments can chain "infinitely" (I capped the node at 9999), creating effectively endless loops.

    • The Video Execution ID manages overwrites and stitching - just increment the ID as you generate new sequences.

    Streaming RIFE VFI + Upscaling

    • Tweaked RIFE VFI and upscaling now stream frames instead of holding entire sequences in VRAM/RAM.

    • Allows much longer videos, smoother interpolation, and sharper upscales without OOM errors.
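The idea behind the streaming approach can be sketched as a generator that holds only two source frames at a time (a deliberate simplification of what RIFE VFI actually does, with a plain average standing in for the learned interpolator):

```python
def stream_interpolated(frames, midpoint):
    """Yield source frames plus one synthesized in-between frame per pair,
    keeping at most two source frames in memory at once."""
    it = iter(frames)
    prev = next(it)
    yield prev
    for cur in it:
        yield midpoint(prev, cur)  # synthesized middle frame
        yield cur
        prev = cur

# Scalar "frames" and a simple average standing in for RIFE:
doubled = list(stream_interpolated([0, 10, 20], lambda a, b: (a + b) / 2))
# -> [0, 5.0, 10, 15.0, 20]
```

Because nothing downstream forces the generator to materialize, peak memory stays flat regardless of how long the video is.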

    Face Detection & Drift Correction

    • Intelligent Mediapipe face frame detection locks focus on characters.

    • Drift correction ensures the final video runs at least as long as requested - but instead of cutting mid-generation, it will add full extra segments until the target framecount is met or exceeded.

    • This way, no generated frames are wasted, and you always end up with smooth, complete segments.

    • Fully toggleable, with adjustable frame look-back settings.
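The "full extra segments" behavior amounts to simple ceiling arithmetic; a sketch with hypothetical numbers:

```python
import math

def segments_needed(target_frames: int, frames_per_segment: int) -> int:
    """Full segments required so the total meets or exceeds the target,
    never cutting a segment mid-generation."""
    return math.ceil(target_frames / frames_per_segment)

# e.g. a 200-frame target with 81-frame segments needs 3 full segments (243 frames)
```

The overshoot (243 vs. 200 here) is the point: no generated frames are discarded, and the last segment is as complete as the others.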

    Resolution Handling

    • T2V: Standard WAN resolution presets with optional overrides.

    • I2V: Input image scales to WAN-native resolutions, preserving aspect ratio. “Native” passthrough supported.
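Aspect-ratio-aware snapping to a native preset can be sketched like this (the preset list below is illustrative, not WAN's exact resolution table):

```python
# Illustrative presets only - not WAN 2.2's exact resolution table
WAN_PRESETS = [(832, 480), (480, 832), (1280, 720), (720, 1280), (640, 640)]

def pick_resolution(width: int, height: int) -> tuple[int, int]:
    """Pick the preset whose aspect ratio best matches the input image."""
    aspect = width / height
    return min(WAN_PRESETS, key=lambda wh: abs(wh[0] / wh[1] - aspect))
```

A 1920x1080 input would snap to the 16:9 preset, a portrait input to the portrait one; choosing by aspect ratio rather than pixel count is what preserves the image's framing.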

    QoL & Management

    • Toggle upscaling/interpolation independently.

    • Temp file output organized by execution ID - clear /output/.tmp/ periodically to save space.

    Looking Ahead

This workflow is still experimental; future versions will expand on segment control, smarter handling of motion/camera behavior, more adaptive face tracking, and even integration of audio/video for cinematic sequences. Big things are coming!


    Notes

    I've done my best to place most nodes that you'd want to configure at the lower portion of the flow (roughly) sequentially, while most of the operational / backend stuff sits at the top. Nodes have been labeled according to their function as clearly as possible.

Beyond that:

    • NAG Attention is in use, so it is recommended to leave the CFG set to 1.

    • The sampler and scheduler are set to uni_pc // simple by default as I find this is the best balance of speed and quality. (1.1> Only) If you don't mind waiting (a lot, in my experience) longer for some slightly better results, then I'd recommend res_3s // bong_tangent from the RES4LYF custom node.

• I have set the default number of steps to 8 (4 steps per sampler) as opposed to 4, as this is where I see the most significant quality / time tradeoff - but this is really up to your preference.

    • This flow will save finished videos to ComfyUI/output/WAN/<T2V|T2I|I2V>/ by default.

    I2V

• The custom node flow2-wan-video conflicts with the Wan image-to-video node and must be removed for the workflow to function. I have found that this node does not get completely removed from the custom_nodes folder when removed via the ComfyUI Manager, so it must be deleted manually.

    GGUF

    • All models used with the GGUF versions of the flows are the same with the exception of the base high and low noise model. You will need to determine which GGUF quant best fits your system, and then set the correct model in each respective Load WAN 2.2 GGUF node accordingly. As a rule of thumb, ideally your GGUF model should fit within your VRAM with a few GB to spare.

    • The examples for the GGUF flows were created using the Q6_K quant of WAN 2.2 I2V and T2V.

• The WAN 2.2 GGUF quants tested with this flow come from the following locations on huggingface:
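The rule of thumb above - model file size plus a few GB of headroom should fit in VRAM - can be checked with a trivial helper (the 3 GB headroom default is my assumption, not a hard spec):

```python
def quant_fits(model_size_gb: float, vram_gb: float, headroom_gb: float = 3.0) -> bool:
    """Rule of thumb: the GGUF file plus a few GB of headroom should fit in VRAM."""
    return model_size_gb + headroom_gb <= vram_gb

# A roughly 12 GB Q6_K file on a 16 GB card: 12 + 3 <= 16 -> fits
```

If a quant fails this check, step down (Q6_K -> Q5 -> Q4, etc.) until it passes.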

    MMAUDIO

    • To set up MMAUDIO, you must download the MMAUDIO models below, create an "mmaudio" folder in your models directory (ComfyUI/models/mmaudio), and place every mmaudio model downloaded into this folder (even apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors).
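Creating the folder described above can be done with pathlib; the path assumes a default ComfyUI layout:

```python
from pathlib import Path

def prepare_mmaudio_dir(comfy_root: str) -> Path:
    """Create <comfy_root>/models/mmaudio if missing and return its path.
    Every downloaded MMAudio model file goes into this folder."""
    target = Path(comfy_root) / "models" / "mmaudio"
    target.mkdir(parents=True, exist_ok=True)
    return target
```

After creating it, drop every downloaded MMAudio model into that folder, including apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors.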

    Block Swap Flows

    • Being discontinued as I have found that the native ComfyUI memory swapping conserves more memory and slows down the process less in my testing. If you receive OOM with the base v1.2 flows, I'd recommend trying out the GGUF versions!

    Triton and SageAttention Issues

    • The most frequent issues I see users encounter are related to the installation of Triton and SageAttention - and while I'm happy to help out as much as I can, I am but one man and can't always get to everyone in a reasonable time. Luckily, @CRAZYAI4U has pointed me to Stability Matrix which can auto-deploy ComfyUI and has a dedicated script for installing Triton and SageAttention.

    • You will first need to download Stability Matrix from their repository, and download ComfyUI via their hub. Once ComfyUI has been deployed via the hub, click the three horizontal dots to the top left of the ComfyUI instance's entry, select "Package Commands" and then "Install Triton and SageAttention". Once complete, you should be able to import the flow, install any missing dependencies via ComfyUI manager, drop in your models and start generating!

    • Will spin up a dedicated article with screenshots on this soon.

    Models Used

    T2V (Text to Video)

    I2V (Image to Video)

    MMAUDIO

    Non-Native Custom_Nodes Used

    Description

    Triple sampling for enhanced quality, prompt adherence and motion.

    FAQ

    Comments (18)

noonesuspectst3hbutterfly · Aug 16, 2025 · 9 reactions

    For anyone looking for really stable settings for sampling:

    Model sample ( shift high / low ) = 9

    uni_pc or 2res
    sgm_uniform
    steps ( 8 - 20 )

I used RES4LYF and the docs to get perfect 50/50 sampling on high/low @ 90% cutoff for wan2.2 (A14B)

Not tested for GGUF

Daxamur (Author) · Aug 18, 2025 · 1 reaction

    Thanks for posting! + Will do some testing and make adjustments to the included values for v1.3

CharlieBrown0115 · Aug 20, 2025

    Daxamur cant wait!!!

quieftian01374 · Aug 17, 2025 · 2 reactions

    i wonder if my 3070 can use this lol

MisciAccii · Aug 17, 2025

    Here i'm gonna try with my GTX 1660 Super

Daxamur (Author) · Aug 17, 2025

    Let me know how it goes! You could probably get GGUF Q3_K working - I don't know about the quality though unfortunately.

Squeezes · Aug 17, 2025 · 3 reactions

    I just want to sincerely thank you for taking the time to actually link all the models and nodes you used in the workflow.

Daxamur (Author) · Aug 18, 2025

    Anytime, I'm glad it was helpful - I definitely know the pain of having to hunt down a bunch of models, wanted to make sure anyone using this didn't have to go through that haha

veebee · Aug 19, 2025

is there a way to do interpolation without using the upscaler in this workflow? cause the interpolation only works if the upscaler node is on

Daxamur (Author) · Aug 19, 2025

    This was actually a mistake in the workflow setup on my part - it will be fixed in the next version soon!

veebee · Aug 21, 2025

    @Daxamur oh i see thank you. can't wait for the next version

meowmeow12345 · Aug 19, 2025

Regarding the block swap flow, which module did you use in your testing, and between which nodes did you stick it in (kek)? I tried to throw one in myself for i2v but they all threw errors. I'm curious if it helps for users with less vram (I'm on a 4090).

meowmeow12345 · Aug 21, 2025

    I think "Resolution Master" is the key for easy video resizing in comfyui, it's pretty awesome, and hook it up to kjresize v2

cgsthrasher726 · Aug 23, 2025 · 1 reaction

Great workflow! I had this work perfectly once, now I get...

    Prompt execution failed

Prompt outputs failed validation:
String to Float: - Required input is missing: String
String to Float: - Required input is missing: String

error. I'm not sure how to connect the nodes?

Daxamur (Author) · Aug 24, 2025

Hi! This could be one of a few things - an accidentally broken connection or a potential custom node conflict. There were also a few bugs that I had to fix in v1.3; if that is the version giving you this issue, a fix has been uploaded!

contactsadiqhere273 · Aug 23, 2025 · 1 reaction

im not being able to generate sounds. can someone explain why?

Daxamur (Author) · Aug 24, 2025 · 1 reaction

    Is there an error you're receiving, or is the audio generation outputting silence?

    audio generation outputting silence, also couldn't find any node related to audio in the workflow

    Workflows
    Wan Video 2.2 T2V-A14B

    Details

    Downloads
    300
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/16/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    daxamursWAN22WorkflowsV121FLF2VT2V_OldT2VV12GGUF.zip