
    Daxamur's Wan 2.2 Workflows

    If you'd like to support me, check out my Patreon!
    DM to inquire about custom projects.


    -NEWS-

    Responses are delayed as I'm heads down working on getting my next release ready for you all - once released, responses will go back to normal!

    v1.2.1 Out Now! - Update to DaxNodes via ComfyUI manager required

    • FLF2V added with GGUF support - no new models required

• Fixed the ability to independently disable / enable upscaling and interpolation

• Dedicated resolution picker nodes; added the auto-resizing functionality from v1.3.1 to I2V and FLF2V


    DaxNodes now available via ComfyUI Manager, no more git clone required!


    Current Tracked Bugs:

• KJNodes Get / Set nodes reported as missing for some users - if this happens, ensure you download the latest version of DaxNodes from ComfyUI Manager and re-import the workflow! - In progress


    If you see a "FileNotFoundError ([WinError 2] The system cannot find the file specified.)" from VideoSave or other video-related nodes, FFmpeg is missing or not in your system PATH.

    • Setup (Full Version Required):

    • Download the full FFmpeg build

    • Extract it to a stable location (e.g., C:\ffmpeg).

    • Add C:\ffmpeg\bin to your system PATH:

• Open "Edit the system environment variables" -> Environment Variables....

    • Under System variables, select Path -> Edit....

    • Click New and add C:\ffmpeg\bin.

    • Save and exit.

    Restart ComfyUI (and your terminal/command prompt).

    After this, everything should work!
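To confirm FFmpeg is actually visible to the Python process ComfyUI runs in, here's a quick sanity check (a minimal sketch, independent of any workflow):

```python
import shutil
import subprocess

# Resolve ffmpeg on the current PATH, the same way video nodes will find it.
ffmpeg = shutil.which("ffmpeg")
if ffmpeg is None:
    print("ffmpeg not found on PATH - re-check the steps above and restart your terminal.")
else:
    print(f"ffmpeg found at: {ffmpeg}")
    subprocess.run([ffmpeg, "-version"], check=True)  # print the version banner
```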


    v1.3.1 Features

    Segment-Based Prompting

• Persistent Positive Prompt: Keeps consistent details across the entire video (e.g. “A woman with green eyes and brown hair in her warmly lit bedroom”).

• Segment Positive Prompts: Separated with +, one per segment (e.g. “She is writing in a journal + She closes the journal and stands up + She walks away”) - see the parsing sketch below.

• Gives you far more control over long-form videos and helps reduce WAN’s tendency to render weird camera movements or jitters at the start of I2V.
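For intuition, here's a minimal sketch of how a '+'-separated segment prompt could be combined with the persistent prompt - the helper below is illustrative, not the actual DaxNodes parsing code:

```python
def build_segment_prompts(persistent: str, segments: str) -> list[str]:
    """Split '+'-separated segment prompts and prepend the persistent details."""
    parts = [p.strip() for p in segments.split("+") if p.strip()]
    return [f"{persistent}, {part}" for part in parts]

prompts = build_segment_prompts(
    "A woman with green eyes and brown hair in her warmly lit bedroom",
    "She is writing in a journal + She closes the journal and stands up + She walks away",
)
for i, prompt in enumerate(prompts, 1):
    print(f"segment {i}: {prompt}")
```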

    Endless-Style Looping

    • Segments can chain "infinitely" (I capped the node at 9999), creating effectively endless loops.

    • The Video Execution ID manages overwrites and stitching - just increment the ID as you generate new sequences.
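Under the hood this is just namespacing: each execution ID gets its own set of segment files, stitched in order. A rough sketch (paths and naming here are hypothetical, not the actual temp layout):

```python
from pathlib import Path

def segment_clips(output_root: str, execution_id: int) -> list[Path]:
    """Collect one run's segment clips in generation order (hypothetical layout)."""
    run_dir = Path(output_root) / ".tmp" / str(execution_id)
    # Assumes segments are numbered sequentially, e.g. seg_0001.mp4, seg_0002.mp4, ...
    return sorted(run_dir.glob("seg_*.mp4"))

# Reusing an ID overwrites that run's files; incrementing it starts a fresh sequence.
print(segment_clips("ComfyUI/output", 7))
```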

    Streaming RIFE VFI + Upscaling

• Tweaked RIFE VFI and upscaling now stream frames instead of holding entire sequences in VRAM/RAM (pattern sketched below).

    • Allows much longer videos, smoother interpolation, and sharper upscales without OOM errors.
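The pattern in miniature: treat the frame sequence as a generator so only a couple of frames are resident at once - a sketch of the idea, not the node implementation (`interpolate(a, b)` and `upscale(f)` stand in for RIFE VFI and the upscale model):

```python
from typing import Callable, Iterable, Iterator

def stream_vfi_upscale(frames: Iterable, interpolate: Callable, upscale: Callable) -> Iterator:
    """Yield upscaled frames with in-betweens inserted, two source frames at a time."""
    it = iter(frames)
    prev = next(it, None)
    if prev is None:
        return
    yield upscale(prev)
    for cur in it:
        yield upscale(interpolate(prev, cur))  # synthesized in-between frame
        yield upscale(cur)
        prev = cur
```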

    Face Detection & Drift Correction

    • Intelligent Mediapipe face frame detection locks focus on characters.

• Drift correction ensures the final video runs at least as long as requested - instead of cutting mid-generation, it adds full extra segments until the target frame count is met or exceeded (see the sketch after this list).

    • This way, no generated frames are wasted, and you always end up with smooth, complete segments.

    • Fully toggleable, with adjustable frame look-back settings.
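In arithmetic terms the drift correction is a round-up: with a fixed number of frames per segment, the workflow keeps adding whole segments until the target is covered. A sketch, assuming fixed-length segments:

```python
import math

def segments_needed(target_frames: int, frames_per_segment: int) -> int:
    """Whole segments required to meet or exceed the requested frame count."""
    return math.ceil(target_frames / frames_per_segment)

# e.g. a 290-frame target with 81-frame segments -> 4 full segments (324 frames),
# rather than truncating the fourth segment mid-generation.
print(segments_needed(290, 81))  # 4
```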

    Resolution Handling

    • T2V: Standard WAN resolution presets with optional overrides.

• I2V: The input image is scaled to WAN-native resolutions, preserving aspect ratio; “Native” passthrough is supported (see the sketch below).
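A simplified version of the aspect-preserving pick - the preset table below is illustrative, not the node's exact list:

```python
# Illustrative WAN-style presets as (width, height); the node's actual table may differ.
PRESETS = [(832, 480), (480, 832), (1280, 720), (720, 1280), (960, 960)]

def pick_resolution(width: int, height: int) -> tuple[int, int]:
    """Choose the preset whose aspect ratio is closest to the input image's."""
    aspect = width / height
    return min(PRESETS, key=lambda p: abs(p[0] / p[1] - aspect))

print(pick_resolution(1920, 1080))  # -> (1280, 720)
```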

    QoL & Management

    • Toggle upscaling/interpolation independently.

• Temp file output is organized by execution ID - clear /output/.tmp/ periodically to save space (a cleanup sketch follows).
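If you'd rather script that cleanup, something like this works - it assumes the /output/.tmp/ layout above and deletes every run directory, so point it carefully:

```python
import shutil
from pathlib import Path

tmp_root = Path("ComfyUI/output/.tmp")  # adjust to your install location
if tmp_root.exists():
    for run_dir in tmp_root.iterdir():
        if run_dir.is_dir():
            shutil.rmtree(run_dir)  # removes that execution ID's temp segments
```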

    Looking Ahead

This workflow is still experimental; future versions will expand on segment control, smarter handling of motion/camera behavior, more adaptive face tracking, and even audio/video integration for cinematic sequences. Big things are coming!


    Notes

I've done my best to place most of the nodes you'd want to configure in the lower portion of the flow, (roughly) in sequence, while most of the operational / backend stuff sits at the top. Nodes have been labeled according to their function as clearly as possible.

Beyond that:

    • NAG Attention is in use, so it is recommended to leave the CFG set to 1.

    • The sampler and scheduler are set to uni_pc // simple by default as I find this is the best balance of speed and quality. (1.1> Only) If you don't mind waiting (a lot, in my experience) longer for some slightly better results, then I'd recommend res_3s // bong_tangent from the RES4LYF custom node.

• I have set the default number of steps to 8 (4 steps per sampler) as opposed to 4, as this is where I see the most significant quality / time tradeoff - but this is really up to your preference.

    • This flow will save finished videos to ComfyUI/output/WAN/<T2V|T2I|I2V>/ by default.

    I2V

• The custom node flow2-wan-video conflicts with the WAN image-to-video node and must be removed for the workflow to work. I have found that this node does not get completely removed from the custom_nodes folder when removed via the ComfyUI Manager, so it must be deleted manually.

    GGUF

• All models used with the GGUF versions of the flows are the same, with the exception of the base high- and low-noise models. You will need to determine which GGUF quant best fits your system, and then set the correct model in each respective Load WAN 2.2 GGUF node accordingly. As a rule of thumb, your GGUF model should ideally fit within your VRAM with a few GB to spare (a sanity-check sketch follows this list).

    • The examples for the GGUF flows were created using the Q6_K quant of WAN 2.2 I2V and T2V.

• The WAN 2.2 GGUF quants tested with this flow come from the following locations on Hugging Face:
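Back to the rule of thumb above: a rough way to sanity-check a quant before committing to it is to compare its file size against free VRAM - a sketch using PyTorch (the filename is hypothetical):

```python
import os
import torch

gguf_path = "models/unet/wan2.2_i2v_high_noise-Q6_K.gguf"  # hypothetical filename
model_gb = os.path.getsize(gguf_path) / 1024**3
free_gb, total_gb = (b / 1024**3 for b in torch.cuda.mem_get_info())  # requires CUDA

# Rule of thumb: the quant plus a few GB of headroom should fit in VRAM.
print(f"model: {model_gb:.1f} GB, free VRAM: {free_gb:.1f} / {total_gb:.1f} GB")
if model_gb + 3 > free_gb:
    print("Consider a smaller quant (e.g. Q5_K or Q4_K).")
```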

    MMAUDIO

    • To set up MMAUDIO, you must download the MMAUDIO models below, create an "mmaudio" folder in your models directory (ComfyUI/models/mmaudio), and place every mmaudio model downloaded into this folder (even apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors).
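A quick way to verify the layout described above (only the CLIP filename is taken from this page; add the rest of the models as you download them):

```python
from pathlib import Path

mmaudio_dir = Path("ComfyUI/models/mmaudio")  # adjust to your install location
required = [
    "apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors",
    # ...plus every other MMAudio model from the download links
]

missing = [name for name in required if not (mmaudio_dir / name).exists()]
print("MMAudio models OK" if not missing else f"Missing from {mmaudio_dir}: {missing}")
```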

    Block Swap Flows

    • Being discontinued as I have found that the native ComfyUI memory swapping conserves more memory and slows down the process less in my testing. If you receive OOM with the base v1.2 flows, I'd recommend trying out the GGUF versions!

    Triton and SageAttention Issues

    • The most frequent issues I see users encounter are related to the installation of Triton and SageAttention - and while I'm happy to help out as much as I can, I am but one man and can't always get to everyone in a reasonable time. Luckily, @CRAZYAI4U has pointed me to Stability Matrix which can auto-deploy ComfyUI and has a dedicated script for installing Triton and SageAttention.

    • You will first need to download Stability Matrix from their repository, and download ComfyUI via their hub. Once ComfyUI has been deployed via the hub, click the three horizontal dots to the top left of the ComfyUI instance's entry, select "Package Commands" and then "Install Triton and SageAttention". Once complete, you should be able to import the flow, install any missing dependencies via ComfyUI manager, drop in your models and start generating!

    • Will spin up a dedicated article with screenshots on this soon.

    Models Used

    T2V (Text to Video)

    I2V (Image to Video)

    MMAUDIO

    Non-Native Custom_Nodes Used

    Description

    • Requires update to DaxNodes

    • FLF2V Support

    FAQ

    Comments (88)

TheSanityInspector · Aug 30, 2025

    Looks great and I look forward to trying it. But, why did I get six notifications in a row for it in Civitai's Updates tab?

keweje1485771 · Aug 31, 2025

I'm getting them as well, but it's all working.

BubbleHash · Sep 1, 2025

    Because he updated all of the workflows, for I2V, T2V, GGUF, etc.

Era1701 · Sep 2, 2025

    Hey! Remember me? That guy on Reddit? If you have any new workflow that you need to test, please feel free to contact me!

nicefrog77 · Sep 2, 2025

I'm trying to understand the process, but I've tried everything and I can't get the loop to work. It just extends the video. I'm using version I2V 1.3.1 GGUF.

Ereintax · Sep 2, 2025

    Really good

j_guilhem3902 · Sep 2, 2025

Wow! This is really impressive! Such a complete workflow with very good ideas and a great implementation! Thanks a lot for sharing this ;-)
I worked on a 3-sampler multi-segment workflow myself, but integrating the face ID for last-frame selection is genius!
The only limitation for me is not being able to select a different LoRA per segment. Combining all LoRAs can create a mess with unwanted behavior in the segments where we don't need them. Apart from that, it's a piece of art!

QuodCausis · Sep 2, 2025 · 1 reaction

These are probably the best workflows out there now, including Kijai's.

I'm getting a missing-node error:

    ResolutionPickerFLF2V


im664 · Sep 3, 2025 · 1 reaction

The missing node package is DaxNodes. It is listed above, but it took me a while to suss it out myself.

kreegunlord015 · Sep 3, 2025

    If you want to follow the boundaries while using Dax's workflow, use the MOE sampler instead of changing the values.

    I’m not entirely sure if this is the right approach; it might need some adjustments, but it’s working for me. I’ll wait for Dax to confirm.
    https://ibb.co/fzJXWxwb

meowmeow12345 · Sep 3, 2025

    There is something wrong with the MOE sampler. The quality of the animation and detail is severely degraded vs using 2 or 3 separate high/low samplers. Even the creator of the MOE sampler acknowledged the issue in his Reddit post.

kreegunlord015 · Sep 4, 2025

@meowmeow12345 I don't see any issues on my end, but yes, the node is still in its early stages, so I don't recommend using it yet. I reverted to the original node and did my own calculations. Also, you cannot mix samplers and schedulers; you can only use one.

Sisana · Sep 4, 2025

    Amazing workflows thanks for sharing them. Does anyone have a solution in I2V / FLF2V where the brightness / contrast gets higher as the video goes on? Have been trying to create seamless loops but it's pretty noticeable.

Imbrium · Sep 4, 2025

Segmented approach is SO good. Not only do the workflows work well, but they're also clean and organized in a way that lets me actually learn. You are the best.

kevenggg868 · Sep 6, 2025

I haven't run the wf yet, but this is the most well-organized wf I've seen. Respect :-)

EnragedAntelope · Sep 6, 2025

    Thank you for making and sharing! I just tested T2V 1.3.1 (exp) twice. In the segment positive prompts, I used:
    "Bunnies playfully hop out from behind rocks. + A fox walks toward the stream and starts poking his head in to catch fish. + A bear emerges from the woods to hop with the bunnies. "

    with segment length 5 and total length 15.


However, there was no fox or bear - not sure if there's a bug somewhere that prevents parsing? I thought I followed the instructions correctly by using the + and having 3 total segments (default settings). I made no changes to the wf, so I wanted to make you aware of this potential bug.

Psi_Clone · Sep 7, 2025

    Hey, just a thought, but can we have something like this implemented for proper frame-by-frame animation?

    https://huggingface.co/TencentARC/ToonComposer

I mean, we already have a First - Last frame workflow; this would just add an option to insert in-between frames, with a timestamp option for when they should occur in the generation.

Edit - I have used the FLF2V workflow and it is awesome - this would basically enhance it and give much more detailed control over that workflow.

vusyrvisehievh · Sep 8, 2025

I've run into some strange behavior. I'm trying I2V at 10 sec (5-sec segments). The first segment was generated, as I can see in the console, but only its last frame was saved and execution stopped. Maybe all segments were generated, but there are still no errors or result files, and no temp files aside from the Find and Encode Keyframe Selected Frame node.

vusyrvisehievh · Sep 16, 2025

    Update
The image-only result was due to a wrong connection to the FPS nodes; they basically set FPS to 0.
I was able to run the workflow, but Upscale / Interpolation isn't working because another node doesn't like the Image input it gets, for some reason.

ezolor · Sep 9, 2025

    Would it be possible to do loras by segment as well?

ezolor · Sep 9, 2025

Or honestly, even just initial-segment LoRAs vs subsequent-segment LoRAs, if that's easier to add.

ezolor · Sep 9, 2025

    Okay found out that was possible by redirecting the segment get/setters

TonySSS · Sep 10, 2025

Nicely made and very clean workflow; however, I personally have an issue with the frame interpolator node, as it always seems to crash ComfyUI for some reason ('killed' output in the console), so I had to disable it - the rest, such as upscaling, works fine. Also, that interpolator node worked fine in other workflows I found outside of this site.

Imbrium · Sep 11, 2025

    By default my frame interp node only had RIFE49, but RIFE47 was chosen by default. Either change it to RIFE49 or download RIFE47

Imbrium · Sep 11, 2025

Also, if you find interp still has issues, it may be due to memory constraints. Try running it with Interp turned off to validate.

TonySSS · Sep 11, 2025

@DrPiePie It works with Interpolation turned off, and in other workflows that have that node it worked with RIFE47, so I assume my GPU is probably the issue for some workflows - RTX 3060 12GB.

TKGHN · Sep 11, 2025 · 1 reaction

I'm getting a lot of [lora key not loaded] messages. Why is that?

hyperluminal · Sep 16, 2025

Because you might be using WAN 2.1 LoRAs, or LoRAs that are I2V rather than T2V or vice versa. You can't just throw LoRAs at the model and expect perfect output.

favas644 · Sep 18, 2025

@smashedshanky That's right. Even so, most 2.1 LoRAs work in 2.2 (somewhat). Just ignore the errors.

turkino · Sep 12, 2025

Would love to see some color correction options in the next update. I did a 20-second video with 5-second segments that started really nice, but by the 3rd-4th segment the video had drifted to a more 2.5D illustration style instead of a "realistic" style.

vusyrvisehievh · Sep 16, 2025

Color drift is a huge problem, totally agree. I've used other multi-segment workflows with color correction that produced much better results in terms of color.

favas644 · Sep 17, 2025

Try the ImageColorMatch node from ComfyUI-Easy-Use. For I2V, connect "ref image" to your original image, connect the VAE output to "target image", and connect the image output to the save video node. Not sure what the correct connection is for T2V (if needed), since I haven't done that so far.

syntec · Sep 13, 2025

    Seems to be an issue with the Patch Sage Attention (PSA) nodes. Every setting now hits an error at the KSampler Advanced nodes - unexpected keyword argument "transfomer_options". If the PSA nodes are set to disabled, there's no error and the workflow functions as normal. Oddly, I don't see too much of a slowdown disabling them either.

zSartan · Sep 13, 2025 · 2 reactions

    Yeah, I have the same error, but without PSA the performance dropped by as much as ~30 sec! ~11:30 min -> ~12:00 min

syntec · Sep 15, 2025

    @zSartan (and anyone else with the issue), there's an open PR. https://github.com/kijai/ComfyUI-KJNodes/pull/386
    Downloading the file (or manually correcting the line in the model optimisation node) seems to fix it for now.

QuodCausis · Sep 17, 2025

    Mate, can you please add blockswap nodes to these? So we can actually make longer and higher resolution videos without running out of memory?

hyperluminal · Sep 28, 2025

m8 you can do that yourself, it's quite easy... why not add a Florence LLM prompt optimizer as well while you're at it

favas644 · Sep 18, 2025 · 1 reaction

EDIT: finally found a workaround: create a conda virtual environment with Python 3.12, then install and run Comfy within that.

It doesn't work on Linux because Mediapipe won't install. This turns out to be due to an incompatibility with embedded Python >3.13. I've been researching a solution, but it seems there is no workaround for Linux (there seems to be a trick to downgrade Python in the Windows portable version only).

favas644 · Sep 18, 2025

EDIT: Turns out to be an issue with KJNodes; it works with the native GGUF loader. Maybe a downgrade of KJNodes solves it - haven't tried yet.

    Doesn't work anymore after ComfyUi/custom nodes update: Return type mismatch between linked nodes: model, received_type(WANVIDEOMODEL) mismatch input_type(MODEL)

Sorry to say - it's a great piece of work, but too unstable at the moment...

Lxm · Sep 20, 2025

When running the workflow the next day, it could no longer be used, failing with this error:

    Prompt outputs failed validation:

    String to Float:

    - Required input is missing: String

    String to Float:

    - Required input is missing: String

Does anyone have a solution for this error in versions 1.3.1 / 1.2.1 / 1.1?

vellane · Sep 24, 2025

    same here: Prompt outputs failed validation: String to Float: - Required input is missing: String String to Float: - Required input is missing: String

lion78899 · Sep 28, 2025

    Same problem, anyone found a fix for that? Thanks in advance.

lion78899 · Sep 28, 2025

Connecting the "Convert FPS to Float" node from "string" to "String" fixed it.

Lxm · Sep 28, 2025

    @lion78899 Do you have any pictures? I want to see where to connect to

lion78899 · Sep 28, 2025

@Lxm On the node itself, just pull the connection up one on the left, from "string" to the "String" above.

Lxm · Sep 28, 2025

    @lion78899 Thank you, this issue has been resolved

Lotharious · Sep 24, 2025

    Best workflow I've come across! Amazing!

    I am trying to load some LORAs into the Low & High Lora nodes, however when I attempt to run the workflow I get this error:

    Power Lora Loader (rgthree)

    RgthreePowerLoraLoader.load_loras() missing 1 required positional argument: 'clip'

    I notice the clip input and output of these LORA nodes aren't connected to anything, is this normal? How do I fix this error?

    Thanks!

lotu5 · Sep 25, 2025

    Amazing workflow!

I'm working around the KJNodes bug, but my solution of manually connecting things might be lowering quality.

I've tried everything to resolve the KJNodes issue except reverting everything to the August versions, as I need KJNodes for other workflows. Hopefully you update your nodes and workflow!

hyperluminal · Sep 28, 2025

    what do you mean?? it works with the latest kj node git pull........

Grimmster · Oct 15, 2025

@smashedshanky I'm having the issue of KJNodes Get/Set reporting missing nodes, and I have the latest (10/7) installed through ComfyUI - there is an open bug about this on the main page of this workflow.

Nukhem · Sep 29, 2025 · 1 reaction

    Great, this workflow messed up two of my most used node packs... unable to reinstall them

hyperluminal · Oct 9, 2025

    sounds like a you problem bud

CitrusB · Oct 5, 2025

Amazing workflow! Managed to generate a 2-second video in 2 minutes. For those having issues with their video being too high-contrast, try editing the NAG Attention values. I just copied the default values from Wan2GP and it worked great. Edit both High/Low:

    nag_scale: 1.0

    nag_alpha: 0.5

    nag_tau: 3.5

jazzyreynard285 · Jan 10, 2026

    I think you just disabled NAG, because it needs to be a value higher than "1" to work with CFG 1.

kentskooking · Oct 6, 2025

    Many thanks, this is a stellar workflow. Love the way it's organized too. 5 stars!

Grimmster · Oct 15, 2025

I'm unable to generate any video with more than one segment (T2V 1.3.1). I am getting the missing-KJNodes error when I open the workflow, and I have the latest KJNodes and DaxNodes.

robomaraiart · Oct 29, 2025

    Command '['C:\\ComfyUI_windows_portable\\python_embeded\\Lib\\site-packages\\triton\\runtime\\tcc\\tcc.exe', 'C:\\Users\\rsgra\\AppData\\Local\\Temp\\tmpbs7oo_18\\cuda_utils.c', '-O3', '-shared', '-Wno-psabi', '-o', 'C:\\Users\\rsgra\\AppData\\Local\\Temp\\tmpbs7oo_18\\cuda_utils.cp312-win_amd64.pyd', '-fPIC', '-D_Py_USE_GCC_BUILTIN_ATOMICS', '-lcuda', '-lpython312', '-LC:\\ComfyUI_windows_portable\\python_embeded\\Lib\\site-packages\\triton\\backends\\nvidia\\lib', '-LC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\lib\\x64', '-IC:\\ComfyUI_windows_portable\\python_embeded\\Lib\\site-packages\\triton\\backends\\nvidia\\include', '-IC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.8\\include', '-IC:\\Users\\rsgra\\AppData\\Local\\Temp\\tmpbs7oo_18', '-IC:\\ComfyUI_windows_portable\\python_embeded\\Include']' returned non-zero exit status 1.

I got this - how can I switch off Triton completely?

vampirox22 · Oct 29, 2025

Thanks for these amazing workflows, I'm getting pretty good results with them.

infinaeon · Nov 1, 2025

    I just wanted to say this is literally the best workflow I have used. (I2V GGUF) Thank You!

joyy114 · Nov 3, 2025

    If you encounter the following error message when using the I2V workflow in ComfyUI:

    KSamplerAdvanced
    Given groups=1, weight of size [5120, 36, 1, 2, 2], expected input[1, 32, 21, 160, 90] to have 36 channels, but got 32 channels instead

    This issue is highly likely related to a compatibility problem with the flow2-wan-video custom node.

    Solution: Deleting this custom node resolved the issue, and the I2V workflow (v1.2.1 and potentially others) started running correctly.

    This was based on my recent experience after a fresh ComfyUI installation, where a previously working I2V workflow failed. Others on Reddit reported the same fix. I hope this helps anyone else facing this persistent error!

GuyFreely · Nov 3, 2025

    Maybe I missed something obvious, but is there an easy way to save the last frame instead of the first frame when doing a single I2V generation?

Xdivine · Nov 28, 2025 · 1 reaction

    You can use the "pick from batch" node from the mtb pack. It allows you to choose X number of images from either the front or back of the image set. So you could just do 1 frame from the end.

spammyspamspam · Nov 4, 2025

    The absolute GOAT, thank you for your work!

slarkfinley152 · Nov 17, 2025

    I tried the 1.3 version and it seems it combines the video from the previous generation.

mezonih75 · Nov 19, 2025

    can anyone explain what the Video Execution ID is and how to make sequences?

2950678336431 · Nov 21, 2025

    Regarding the Triton library, my current Python version is 3.12. If I don't want to downgrade to 3.11, are there any other ways to make the workflow work normally?

stewi0001 · Nov 27, 2025

    I am having a rough time. This workflow is an awesome work in progress. It worked for me the first time after turning off the sageattention with great results. I have a Blackwell video card, 5070 Ti. After that, any time I try to run it, I keep getting this error:

ComfyUI Error Report - Error Details:
Node ID: 167
Node Type: ImageUpscaleWithModel
Exception Type: RuntimeError
Exception Message: CUDA error: invalid argument. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

    I am still a noob at video workflows... Keep up the great work, wish I could be more helpful.

marcyyyy · Dec 22, 2025

How do I turn off Sage Attention? I'm trying to run it and it says I need SageAttention, and I don't want to go through another 2-hour tutorial to install it.

stewi0001 · Dec 22, 2025

    @marcyyyy There are some additional nodes, hidden above the main workflow. The 2 groups on the left are "Process WAN 2.2 I2V High" and "Process WAN 2.2 I2V Low". Both of them have a node "Patch Sage Attention (High/Low)". It should be the third node from the top. Switch both of them to disabled.

marcyyyy · Dec 22, 2025 · 1 reaction

    @stewi0001 damm i didnt even see those

tamilboy1 · Dec 2, 2025

Great workflow! Just had to fix the FFmpeg issue, but overall it works great - thanks for the effort.

MarcanOlsson · Dec 8, 2025

Any way to get the v1.3.1 version to stop generating videos? I have it set to segment_length 4 and total_length 12, but it's giving me six 3-second videos.

marcyyyy · Dec 23, 2025

So I got it working, but I have some problems. I had SageAttention issues, but then I was told there were more nodes above; I disabled them and fixed the problem. Then I had GPU issues, which I fixed by editing run_nvidia_gpu.bat and adding the script that Grok gave me. So I finally got past all that and generated a video. But it's noisy, deep-fried, and blurry - though I can see that the animation behind the mess is clean and really good.

It feels like I'm 80% there and I just need to tweak some settings, but I can't keep trial-and-erroring it - I'm hoping one of you guys knows.

    I'm using SmoothmixWan22 for both high and low,

    Loadlightx2v is the default the workflow told me to use, everything else is default.

    I disabled upscaling and interpolation green nodes.

    Generation parameters are the default,

    resolution set to low 480x854 to test

    Ksampler is default but I disabled add noise for all 3.

    I have no idea what the nodes above the workflow do aside from disabling sage attention so I'm not gunna touch it.

    Any ideas?

marcyyyy · Dec 23, 2025

I think it's because some parameter's strength is too high, so I'm turning any 1 I see down to 0.8 to see if it works. I suspect it could be loadlightx2v high and low, but it takes about 20 minutes per video, so it's a waiting game of trial and error. But as long as I don't get an error, I'm fine with however long it takes, because I can just run it overnight.

edau102 · Jan 3, 2026 · 1 reaction

@marcyyyy You need to remember SmoothmixWan22 already has various models and LoRAs baked in. One of them is a speed LoRA. That is why you don't use a speed LoRA with SmoothMix. Start with that.

SimarglV · Jan 8, 2026

The workflow crashes with an error.
ComfyUI v0.8.2
Everything worked before.

!!! Exception during processing !!! module 'mediapipe' has no attribute 'solutions'
Traceback (most recent call last):
  File "H:\SataSSDMini\StableMatrix\Packages\VideoAudioGen\execution.py", line 475, in execute
    obj = class_def()
  File "H:\SataSSDMini\StableMatrix\Packages\VideoAudioGen\custom_nodes\DaxNodes\nodes\video\face_frame_detector.py", line 77, in __init__
    self.mp_face_detection = mp.solutions.face_detection
AttributeError: module 'mediapipe' has no attribute 'solutions'


SimarglV · Jan 8, 2026

    "Fixed" by downgrading mediapipe to 0.10.21

twinweeknd686 · Jan 9, 2026

Best WAN 2.2 workflows out there - would love to also get a T2I workflow, because yours are the best.

darkwaterramen · Jan 13, 2026

For me, I keep getting this error for the FPS node in the backend section. I tried fixing it, but no luck.

    String to Float: - Required input is missing: String String to Float: - Required input is missing: String

rumbleskin · Feb 4, 2026

I just downloaded this WF today. Same error. FIX: In my current version of the nodes, the problem was that the "Convert FPS to Float" and "Convert INT FPS to Float" nodes had two String inputs, and the workflow connected to the 2nd String input. Reconnecting the lines to the first String input fixes the problem.

darkwaterramen · Feb 5, 2026

    oh i got it working lol! thanks

metalskin838 · Jan 29, 2026

I keep getting a "lora key not loaded" error and I can't find the mismatch - anyone have any ideas? I'm running the GGUF version of the WF.

Alter_Ego_Echelon · Feb 8, 2026

    I am getting an error connected to the ResolutionPickerFLF2V, any ideas how to fix it?

clashroyalecoolguy · Feb 23, 2026

Did you find a way to fix it? I have the same problem.

Alter_Ego_Echelon · Feb 23, 2026

    @clashroyalecoolguy nope

AbyssalDreams · Mar 28, 2026

Same problem.

whitegoblin420 · Mar 9, 2026 · 3 reactions

    Just wanted to say thank you so much for sharing your workflows with us.

    These are hands down the best I've ever had the pleasure of working with, absolutely the standard moving forward.

I see a lot of comments from people struggling with the install; it took me a bit of time to get v1.3.1 up and running, but I can report it's alive and well here in March 2026.

Anyone having trouble with the setup: just breathe and remember to take your time - go slowly.

    There is nothing wrong with these releases, they work flawlessly, and are well worth it to get right. Do not give up!

    Thanks again for all your hard work and time Daxamur. Can't wait to see your next release.

    /cheers

WrackerRioter · Apr 24, 2026

    ltx 2.3?

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    4,648
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/30/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    daxamursWAN22WorkflowsV121FLF2VT2V_flf2vV121.zip