    🧩 Wan2.1-FusionX — Official ComfyUI Workflows (WIP)

    📢 7/1/2025 Update!

    New: FusionX Lightning Workflows

    Looking for faster video generations with WAN2.1? Check out the new FusionX_Lightning_Workflows — optimized with LightX LoRA to render videos in as little as 70 seconds (4 steps, 1024x576)!

    🧩 Available in:
    • Native • Native GGUF • Wrapper
    (VACE & Phantom coming soon)

    🎞️ Image-to-Video just got a major upgrade!
    Better prompt adherence, more motion, and smoother dynamics.

    ⚖️ FusionX vs Lightning?
    Original = max realism.
    Lightning = speed + low VRAM, with similar quality using smart prompts.

    👉 Check it out here


    ☕ Like what I do? Support me here: Buy Me A Coffee 💜
    Every coffee helps fuel more free LoRAs & workflows!


    📢 Did you know you can now use FusionX as a LoRA instead of a full base model?
    Perfect if you want more control while sticking with your own WAN2.1 + SkyReels setup.

    🔗 Grab the FusionX LoRAs HERE
    🔗 Or Check out the Lightning Workflows HERE for a huge speed boost.


    This is the official workflow hub for the Wan2.1_14B_FusionX models that can be found HERE

    ⚠️ NOTE: The workflows are embedded inside the PNG files. Just drag one into ComfyUI and it will load up.

    ⚠️ NOTE: Each workflow has detailed notes and links to the correct models. Please read the notes carefully before using.

    And right here, you’ll find a full set of workflows designed to unlock the model’s potential across a range of generation types, including:

    • 🎬 Text-to-Video (T2V) – Available now. Just drag and drop the PNG file into ComfyUI. (I've included a sample video created with the current settings in the folder.)

    • 🖼️ Image-to-Video (I2V) – Available now. Drag and drop the PNG into ComfyUI. I included the start frame from the example video if you want to test it. (Please note: the Wrapper version supports start AND end frames; Native only supports a start frame.)

    • 🎬 VACE Wrapper and VACE GGUF Native – Use a control video and/or a reference image with your text prompt for more control over your output video.

    • 🖼️ VACE non-GGUF Native – Coming soon.

    • 🎬 Phantom Wrapper – Mix up to 5 images into a video for full character and scene control.

    • 🖼️ Phantom Native and GGUF – Coming soon.

    ⚠️ NOTE: For Image-to-Video, set your frame count to 121 and FPS to 24 to get up to a 50% increase in overall motion. After some testing, this really helps!
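A quick back-of-the-envelope check on why 121 @ 24 helps (my own sketch, not part of any workflow; it only assumes clip length = frame count ÷ FPS):

```python
# Compare the default 81 @ 16 setting against the suggested 121 @ 24 setting.
def video_duration(frames: int, fps: int) -> float:
    """Return the clip length in seconds for a given frame count and frame rate."""
    return frames / fps

default_len = video_duration(81, 16)   # the usual 81-frame @ 16 fps setting
boosted_len = video_duration(121, 24)  # the suggested 121-frame @ 24 fps setting

print(f"81 frames @ 16 fps  -> {default_len:.2f}s")   # ~5.06s
print(f"121 frames @ 24 fps -> {boosted_len:.2f}s")   # ~5.04s
# Both clips run about 5 seconds, but 121/81 ≈ 1.49, i.e. roughly 50% more
# sampled frames in the same wall-clock duration, which is where the extra
# perceived motion comes from.
```

In other words, the suggested settings don't make the clip longer; they pack about half again as many frames into the same five seconds.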

    ⚠️ NOTE: Please read the note boxes in the workflows; they contain important details that will help you overcome some errors you may encounter.

    Each category will include both:

    • Native Workflows – Built directly with WAN components for full control and customization.

    • 🚀 Wrapper Workflows – Built on the Kijai WanVideoWrapper for optimized generation speed.

    These are the same workflows used in all demo videos on the model's main page — no extra LoRAs, upscaling, or interpolation. Just clean, raw model outputs with the right settings.


    ⚠️ All required components (e.g., CausVid, AccVideo, MPS LoRAs) are already baked into the model. Do not re-add them unless you know what you're doing.

    Whether you're looking to create cinematic text-to-video scenes, stylized image-driven sequences, or combine multiple references into a single shot — these workflows are your starting point.


    ## 📢 Join The Community!

    We're building a friendly space to chat, share creations, and get support. I'm also adding a channel with some good motion LoRAs to help get more motion in your I2V videos, and I'll be adding other goodies too, so please join us :)

    👉 Click here to join the Discord!

    Come say hi in #welcome, check out the rules, and show off your creations! 🎨🧠

    Comments (87)

    speckrod212Jun 13, 2025
    CivitAI

    Getting the "cannot access local variable 'model_variant' where it is not associated with a value" when trying to execute I2V workflow with all the correct models installed. Any ideas?

    vrgamedevgirl
    Author
    Jun 13, 2025

    Can you join the Discord server so we can see a screenshot of your setup? The Discord server link is in the description. I'm in there all day and can assist you there.

    bhoppingJun 13, 2025· 1 reaction
    CivitAI

    Is there no I2V wan2.1 fusionx gguf models?

    vrgamedevgirl
    Author
    Jun 13, 2025· 1 reaction

    If you scroll all the way down in the description there are links to the GGUF models. There should be one in there.

    bhoppingJun 13, 2025

    @vrgamedevgirl ty

    tupuJun 13, 2025
    CivitAI

    Hi there, what are the minimum system requirements for the wrapper version? I can't get it to work. I have an RTX 3060 and 64GB RAM. Should it work? The I2V workflow does. Thanks.

    vrgamedevgirl
    Author
    Jun 13, 2025· 1 reaction

    The 3060 has 8 or 12 GB of VRAM. If you have 12 it should work with block swapping, and for sure with a GGUF model. If you pop over to the Discord server I can assist you better. Link in description.

    tupuJun 13, 2025

    @vrgamedevgirl 12gb

    tupuJun 13, 2025

    @vrgamedevgirl I'll join discord later on...thanks

    jonk999Jun 15, 2025

    I have 32GB system RAM and RTX3060 12GB and don't have any issues with the GGUF flows. I just updated the non-wrapper I2V flow to load a GGUF model instead of regular one. The T2V GGUF flow works without modification unless you want to add Faceswap or more Loras. I usually update flows to use the Power Lora Loader so can easily add multiple Loras.

    greentheoryJun 13, 2025
    CivitAI

    Really impressive! Is there a way to prompt for more realism? It seems like it outputs perfect composition, like everyone is a model.

    vrgamedevgirl
    Author
    Jun 13, 2025

    Which workflow are you using? Can you join the Discord? I'm more active over there. Link in description!

    crombobularJun 14, 2025
    CivitAI

    fyi it tells the user to download the 30gb model when you open the i2v workflow.

    AlG80Jun 14, 2025
    CivitAI

    That's weird. I have these errors:

    1) WanVideoModelLoader:

    - Value not in list: base_precision: 'fp16_fast' not in ['fp32', 'bf16', 'fp16']

    2) WanVideoSampler:

    - Value not in list: scheduler: 'dpm++/beta' not in ['unipc', 'dpm++', 'dpm++_sde']

    3) If you select fp16 in the first item and select dpm++ in the second item, the process still stops: WanVideoImageToVideoEncode

    WanVideoVAE.encode() got an unexpected keyword argument 'end_'

    vrgamedevgirl
    Author
    Jun 14, 2025

    In order to assist you I'll need more info. Please join the Discord community and I can help you there, where I'm as active as I can be. The link is in the description.

    seductivelyai695Jun 17, 2025

    Do you have the latest WanVideoWrapper custom node?

    AlG80Jun 14, 2025
    CivitAI

    friends, show me your working print screen of workflow, please

    axicecJun 14, 2025

    OP was too lazy to care... thats ok they didnt do much work cause its all copy paste from other people

    vrgamedevgirl
    Author
    Jun 14, 2025· 7 reactions

    @axicec I’m sorry you feel that way, but I can assure you I didn’t copy anything from anyone. I’m not sure what you mean by “too lazy to care” — since posting this model, I’ve spent a lot of time helping others and answering questions. I don’t make any money from this, and I do it outside of my full-time job. Lately, I’ve only been getting about 6 hours of sleep a night just trying to support the community.

    @AlG80 What do you need help with? I have a Discord where I'm around all day to help everyone. Please see the link in the description.

    wqn999Jun 14, 2025· 2 reactions
    CivitAI

    Thank you, great one, for your tremendous contribution to the wan2.1 open-source community. All the authors of the open-source community are great. Their stability and output quality are astonishing. Thank you! This has saved me a lot of time. Through my own configuration (RTX 4060ti, 16GB memory + 64GB RAM), I found that when generating images at a resolution of 1024*576 and then performing a double magnification process, the speed increased to ten minutes! When I used the initial version of wan2.1 14b 720p i2v model, this time required approximately one hour. This improvement is huge and I am very grateful.

    vrgamedevgirl
    Author
    Jun 14, 2025

    Thank you so much for the kind words. Really glad to hear it’s saving you time and running well on your setup — that kind of feedback means a lot. Huge credit goes to everyone in the open-source community who made this possible. I'm continuing to work on improvements, so stay tuned for more updates soon. Appreciate you being part of this journey.

    wqn999Jun 14, 2025· 1 reaction

    @vrgamedevgirl It's a great honor for me to be part of this. The creators of the open-source community are all remarkable. Thank you all.

    ejsdJun 14, 2025

    With the increasing quality of tweeners and resolution upscaling (so upscaling of both time and resolution), your approach is a solid way to go. The recommendation of 121 frames at 24 fps for the video settings produces a lot of "compressed action content" that can be expanded with upscaling. I also like cranking FPS to 48 and using 97, 121, or 145 frames. More frames may mean you need to reduce your resolution to fit in memory. Also try different schedulers, such as flowmatch_causvid and euler/accvid. The second one requires 50 steps, but timewise it works like 10. I have a 4090, so I have half again as much VRAM; a 5090 32GB or a 6000 96GB would be nice, but we have what we have.

    vrgamedevgirl
    Author
    Jun 14, 2025

    @ejsd Just to clarify: the 121 @ 24 was just for Phantom and Image-to-Video. Phantom was trained on that, and I2V seems to get better/faster motion, but it's not required. Text-to-Video and VACE are fine at 81 @ 16, though.

    MeMakeStuffJun 15, 2025
    CivitAI

    I absolutely LOVE the workflow, it makes me able to start making some awesome little clips myself :)

    I'll probably have a 5090 around this time next year, but for the time being I'm using an OC'd 5060 Ti 16GB on an AMD 8700G with 32GB of OC'd RAM (6400, tighter subtimings). Are there any tips to cater more to the 5060 Ti setup?

    Using the Image2Video-FusionX one :)

    vrgamedevgirl
    Author
    Jun 15, 2025

    Did you try block swapping?

    MeMakeStuffJun 15, 2025

    @vrgamedevgirl Not yet, but it also seems to lock up comfy - so I'll report on that.

    I think it halted with the "[2/0_1] Not enough SMs to use max_autotune_gemm mode"

    vrgamedevgirl
    Author
    Jun 15, 2025

    @MeMakeStuff You should join the discord channel, I can assist you better over there. :) link in description.

    huwhitememesJun 15, 2025
    CivitAI

    Thank you so much for all of your hard work and sharing 😍🙏🏻🔥

    vrgamedevgirl
    Author
    Jun 16, 2025

    You're very welcome!

    seductivelyai695Jun 17, 2025
    CivitAI

    Nice.. great results

    fedupscribe687Jun 17, 2025
    CivitAI

    What's the best way to use multiple LoRAs? I tried the rgthree Power Lora Loader, but that doesn't connect to the nodes in the I2V workflow.

    seductivelyai695Jun 17, 2025

    Try the WAN LoRA node, but you have to chain the LoRAs together.

    xiaoc876a131Jun 17, 2025
    CivitAI

    I use native GGUF at a faster speed than the KJ nodes. Is this normal?

    vrgamedevgirl
    Author
    Jun 17, 2025

    Depends on a few things, but I have heard native can be faster than the wrapper sometimes.

    RedditUser981Jun 19, 2025

    @vrgamedevgirl Can you share your workflow? And don't forget to add LoRAs after sharing. Thank you.

    goldsteinmoshe403320Jun 17, 2025
    CivitAI

    Friend, can you write this in human language? " Un-bypass the second Load Image and Resize nodes 🔗 Connect the output from the second Resize node to: 📥 end_image on Image-to-Video Encode 📥 clip_vision on Clip Vision Encode" clip_vision on Clip Vision Encode they are already connected in your workflow!

    vrgamedevgirl
    Author
    Jun 17, 2025

    Which workflow? I have made so many... If they are already connected, then just right-click the ones that are purple and un-bypass them. Since I can't share screenshots in the notes or on here, I would say join the Discord and someone can show you via a screenshot.

    @vrgamedevgirl could you send me a screenshot to mhttassadar dot gmail.com? I don't have them connected to anything at all(( and I don't understand how. Thanks!

    " Connect the output from the second Resize node to: " clip_vision on Clip Vision Encode these nodes cannot be connected((

    vrgamedevgirl
    Author
    Jun 17, 2025

    Please reach out to me on Discord. The server link is in the description.

    flo11ok874Jun 17, 2025
    CivitAI

    After downloading the I2V zip file, the file is broken and can't be extracted...

    vrgamedevgirl
    Author
    Jun 17, 2025· 1 reaction

    It's not a RAR file, it's just a zip that you extract. I would suggest deleting the zip and redownloading the file. Join the Discord server for more assistance.

    flo11ok874Jun 19, 2025

    @vrgamedevgirl OK, I just downloaded the video and dragged it into ComfyUI and it's OK. But how about an ultra-simple workflow for I2V GGUF, like the easy T2V GGUF version? Is that possible? I'm a noob.

    jtmichelsJun 18, 2025
    CivitAI

    The link you shared is broken (it points to an emoji url lol) -- thanks for the great work though.

    ShangTsungVibesJun 18, 2025
    CivitAI

    What gear did you use for this video in post? How much time does it take?

    vrgamedevgirl
    Author
    Jun 18, 2025· 1 reaction

    I'm not sure what you're asking. What do you mean by "gear"? It takes about 120 seconds to create the video.

    ShangTsungVibesJun 19, 2025

    @vrgamedevgirl Sorry :D I'm asking about your PC - what video card, cpu, how much RAM?

    vrgamedevgirl
    Author
    Jun 19, 2025· 3 reactions

    @ShangTsungVibes RTX 5090 with 32GB VRAM and 128GB of RAM. Folks with 12GB are running it just fine, though.

    ShangTsungVibesJun 19, 2025

    @vrgamedevgirl Thanks!

    RedditUser981Jun 19, 2025
    CivitAI

    Kindly please share a fast-version workflow with LoRA support.

    vrgamedevgirl
    Author
    Jun 19, 2025· 1 reaction

    I have shared many workflows, including ones with LoRAs. I'm not sure what you're asking for.

    nicolas1605villarreal304Jun 22, 2025
    CivitAI

    I'm having problems with the workflow; I get this message: "Failed to find the following ComfyRegistry list. The cache may be outdated, or the nodes may have been removed from ComfyRegistry."

    neolsonporto111Jun 24, 2025· 1 reaction
    CivitAI

    Amazing work, so well done, organized and explained. Thank you a million for this effort. This is truly outstanding. Best from Rio.

    vrgamedevgirl
    Author
    Jul 12, 2025

    You're very welcome!

    randomchatter1234776Jun 25, 2025
    CivitAI

    error #1:

    Prompt execution failed

    Cannot execute because a node is missing the class_type property.: Node ID '#111'

    error#2:

    Missing Node Types

    When loading the graph, the following node types were not found

    ModelPatchTorchSettings

    - this node is nowhere to be found among the thousands of available nodes in Manager. So that's the end of the adventure for me.

    vrgamedevgirl
    Author
    Jun 25, 2025

    I think that's part of the KJNodes custom node pack...

    XIA_LuminatrixJun 30, 2025

    @vrgamedevgirl No I had the same problem and I have that custom nodes pack. It didn't work so I reused my old workflow.

    GT123Jun 27, 2025· 1 reaction
    CivitAI

    HOW TO USE GGUF version

    vrgamedevgirl
    Author
    Jun 27, 2025

    I need more context... You take the GGUF model and load it via the GGUF loader. Did you try using the GGUF workflow?

    treefrogofdoomJun 28, 2025· 1 reaction
    CivitAI

    Thanks really excellent workflows!

    However, I did not see a workflow appropriate for the "FusionX_itv_gguf_XS_K_S" checkpoint you provided. Does one exist?

    vrgamedevgirl
    Author
    Jul 12, 2025

    The native GGUF workflow will work for this model.

    12580cx467Jul 4, 2025
    CivitAI

    GGUF can be used together with teacache?

    vrgamedevgirl
    Author
    Jul 4, 2025

    You can't use TeaCache because the step count is already low. TeaCache basically skips steps, so it doesn't work here.

    KOPCAPJul 5, 2025· 1 reaction
    CivitAI

    1. Why is there no GGUF I2V?

    2. Why does the GGUF T2V workflow use the umt5_xxl_fp8_e4m3fn_scaled.safetensors text encoder and not umt5-xxl-encoder-Q3_K_S.gguf? It should logically work better.

    I only have 8GB of VRAM.

    vrgamedevgirl
    Author
    Jul 5, 2025· 1 reaction

    The link to the I2V GGUFs is in the main model description at the end.
    https://huggingface.co/QuantStack/Wan2.1_I2V_14B_FusionX-GGUF/tree/main

    If you want to use the GGUF text encoder, just replace the node and use the model of your choice.

    vAnN47Jul 17, 2025

    I may be blind, because I can't find the GGUF I2V workflow anywhere. I went to the main model description and couldn't find it at the end...

    vrgamedevgirl
    Author
    Jul 17, 2025

    @vAnN47 The model is at the end. Please reach out to me on Discord @ vrgamedevgirl.

    deeplearning13Jul 22, 2025

    @vrgamedevgirl Same problem here: the model is there, but the WORKFLOW for I2V GGUF isn't...

    npc849Jul 6, 2025
    CivitAI

    Great workflow! I'm wondering how to generate videos longer than 5 seconds while keeping the character appearance consistent throughout the video. Any tips or modifications to the current setup?

    vrgamedevgirl
    Author
    Jul 6, 2025· 1 reaction

    Please reach out on discord to discuss link in description

    hdeanJul 12, 2025· 2 reactions
    CivitAI

    This creator is amazing. I've got a 16GB GPU and my generations take quite a while. Nonetheless, this workflow and the others she created are amazing. So far I have used her i2v and t2v workflows and they work marvelously. And prompt adherence is stellar. To make sure my descriptors work I just drop the frame count, run a quick generation, tweak until I get it right, then bring the frame count back to normal and add in the motion. I cannot say enough good things about the workflows. She is also very helpful in her discord.

    vrgamedevgirl
    Author
    Jul 12, 2025

    Thanks for the kind words ❤️

    hdeanJul 13, 2025· 1 reaction

    @vrgamedevgirl  How could I not say something? Really, you have my sincerest gratitude.

    hoyleontour588Jul 13, 2025· 1 reaction
    CivitAI

    Brilliant, mate! No messing around. Worked first time and superb quality. Like the clear instruction boxes too.

    hestiaJul 14, 2025· 5 reactions
    CivitAI

    Where to download the workflows please? The zip only contains example images and videos

    vrgamedevgirl
    Author
    Jul 14, 2025· 2 reactions

    Drag the PNG of the workflow into ComfyUI. It's embedded.

    hestiaJul 14, 2025· 1 reaction

    @vrgamedevgirl Thanks

    adinapunyoJul 15, 2025
    CivitAI

    I get an error on the Ksampler:

    wan_i2v_crossattn_forward_nag() missing 1 required positional argument: 'context_img_len'

    What does it mean?

    vrgamedevgirl
    Author
    Jul 15, 2025

    Which workflow are you using, and with which model?

    LuntrixJul 24, 2025
    CivitAI

    Holy, it's blazing fast and the results are amazing. I could not run the normal 14B without memory issues, so this is huge.

    VanpourixJul 25, 2025
    CivitAI

    Am I the only one who gets this error when importing the workflow in the latest ComfyUI portable version? Does anyone know how to solve it?
    TypeError: can't redefine non-configurable property "value"

    vrgamedevgirl
    Author
    Jul 25, 2025

    This is the first time I've seen this. I just updated ComfyUI a few days ago. Does this happen when you drag the PNG into ComfyUI?

    katkesujit2540Jul 26, 2025
    CivitAI

    Missing Models

    When loading the graph, the following models were not found


    diffusion_models / wan2.1_t2v_1.3B_fp16.safetensors


    Why is this showing when I drag and drop the PNG? I'm also getting an error like "reconnecting" and it gets stuck. Yesterday I was using it with these errors, and my terminal shows "press any key to continue"; when I press Enter the terminal closes.

    vrgamedevgirl
    Author
    Jul 26, 2025

    You need to choose your own models from your PC in all the model loaders. Just click the drop-down and pick yours.

    engineX2Oct 14, 2025· 1 reaction
    CivitAI

    What is the LoRA feature of FusionX? It's so complicated that I can't understand it even after reading about it.

    mlringorMar 5, 2026
    CivitAI

    I can run it, but it's not showing my LoRA's face; it's just the Japanese woman's face with a katana. I set the Load LoRA node to "normal" rather than bypassed, and my LoRA name is loaded. Anything else I'm missing?

    Workflows
    Wan Video 14B t2v

    Details

    Downloads
    2,069
    Platform
    CivitAI
    Platform Status
    Available
    Created
    6/12/2025
    Updated
    5/13/2026
    Deleted
    -