CivArchive

    🧩 Wan2.1-FusionX — Official ComfyUI Workflows (WIP)

    📢 7/1/2025 Update!

    New: FusionX Lightning Workflows

    Looking for faster video generation with WAN2.1? Check out the new FusionX_Lightning_Workflows — optimized with the LightX LoRA to render videos in as little as 70 seconds (4 steps, 1024x576)!

    🧩 Available in:
    • Native • Native GGUF • Wrapper
    (VACE & Phantom coming soon)

    🎞️ Image-to-Video just got a major upgrade!
    Better prompt adherence, more motion, and smoother dynamics.

    ⚖️ FusionX vs Lightning?
    Original = max realism.
    Lightning = speed + low VRAM, with similar quality using smart prompts.

    👉 Check it out here


    ☕ Like what I do? Support me here: Buy Me A Coffee 💜
    Every coffee helps fuel more free LoRAs & workflows!


    📢 Did you know you can now use FusionX as a LoRA instead of a full base model?
    Perfect if you want more control while sticking with your own WAN2.1 + SkyReels setup.

    🔗 Grab the FusionX LoRAs HERE
    🔗 Or Check out the Lightning Workflows HERE for a huge speed boost.


    This is the official workflow hub for the Wan2.1_14B_FusionX models, which can be found HERE.

    ⚠️ NOTE: The workflows are embedded inside the PNG files. Just drag one into ComfyUI and it will load up.
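    If you'd rather inspect the workflow JSON directly, ComfyUI typically stores it in a PNG tEXt metadata chunk under the keyword "workflow". Here's a minimal sketch using only the Python standard library (the helper name and the example filename are mine, not part of the official workflows):

```python
import json
import struct

def extract_comfy_workflow(png_bytes):
    """Return the workflow JSON ComfyUI embeds in a PNG tEXt chunk, or None."""
    assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8  # skip the 8-byte PNG signature
    while pos + 8 <= len(png_bytes):
        # each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is keyword, NUL separator, then Latin-1 text
            keyword, _, text = data.partition(b"\x00")
            if keyword == b"workflow":
                return json.loads(text.decode("latin-1"))
        pos += 12 + length  # advance past length + type + data + CRC
    return None

# Illustrative usage (filename is hypothetical):
# with open("FusionX_T2V.png", "rb") as f:
#     wf = extract_comfy_workflow(f.read())
```

    This is handy if you want to diff two workflow versions or load one into a text editor instead of dragging the PNG into ComfyUI.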

    ⚠️ NOTE: Each workflow has detailed notes and links to the correct models. Please read the notes carefully before using.

    And right here, you’ll find a full set of workflows designed to unlock the model’s potential across a range of generation types, including:

    • 🎬 Text-to-Video (T2V) – Available now. Just drag and drop the PNG file into ComfyUI. (I've included a sample video created with the current settings in the folder.)

    • 🖼️ Image-to-Video (I2V) – Available now. Drag and drop the PNG into ComfyUI. I've included the start frame from the example video if you want to test it. (Please note: the Wrapper version supports start AND end frames; Native only supports a start frame.)

    • 🎬 VACE Wrapper and VACE GGUF Native – Use a control video and/or a reference image with your text prompt for more control over your output video.

    • 🖼️ VACE non-GGUF Native – Coming soon.

    • 🎬 Phantom Wrapper – Mix up to 5 images into a video for full character and scene control.

    • 🖼️ Phantom Native and GGUF – Coming soon.

    ⚠️ NOTE: For image-to-video, you can get up to a 50% increase in overall motion by setting your frame count to 121 and FPS to 24! After some testing, this really helps!

    ⚠️ NOTE: Please read the note boxes in the workflows; they contain important details that will help you overcome some errors you may encounter.

    Each category will include both:

    • Native Workflows – Built directly with WAN components for full control and customization.

    • 🚀 Wrapper Workflows – Built on the Kijai Wrapper for optimized generation speed.

    These are the same workflows used in all demo videos on the model's main page — no extra LoRAs, upscaling, or interpolation. Just clean, raw model outputs with the right settings.


    ⚠️ All required components (e.g., CausVid, AccVideo, MPS LoRAs) are already baked into the model. Do not re-add them unless you know what you're doing.

    Whether you're looking to create cinematic text-to-video scenes, stylized image-driven sequences, or combine multiple references into a single shot — these workflows are your starting point.


    ## 📢 Join The Community!

    We're building a friendly space to chat, share creations, and get support. I'm also adding a channel with some good motion LoRAs to help you get more motion in your i2v videos, and I'll be adding other goodies, so please join us :)

    👉 Click here to join the Discord!

    Come say hi in #welcome, check out the rules, and show off your creations! 🎨🧠


    Comments (27)

    crazybaby · Jun 9, 2025
    CivitAI

    Thank you so much for your work, it's really great! Thank you for everything, I have used this workflow of yours: https://cdn.discordapp.com/attachments/1005562816532586496/1381028082780999750/FusionX_PhantomWF.png?ex=68460621&is=6844b4a1&hm=eb9798ac1c1326a77309625c870a56f4406c1113e6bfd467178954eee01c8e49

    I used the FusionXphantomV1 model and it's amazing, it's really great! I'm still testing it and I'll upload a sample video I made with your model later, I can't thank you enough, thank you again for everything!

    crazybaby · Jun 9, 2025 · 2 reactions

    I'm sorry, my English is not good, there may be some errors in the translation, but please don't misunderstand, I sincerely appreciate your work

    vrgamedevgirl
    Author
    Jun 9, 2025

    Your English is perfect :). And you're very welcome!!

    jonk999 · Jun 9, 2025 · 1 reaction
    CivitAI

    Looking forward to further workflows being uploaded. Will you also include ones for your GGUF model, or include some tips on how to modify them for those models, for those not super familiar with Comfy (like myself)?

    vrgamedevgirl
    Author
    Jun 9, 2025· 3 reactions

    I'll be posting workflows for everything soon. It's just me so will take some time. :) Stay tuned.

    everylight · Jun 10, 2025

    @vrgamedevgirl Yeah, the GGUF workflows would be great to have! Thank you very much for doing this!

    FLOW0308 · Jun 9, 2025 · 2 reactions
    CivitAI

    Oh my god, a 3090 rendering 81 frames of high-quality video in 230 seconds? You've completely revolutionized the entire WAN ecosystem!

    vrgamedevgirl
    Author
    Jun 9, 2025

    :)

    banaj66727 · Jun 9, 2025

    How are you getting such fast speeds? I have a 3090 too and it takes 30+ minutes. I must be using my CPU or something

    FLOW0308 · Jun 9, 2025 · 1 reaction

    @banaj66727 I set the image resolution for my video generation to 480x640.

    vrgamedevgirl
    Author
    Jun 9, 2025· 1 reaction

    @banaj66727 what res are you using? Are you using block swapping? Send over your settings and we can help you :)

    banaj66727 · Jun 9, 2025

    @vrgamedevgirl 576x1024 resolution, the input image is 800x1150, I'm not sure if it gets cropped down, and a length of 81 with 10 steps. If FLOW0308 is using 480x640, that would probably make sense then.

    FLOW0308 · Jun 9, 2025 · 1 reaction

    @banaj66727 For me, it's much faster to first create a good original video, then upscale and interpolate frames to 960x1280. The quality difference isn't significant either. Of course, this is due to my computer's limitations. If I had an RTX 5090, I'd definitely go straight for a 720p native resolution.

    banaj66727 · Jun 9, 2025

    @FLOW0308 That's great info, thank you FLOW :)

    FLOW0308 · Jun 9, 2025 · 1 reaction

    @banaj66727 If you're open to 18+ videos, you can check out the videos in my file section. They are all original 480x640 files that I've upscaled using my workflow. Feel free to take a look, provided you don't mind 18+ content.

    banaj66727 · Jun 10, 2025

    @FLOW0308 I've found a good compromise, I didn't like the detail loss with 480x640, for me doing 512x768, with a length of 81, Tea Cache with start percent 0.20 and 20 block swaps at 10 steps takes ~500 seconds. This gives a good quality output without taking the ~30 min it took before

    vrgamedevgirl
    Author
    Jun 10, 2025

    @banaj66727 Awesome!! Glad you got TeaCache working. Normally with such low steps it doesn't work great. Nice work!! :)

    jpXerxes · Jun 9, 2025
    CivitAI

    i2v working perfectly here! Major kudos.

    vrgamedevgirl
    Author
    Jun 9, 2025

    Yay! So glad it's working for you! :)

    vrgamedevgirl
    Author
    Jun 9, 2025

    would love to see some of your work :)

    river176 · Jun 9, 2025
    CivitAI

    in the I2V workflows, the wrapper workflow has the t2v diffusion model download link in the text section. just an FYI

    vrgamedevgirl
    Author
    Jun 9, 2025· 1 reaction

    I'll fix that soon, thank u!

    vrgamedevgirl
    Author
    Jun 9, 2025· 1 reaction

    I just took a look and it does link to i2v, the text in the note just says t2v. I forgot to change it. But it still does bring you to the right one :)

    FLOW0308 · Jun 9, 2025 · 3 reactions
    CivitAI

    NOTE: for image to video, to get up to a 50% increase in overall motion in the video, set your frame count to 121 and FPS to 24!!! After some testing this really helps!

    This seems to be usable only without adding LoRA, likely because many LoRA models are trained on 16 frames. Adding LoRA can cause issues, either making the speed extremely fast or extremely slow.

    vrgamedevgirl
    Author
    Jun 9, 2025· 2 reactions

    This may not be the case, though, because there are two LoRAs I created baked into the model and they did not have any effect.

    xpnrt · Jun 10, 2025

    this only works for kijai's node though, yes? since we don't have a way to set the fps before generation with the official comfyui nodes. if there is a way to do that with them please tell.

    vrgamedevgirl
    Author
    Jun 10, 2025

    @xpnrt If you use either of the WFs I shared, native OR wrapper, you can change the frame count and frame rate. Click here to join the Discord!

    we can help you in there! There is a support channel.

    Workflows
    Wan Video 14B i2v 720p

    Details

    Downloads
    9,984
    Platform
    CivitAI
    Platform Status
    Available
    Created
    6/9/2025
    Updated
    5/13/2026
    Deleted
    -