📢 7/1/2025 Update!
New: FusionX Lightning Workflows
Looking for faster video generations with WAN2.1? Check out the new FusionX_Lightning_Workflows — optimized with LightX LoRA to render videos in as little as 70 seconds (4 steps, 1024x576)!
🧩 Available in:
• Native • Native GGUF • Wrapper
(VACE & Phantom coming soon)
🎞️ Image-to-Video just got a major upgrade!!!!!!
Better prompt adherence, more motion, and smoother dynamics.
⚖️ FusionX vs Lightning?
Original = max realism.
Lightning = speed + low VRAM, with similar quality using smart prompts.
☕ Like what I do? Support me here: Buy Me A Coffee 💜
Every coffee helps fuel more free LoRAs & workflows!
📢 Did you know you can now use FusionX as a LoRA instead of a full base model?
Perfect if you want more control while sticking with your own WAN2.1 + SkyReels setup.
🔗 Grab the FusionX LoRAs HERE
🔗 Or Check out the Lightning Workflows HERE for a huge speed boost.
🍳 FusionX Ingredients – Mix Your Own Magic! 🧪✨
Note: Do not use the FusionX model with these workflows. Use the base WAN 2.1 model instead. FusionX already has the LoRAs built into it, so re-stacking them will result in very poor, overcooked output. These workflows were made so you can adjust the individual LoRA (FusionX ingredient) settings as you see fit. If you REALLY want to use the FusionX model, remove ALL the LoRAs in the workflow first.
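To make the "overcooked" warning concrete, here is a rough sketch (assuming the standard W + α·B·A LoRA formulation; this is not the actual model code): FusionX already has the LoRA deltas merged into its weights, so loading the same LoRAs on top applies the shift a second time.

```python
# Toy illustration of why re-stacking the same LoRAs on FusionX "overcooks" the output.
# NOT real model code; it only shows the standard LoRA merge math (W + alpha * B @ A).
import numpy as np

rng = np.random.default_rng(0)
W_base = rng.normal(size=(8, 8))        # stand-in for a WAN 2.1 base weight
B, A = rng.normal(size=(8, 4)), rng.normal(size=(4, 8))
alpha = 1.0                             # LoRA strength

delta = alpha * (B @ A)                 # what one LoRA contributes
W_fusionx = W_base + delta              # FusionX ships with this already merged in

W_restacked = W_fusionx + delta         # loading the same LoRA again on top of FusionX
# The weights now sit about 2x further from the base model than intended -> the "fried" look.
print(np.linalg.norm(W_restacked - W_base) / np.linalg.norm(W_fusionx - W_base))  # ~2.0
```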
We just dropped FusionX Ingredients Workflows: fully editable workflows that let you take the wheel. All the LoRAs are exposed like spices on a shelf: dial them up, tone them down, or bypass them to cook up your own custom "FusionX" blend. 🎛️🔥
✨Video walkthroughs have been created to help you understand how they work.
🍳 Image to Video walkthrough can be found HERE
🍳 Text to Video walkthrough can be found HERE
These workflows are a playground for creators who want to fine-tune their style.
🎯 Why Use FusionX Ingredients workflows?
Total control over LoRA strengths and combinations
Quickly test different combos
Craft your own aesthetic
Works with WAN 2.1 14B and SkyReels
🧠 What’s Inside?
Every LoRA that made FusionX great — now fully adjustable:
CausVid – Better scene flow + dramatic speed boost
AccVideo – realism + speed boost
MoviiGen1.1 – Cinematic details and lighting (T2V only)
MPS Reward – sharp motion dynamics and quality
Texture/Clarity LoRAs – subtle, clean detailing
⚡ How to Use It:
Load a workflow
Toggle LoRAs on/off
Adjust strengths and run 💥
Go subtle. Go full chaos. Make it yours.
🧪 Workflows included:
Text to Video (Wrapper and Native; GGUF coming soon)
Image to Video (Wrapper, Native, and GGUF)
VACE (coming soon)
Phantom (coming soon)
📹 Pro Tip:
Try batching 2-3 versions with different LoRA mixes; you'll find gems you didn't expect.
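If you want to script that kind of comparison, here is a minimal sketch of the idea. The strength values are arbitrary starting points (not official FusionX defaults), and `queue_workflow` is a hypothetical placeholder for however you submit the job (for example through the ComfyUI API); it is not a node or script shipped with these workflows.

```python
# Minimal sketch of batching a few LoRA mixes to compare side by side.
# Strengths are illustrative only; queue_workflow() is a hypothetical placeholder.
from itertools import product

strengths = {
    "CausVid":    [0.0, 0.4],    # 0.0 = effectively bypassed
    "AccVideo":   [0.5, 1.0],
    "MPS_Reward": [0.3, 0.7],
}

def queue_workflow(lora_mix):
    # Placeholder: wire this up to your own submission method (ComfyUI API, CLI, etc.).
    print("queued mix:", lora_mix)

for causvid, accvideo, mps in product(*strengths.values()):
    queue_workflow({"CausVid": causvid, "AccVideo": accvideo, "MPS_Reward": mps})
```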
⚠️ Reminder:
These are research-grade components. Experiment smart, follow the usual safety and legal notes. Don’t publish or profit without doing your homework. 😉
Comments
Thanks so much for this; sometimes I just want max quality and this lets that happen.
You're VERY welcome!!! I just want to share what's out there with the community. Seems like not many people really knew about these LoRAs! Hidden gems!
Thank you for this. A quick note: Your T2V workflow has the I2V AccVid lora.
I'll fix it. I tried putting this together quickly, so I probably made a mistake.
this has been fixed!
This is very nice, thank you!
Last night I was trying to set up something similar to FusionX, but using the new LightDistilled LoRA instead of CausVid. However, I didn't know exactly which values you were using for each LoRA, and I missed that detail LoRA somehow.
Also, I really like the way you lay out your workflows. It's concise and clear, and you don't block all the noodles, so I can tell what's going where without having to pull it all apart.
I tested that new LoRA you mentioned and I couldn't get good results for some reason.
@vrgamedevgirl I've found the prompt adherence to be quite nice, same with animation. But you have to be incredibly specific.
For example, in my current test I typed "Holds up her phone". And she held up a phone in each hand. So I changed it to "Holds up her phone with her right hand" and it worked fine!
It was giving a bit more of a "CGI" look than I wanted, but your LoRA mix helped patch that up!
now we are talking 💗
Thanks for finally putting this together; I had the hardest time locating the MoviiGen LoRA until now.
Thank you so much! You are an outstanding person and deserve respect! 👍👍👍👍👍
Incredible. You are single handedly revolutionizing the industry!
Is it worth installing Triton on Windows? I only read that it's very difficult and can mess up your ComfyUI. Thanks for your work!
It is; you could create a backup first.
Use ChatGPT or Gemini to lead you through the process. It didn't take me long doing this, but you'll have to copy-paste a lot of stuff.
Getting that thing installed is an absolute Nightmare on Elm Street. After a week of trying everything, every how-to, every tip and tool that I could find on the internet, I gave up, until I found "Pinokio" (Google "Pinokio ComfyUI"). I used that to install ComfyUI, and it installed it together with Triton and all of the other necessary stuff, ready to use. So if you want to avoid that same nightmare, just use Pinokio and enjoy.
@Highlandrise use ai to do anything!
You could use Ubuntu with conda environments. Ask ChatGPT.
@Highlandrise @Vyxen808 There is a simple way. Just create a wan_autoinstall.bat file with the correct contents, copy the file to the ComfyUI folder, run it, and wait for the job to finish. Google the file name and you will find a Reddit topic with a rentry link that has everything. P.S. It is worth it: about 50% faster generation without losing any quality.
This installs it and it's completely automatic: https://civitai.com/models/1309415/comfyui-auto-installer-wan21-or-sageattention-or-50xx-cards-compatible-or
@garysebastianbrowniii Like I said in my comment, I literally tried everything during an entire week with no success, and yes, I used AI to help me (ChatGPT). In the end, installing Comfy inside Pinokio was the only thing that worked, and I'm happy; it's perfectly set up now.
@flo11ok874 Tried that and a million other things, but no success. Getting Triton to work on Windows is a serious challenge, and after a week I had enough of it. I installed ComfyUI inside Pinokio and it installed Triton for me; it works perfectly. I don't see a reason to deal with all that other hassle when it can be this easy.
@skyrimer3d Tried that one as well. It managed to do everything until the point where it tried to install Triton itself, and then it failed, just like the countless other methods I tried. Then I used Pinokio to install ComfyUI, and it installed Triton and all the other important stuff by itself; it can't get easier than that.
@Highlandrise Sorry about that, it worked fine for me; maybe it's a hardware issue, I don't know.
I've written before that I have these errors and I don't know what to do about them. I downloaded your workflow, and I am getting these errors:
What is your ComfyUI version? That'd be my first guess.
Please read the notes in the workflow as these are addressed in there with fixes.
Change fp16_fast to just fp16 in the model loader
@jtmichels 0.3.41
@vrgamedevgirl Why do you have it set to “fast”? I changed it to fp16; this node goes through, but then it stops further on.
@vrgamedevgirl next error: Prompt outputs failed validation: WanVideoSampler: - Value not in list: scheduler: 'euler/beta' not in ['unipc', 'dpm++', 'dpm++_sde']
Show me a screenshot of your workflow and I'll try to analyze the differences; I can send it to you in a private message. I want to figure out why it works for everyone else and not for me.
@AlG80 Looks like you need to update the WanWrapper. Please join the Discord server and post in the support channel. I am there more often than I am on Civitai, and others there can help you if I'm not around.
I had the perfect image to start with using the default prompt. Using a 5070 Ti with 16GB VRAM and 64GB of system ram, Win11, I enabled BlockSwap and set it to 30 blocks. Generation time was 289 seconds. Very happy with the results of the first run. You're doing great work, here!
Top workflows, sir. I really appreciate the time you've taken to do these 😊👍
You're very welcome!!
Using the I2V workflow, the character in the image I am using changes quite significantly in the final video. Is there any way to reduce the creative freedom Wan/FusionX has in doing this?
@PixelBlitterBoy Use the new Ingredients workflows I shared and adjust or bypass the MPS and detail LoRAs to see if this helps.
I concur. Top notch stuff.
...and I'm sure @PixelBlitterBoy meant "ma'am", not "sir". =*
@jjfunktional Thanks!! :)
Where can I find the realism boost LoRA?
The link is in the workflow
This is the best workflow I've downloaded and the video creation is outstanding as well. Well done!
Loving this workflow a lot! The video quality is QUITE amazing for the speed I'm getting. I had some issues using the Wrapper version, but I think it was caused by the memory management. Native has automatic offloading/memory management tmk so it's a lot easier to just plug in the workflow and get started. Thank you for making this, it's so fun to mess around with!! :3
The workflow notes say "don't use with the FusionX base model," but I used it with the LoRAs bypassed and the results are excellent. A default-preset 5-second video takes around 450 seconds on an RTX 5060 Ti 16 GB with 64 GB RAM. Don't forget to use the Triton node, otherwise it will be around 1500 seconds.
What do you mean by Triton node? Do you have a link to it? Does it speed up generations a lot? I am on a 16 GB card too, a 5080.
What do you mean "but I used it with bypassing loras", so you bypassed the loras and just used the FusionX model which has the loras in it?
@moneyacct821 Yes, I did. I was referring to the notes in the workflow. Just ignore them; they are for the old method, which uses the base WAN model with LoRAs. If you downloaded the FusionX base model, just bypass the LoRAs and use the native workflow, which is quite fast and high quality.
@durkaudkruak The nodes I mentioned are already in the workflow: "Patch Sage Attention KJ" and "Model Patch Torch Settings".
This workflow is sick. I can get a 15-second video at 640x480 in 422 seconds on my 16 GB VRAM card using the 14B 480p Q4_K_S GGUF; mind-blowing. This opens the door for long videos. I only miss a last-frame save and an rgthree LoRA stack for my own LoRAs, but you can't have everything lol
You can just add another LoRA node to bring over your own LoRAs if you want! :)
How?? Are you using the default LoRAs? Can you share your workflow? I have a 16 GB VRAM card and the same model, but when I try to generate anything over 5 seconds it takes 17-20 minutes!
Hey all, I'm using the workflow and I've downloaded all of the loras/models linked within the workflow.
I'm trying a test run on an image that is 1536x1024, but I've set the resize to 512x768 so I don't OOM. The problem is it does the first step of 8 in about 20 s/it, then the second step is nearly 90 s/it, and then the third is a lot more.
It seems to go up exponentially each step. I abort at 3 knowing that it will likely take an hour or more to complete.
Here's the odd thing. I have a laptop with a 5090 (so 24GB VRAM), 64GB system RAM and the latest Intel CPU. I am only using the 480p model. My VRAM and GPU are both sitting at 99%, but that is not uncommon when I'm running this stuff. I'm at a loss as to what the issue is. I figured that these workflows would be fine with 24GB VRAM and 64GB of RAM.
It should work just fine for you. join the discord server and I can try and help you more there.
Impossible to run this with 8gb vram?
I have heard some people are able to by using GGUF.
Self Forcing seems to effectively replace CausVid and AccVideo; can you add it to the WF?
You can simply swap out CausVid for the LightX LoRA.
How did the results turn out?
Awesome workflow, thanks!
Could you tell me what the weights of these five LoRAs are in your FusionX? I just want to reduce the weight of MPS (which makes the face look like FLUX) while keeping the rest unchanged.
lora key not loaded: diffusion_model.blocks.9.cross_attn.v_img.lora_B.weight
What am I missing? Why doesn't it load the LoRAs?
I only swapped CausVid for the Self Forcing LoRA.
It's a false error. You can ignore it.
@vrgamedevgirl Weird, thanks.
So, with a 5090, if you try to go above 81 frames, it OOM's. How can you get longer gens with this?
I'm actually getting pretty impressive 720x1280 i2v results using the kijai 720p checkpoint and VAE in this workflow with a 5090, pumps out i2v's in about 3 minutes even with an additional 4 more loras.
The lowering of MPS to about 2.5 is absolutely critical though or else faces just get warped to an almost completely different character.
I can go above this on my 5090, but I've now installed a separate command-line-only Debian OS just for doing AI generation. Then I use Comfy on my laptop. Barebones Debian only uses like (no joke) 1 MB of VRAM for the entire OS, if even that much.
Hmmm, I am able to do 1920x1080 if I enable block swapping. Have you tried 40 blocks?
The better way to do this is to take the last frame and do another 81 frames.
@slaad0 This does not always work. The quality gets degraded every time you do this, so the longer the video goes on, the lower the quality. I have tested this and failed.
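For anyone trying the last-frame approach anyway, here is a rough sketch of the chaining loop. `run_i2v` is a hypothetical placeholder for a single I2V generation (not part of these workflows), and the caveat above still applies: each hop starts from a re-encoded frame, so drift and softening accumulate the longer the video runs.

```python
# Rough sketch of extending a video by reusing the last frame as the next start image.
# run_i2v() is a hypothetical placeholder for one 81-frame I2V generation.
def run_i2v(start_frame, num_frames=81):
    # Placeholder: replace with your actual I2V call / workflow submission.
    return [start_frame] * num_frames

start_frame = "start.png"        # the first segment starts from your source image
all_frames = []
for hop in range(3):             # 3 segments of 81 frames each
    frames = run_i2v(start_frame)
    all_frames.extend(frames)
    start_frame = frames[-1]     # the last frame seeds the next segment
```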
Thanks for sharing!
One question: is there a big difference between your "DetailEnhancer" and "RealismBoost" LoRAs? I previously downloaded RealismBoost here, and now the link points to DetailEnhancer. Does it make sense to combine both? In your example workflow (native GGUF) you did not use the trigger word "cinemoraX"; is it not needed?
First off, among the many workflows I've used, this one was the fastest and most stable. Thank you for creating it. If possible, could you add an automatic prompt feature like Florence or Ollama? I tried to add it myself by referencing other workflows, but I kept failing. With the simple prompts I wrote, I encountered issues like the face not moving.
I have a GPT that helps create prompts. It is what I used to create all the sample prompts for the videos; the link to the GPT is in the description.
I personally have never used these in workflows, so I am not even familiar with them.
I am not sure what I am doing wrong, but all my generations are sped up (like the video is moving too fast). I set up the options correctly, CFG 1 & shift 2; the only thing I changed is that I swapped out CausVid for the lightx2v LoRA.
how many steps? what WF?
Hey OP, in your workflow I believe self-trained character LoRAs for WAN may not be working, due to torch compile compiling the model before the LoRA and the rest can be loaded in.
Has your workflow accounted for that? Because from my testing, it may not have.
I've been messing with this and one of her T2V workflows. They are outstanding. I only have a 16 GB GPU, so it takes a while. But once you get the settings figured out, this thing is a champion. It's my favorite workflow.
This totally needs the WAN 2.2 update. It should only be a few changes, since people are saying most of this stuff and the LoRAs still work with 2.2. I'm going to try to download the WAN 2.2 I2V template and add most of the stuff from FusionX Ingredients.
I'm hearing even fewer steps are needed! It could be insane on a 5090, which already cranked with this workflow with Sage and Radial attention added to it.
If you join the Discord server, you'll see that I share workflows there before posting them here. The link to the Discord server is in the description.