CivArchive
    Wan2.1_FusionX -The LoRa - Phantom_FusionX_LoRa
    NSFW

    📢 7/1/2025 Update!

    New: FusionX Lightning Workflows

    Looking for faster video generations with WAN2.1? Check out the new FusionX_Lightning_Workflows — optimized with LightX LoRA to render videos in as little as 70 seconds (4 steps, 1024x576)!

    🧩 Available in:
    • Native • Native GGUF • Wrapper
    (VACE & Phantom coming soon)

    🎞️ Image-to-Video just got a major upgrade!
    Better prompt adherence, more motion, and smoother dynamics.

    ⚖️ FusionX vs Lightning?
    Original = max realism.
    Lightning = speed + low VRAM, with similar quality using smart prompts.

    👉 Check it out here


    ☕ Like what I do? Support me here: Buy Me A Coffee 💜
    Every coffee helps fuel more free LoRAs & workflows!


    🚨✨ We finally cooked up FusionX LoRAs!! 🧠💥


    This is huge – now you can plug FusionX into your favorite workflows as a LoRA on top of the Wan base models and SkyReels models! 🔌💫

    You can still stick with the base FusionX model if you already use it, but if you would rather have more control over the "FusionX" strength plus a speed boost, then this might be for you.


    ⚡ Speed Boost Example (RTX 5090):

    • FusionX as a full base model: 8 steps = 160s ⏱️

    • FusionX as a LoRA on Wan 2.1 14B fp8 T2V: 8 steps = 120s 🚀


    🧪 Bonus

    You can bump up the FusionX LoRA strength and lower your steps for even more of a speed boost while testing/drafting.
    Example: strength 2.00 with 3 steps = 72s
    Or lower the strength and increase your steps to get a less “FusionX” look. (Experiment with settings)⚡🔍
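    As a rough sanity check of those timings (purely illustrative; the linear model below is an assumption fitted to just the two quoted LoRA runs): treating total render time as a fixed overhead plus a per-step cost, the 8-step (120s) and 3-step (72s) results imply roughly 9.6s per step plus about 43s of overhead, so you can ballpark other step counts while drafting.

    ```python
    # Fit a simple linear model (time = overhead + steps * per_step)
    # to the two LoRA timings quoted above: 8 steps -> 120 s, 3 steps -> 72 s.
    # This is an illustrative back-of-envelope estimate, not a benchmark.

    def fit_linear(p1, p2):
        """Given two (steps, seconds) points, return (overhead, per_step)."""
        (s1, t1), (s2, t2) = p1, p2
        per_step = (t1 - t2) / (s1 - s2)
        overhead = t1 - s1 * per_step
        return overhead, per_step

    overhead, per_step = fit_linear((8, 120), (3, 72))
    print(overhead, per_step)   # ~43.2 s overhead, ~9.6 s/step

    def estimate(steps):
        """Estimated render time in seconds for a given step count."""
        return overhead + steps * per_step

    print(estimate(4))  # ~81.6 s for a hypothetical 4-step draft
    ```

    Actual times will vary with resolution, frame count, and hardware; the point is just that fewer steps mostly trims the per-step portion, not the fixed overhead.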

    This LoRA also works with SkyReels models. (Workflows for that coming soon.)


    🔧 What’s Included:

    • T2V (Text to Video) 🎬 – works great with VACE ❄️

    • I2V (Image to Video) 🖼️➡️📽️

    • A dedicated Phantom LoRA 👻

    • Workflows for each can be found HERE; they include all the default settings used in the posted example videos.


    Settings have not changed; you still need to follow the same guidelines. Please see the workflows for best practices on settings. You can experiment at your own risk.


    • 📝 Want better prompts? All my example video prompts were created using this custom GPT:
      🎬 WAN Cinematic Video Prompt Generator
      Try asking it to add extra visual and cinematic details — it makes a noticeable difference.


    💡 What’s Inside This LoRA:

    • 🧠 CausVid – Causal motion modeling for better scene flow and a dramatic speed boost

    • 🎞️ AccVideo – Improves temporal alignment and realism, along with a speed boost

    • 🎨 MoviiGen1.1 – Brings cinematic smoothness and lighting (t2v only)

    • 🧬 MPS Reward LoRA – Tuned for motion dynamics and detail

    • Custom LoRAs (by me) – Focused on texture, clarity, and fine details. (Both were set to very low strengths and have a very small impact.)


    • ⚠️ Disclaimer:

      • Videos generated using this model are intended for personal, educational, or experimental use only, unless you’ve completed your own legal due diligence.

      • This model is a merge of multiple research-grade sources, and is not guaranteed to be free of copyrighted or proprietary data.

      • You are solely responsible for any content you generate and how it is used.

      • If you choose to use outputs commercially, you assume all legal liability for copyright infringement, misuse, or violation of third-party rights.

      When in doubt, consult a qualified legal advisor before monetizing or distributing any generated content.


    Comments (119)

    97BuckeyeJun 15, 2025· 1 reaction
    CivitAI

    These are amazing! Thank you for all your work. You've really advanced the ease of use of Wan 2.1. Question: Which one of these loras should be used for a VACE workflow?

    vrgamedevgirl
    Author
    Jun 15, 2025· 1 reaction

    For VACE you will want to use the Text to Video LoRa.

    vrgamedevgirl
    Author
    Jun 15, 2025· 1 reaction

    Just make sure not to use the LoRA along with the FusionX base model; use the normal Wan model instead. The LoRA is FusionX, if that makes sense. Now you can control the strength, either for a speed boost or to make the FusionX effect a little lighter.

    97BuckeyeJun 15, 2025· 1 reaction

    @vrgamedevgirl Btw... this is Dakhari from your Discord :)

    vrgamedevgirl
    Author
    Jun 15, 2025· 1 reaction

    @97Buckeye well hello!

    pushpiblJun 15, 2025

    So, can I ditch that 10-something-GB base model now and free up my hard drive space?

    pushpiblJun 15, 2025

    Is there no difference in results between the LoRA and the checkpoint, or is the checkpoint better?

    5327045Jun 15, 2025· 2 reactions
    CivitAI

    You are amazing! Everything works great

    vrgamedevgirl
    Author
    Jun 15, 2025· 1 reaction

    Yay!!! Very glad it works for you 😀

    aceflier72811Jun 15, 2025
    CivitAI

    Can you please post a picture of your T2V workflow? I load the included wrapper and it has "offload device" selected and "flowmatch causvid" in the video sampler. The I2V model is working great, but I can't get the wrapper to do anything but hang.

    vrgamedevgirl
    Author
    Jun 15, 2025

    Mine is the same as what I uploaded. Sounds like you might need block swapping?

    aceflier72811Jun 15, 2025· 1 reaction

    @vrgamedevgirl Yeah, I switched to the regular version and it works fine. Now I'm messing with your LoRA trying to find the sweet spot. :)

    jayhartfordJun 15, 2025· 1 reaction
    CivitAI

    So excited for these, thank you. Should CFG be set at 1 when using?

    vrgamedevgirl
    Author
    Jun 15, 2025· 2 reactions

    Yes, must be 1 or you will get scary results

    SantanaiR34Jun 15, 2025· 5 reactions
    CivitAI

    Seems really good and an improvement over CausVid, but it changes faces too much on certain images and I have to go back to CausVid.

    vrgamedevgirl
    Author
    Jun 15, 2025· 1 reaction

    The magic behind this model is the MPS Reward LoRA, which causes this. Unfortunately, removing it defeats the purpose. There's always ReActor face swap.

    SantanaiR34Jun 15, 2025· 2 reactions

    @vrgamedevgirl I made multiple tries, and it seems to mainly happen on lower-quality images; most of the time, HQ images generated with SD seem to be fine, so that should work for me for most stuff.

    psspsspsspssspssJun 15, 2025· 5 reactions
    CivitAI

    Please remove the custom LoRAs. This still messes with appearance too much, causing same-face.

    vrgamedevgirl
    Author
    Jun 15, 2025· 2 reactions

    The custom LoRAs are not causing that. It's the MPS Reward LoRA. Unfortunately, that LoRA is the magic here, and removing it would make this model pointless. You're always welcome to go back to base Wan.

    vrgamedevgirl
    Author
    Jun 15, 2025· 1 reaction

    You can always use ReActor face swap after.

    TheAororaJun 15, 2025

    @vrgamedevgirl Or you can inpaint with the Wan base model on a 2nd KSampler at 0.35 denoise strength to enhance the face, so the face goes back to normal.

    cme123Jun 15, 2025· 1 reaction

    @vrgamedevgirl Can you share it without the MPS Reward LoRA? I can add the MPS LoRA myself and play around with it.

    vrgamedevgirl
    Author
    Jun 15, 2025· 2 reactions

    @cme123 Apologies, but reworking the entire model is quite time-consuming, and I’m currently short on time. I’d recommend trying to stack the LoRAs yourself—Causvid, Accvid, and MoviiGen—and adjusting their settings to your preference.

    Unfortunately, I’m not able to create a custom model for everyone, but I hope this helps get you on the right track!

    If you join the discord server I can direct you to the Lora's that were used.

    playnproto266Jun 15, 2025

    I've taken to using VACE and pulling a random face from a folder, after downloading a database from something like FairFace. Then you have a largely unlimited number of authentic faces. Not celebs, not supermodels, regular people.

    cme123Jun 15, 2025

    @vrgamedevgirl Ok, I'm trying. It seems to work when I reduce the weight of FusionX and add CausVid etc. Thank you. I didn't know that making a new LoRA was difficult; I wasn't aware of that when I asked.

    Thank you for fusionx too.

    psspsspsspssspssJun 16, 2025

    @vrgamedevgirl Can you post a link to the MoviiGen LoRA? I can't find it anywhere. Also, please just post your ratio recipe here; I don't have Discord.

    GreenFieldCastleJun 16, 2025

    @vrgamedevgirl I have tested the MPS Reward LoRA before. When its weight is 0.25 or below, most of the time it does not change the character's appearance. So not being able to adjust the MPS Reward LoRA's weight alone in the fused model is indeed a minor issue. Besides that, thank you for your work. It's great!

    vrgamedevgirl
    Author
    Jun 16, 2025· 1 reaction

    I plan on posting an "ingredients" workflow that has ALL the LoRAs attached, with links to each one and the settings used in FusionX, so everyone can experiment with different settings on their own. I'll try to get this out after work today.

    GreenFieldCastleJun 16, 2025· 1 reaction

    @vrgamedevgirl I said that I had tested the MPS Reward LoRA before, that it would change characters' appearance, and that I could only reduce the weight, but that wasn't completely accurate. Previously I had only tested anime characters. Later, testing the MPS Reward LoRA at a weight of 0.8-1, it worked well for real people, but not well for 2D characters. Other than that, the effects were all very good. Thank you again for your model.

    USTCTJun 18, 2025

    @vrgamedevgirl Can you share the CausVid and AccVid LoRAs for the WAN-I2V-14B model? Thanks.

    vrgamedevgirl
    Author
    Jun 18, 2025

    @USTCT I posted workflows with links to all the LoRAs. It's called the FusionX ingredients workflows. Look at my profile and it should be there.

    DJKayFJun 15, 2025
    CivitAI

    Hi, great idea with LoRAs when there is little video memory... Can these LoRAs be used on other models, not Wan 2.1? Unfortunately, I have a laptop with 6GB VRAM and the regular 14B model is very slow for me (3 sec. of video in 2 hours). Thanks BRO!

    vrgamedevgirl
    Author
    Jun 15, 2025· 1 reaction

    These work with Wan and SkyReels only. But have you tried GGUF? You could try the smallest one. Might work!

    DJKayFJun 15, 2025

    @vrgamedevgirl Sorry, judging by your nickname you are a girl :)). So the usual Wan 2.1 14B model in GGUF format reduces the time (3 seconds of video generates in about 17 minutes), also not fast... The most optimal option for me is to use Wan 2.1 models with 1.3B parameters... but unfortunately I did not find a 1.3B Wan 2.1 model for i2v. And if the 1.3B models are converted to GGUF, will it be even faster?

    vrgamedevgirl
    Author
    Jun 15, 2025

    @DJKayF These were trained on 14B, so they won’t work with a 1.3B model. The link to the 14B i2v model is included in the workflow. Unless I’m misunderstanding your question? Feel free to join the Discord channel for more help—the link’s in the description.

    jtmichelsJun 18, 2025· 1 reaction

    Hi I have 100kB VRAM. y wont ur workflows generate 8k HD awesomesauce in 2.5 seconds? Please condense to 100kB thank you.

    crazybabyJun 15, 2025· 3 reactions
    CivitAI

    Thank you for all the work you do, this will be a revolutionary victory.

    vrgamedevgirl
    Author
    Jun 15, 2025

    You're very welcome, and thanks for the support! :)

    jpXerxesJun 15, 2025
    CivitAI

    When I was testing to find which LoRA was the problem (and reported MPS to you), I looked everywhere for a MoviiGen LoRA. Just tried again. I can only find the full model. Are you using a LoRA, and if so, where might I find it?

    vrgamedevgirl
    Author
    Jun 15, 2025

    All models can be found here:
    https://huggingface.co/Kijai/WanVideo_comfy/tree/main

    Just a heads-up — the MoviiGen model isn't included in the i2v or Phantom models/Loras because it negatively impacts motion quality. It's only fused in the Text-to-Video models.

    jpXerxesJun 15, 2025

    @vrgamedevgirl Thanks for the quick response! I had been to that page and saw the full models, but missed the lora until I searched the page for MoviiGen. I'm going to emulate yours as much as I can, and mess with MPS strengths to see what they do. Are you still using:

    AccVid 0.50

    MoviiGen 0.50 (in T2V)

    MPS Rewards 0.70

    CausVidV2 1.00

    Your Realism Boost 0.40

    your unpublished detailer 0.4

    jpXerxesJun 15, 2025

    @vrgamedevgirl Even at 0.2, MPS still saturates the color on skin. Unfortunately, I see what you mean about the otherwise improved images with MPS. Not a good tradeoff.

    huwhitememesJun 15, 2025
    CivitAI

    Amazing thank you!

    gxbsyxhJun 15, 2025
    CivitAI

    My graphics card is a 3060 Ti 8GB, and every time I load the LoRA it goes out of memory (OOM) or errors with "ERROR lora diffusion_model.blocks.27.ffn.2.weight Allocation on device". Is it because the i2v 480p fp16 model is too large for my card?

    vrgamedevgirl
    Author
    Jun 16, 2025

    Your GPU may not be able to handle this. You can try block swapping at 40 or using GGUF. But 12GB is really the minimum unless you want to wait hours to create a video. I would look into RunPod if you really want to try it.

    4458749Jun 20, 2025

    Try launching Comfy with the options --lowvram --reserve-vram 0.5 or --reserve-vram 1.0. Don't use block swap or that multi-GPU node; both are rubbish and slow. Just use the native workflow and reserve some of your VRAM so you don't get OOM when it tries to create the preview or you open another browser tab. Oh, and if you also lack system RAM, increase your swapfile size to be very big :)

    Use regular RAM: use the UnetLoaderGGUFDisTorchMultiGPU node, set "use_other_vram" to true, and set "virtual_ram_gb" to as much regular RAM as you actually have. If you still can't do it you're probably cooked, but you could also try turning off Sage Attention; that might help.
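    For anyone who wants to try those launch options, a command line might look like this (flag spellings as I understand ComfyUI's CLI; treat this as a sketch and check `python main.py --help` on your install):

    ```shell
    # Start ComfyUI in low-VRAM mode and keep ~1 GB of VRAM free
    # for previews and other apps (adjust the reserve amount to taste)
    python main.py --lowvram --reserve-vram 1.0
    ```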

    4598756Jun 16, 2025
    CivitAI

    I have too much slow motion in i2v, any ideas?
    I don't have that with CausVid and AccVid alone.

    akak7952Jun 16, 2025· 2 reactions

    Shift

    4598756Jun 16, 2025

    @AIBeauty4Seen With a CFG of 1, does the negative prompt really have an impact?

    InoSimJun 16, 2025

    For me it's the reverse: it's way too fast, like a long video being played 4x faster. The lower the steps, the faster it gets. Am I missing something?

    Edit: after some tests, I can help you:
    More CFG = faster pacing (adheres better to the LoRA and prompts)
    Less CFG = less motion (adheres less to prompts and LoRAs, which need to be tweaked to fit)

    Also, you will need to reduce Start_Latent_Strength and/or End_Latent_Strength to fit.
    This LoRA is very interesting :)

    vrgamedevgirl
    Author
    Jun 16, 2025

    If you use more than 1 CFG you will get a wonky video. If not, then please share what you're doing.

    AIBeauty4SeenJun 19, 2025

    @Meandre66 I would give it a try if you're using ComfyUI... I am still learning. I am using Wan 2.1 on Pinokio. If it works, let me know. I am curious to see if it helps.

    CharlieBrown0115Jul 15, 2025

    @akak7952 Shift what? Reduce the shift, or increase it? Please let me know.

    CharlieBrown0115Jul 15, 2025

    @Meandre66 did you fix the problem????

    vrgamedevgirl
    Author
    Jul 16, 2025

    Use my new Lightning i2v workflow. It solves all the problems.

    CharlieBrown0115Jul 16, 2025

    @vrgamedevgirl Can you provide the link? I can't find it.

    vrgamedevgirl
    Author
    Jul 16, 2025

    @CharlieBrown0115 https://civitai.com/models/1736052?modelVersionId=1964792

    CharlieBrown0115Jul 16, 2025

    @vrgamedevgirl Looks awesome! Can I suggest adding a Resize Image node (deprecated), any Power Lora Loader, and a prompt generator from Florence, lol!!! Sorry for asking that much!! Sorry, sorry.

    Lora_AddictJun 16, 2025· 2 reactions
    CivitAI

    What sampler and what scheduler should be used with this?

    vrgamedevgirl
    Author
    Jun 16, 2025· 5 reactions

    The workflows have all the best default settings. I would recommend using them, or at least looking at them.

    gutpunktJun 16, 2025
    CivitAI

    Thank you! I've never used a Wan LoRA; can I use them with the fp16 models?

    vrgamedevgirl
    Author
    Jun 17, 2025

    You sure can!

    JaysowenJun 17, 2025· 10 reactions
    CivitAI

    Please remove MPS from the i2v LoRA; it totally changes the face to another person. Thank you!

    vrgamedevgirl
    Author
    Jun 17, 2025· 9 reactions

    I am working on another merge that does not have MPS. It is a long process. (I have a full-time job which takes up my entire day.) Until then, I'm creating workflows that have all the LoRAs included, so you can adjust and remove them as wanted. This will be even better, since you will be able to adjust all the LoRAs, not just MPS.

    JaysowenJun 17, 2025· 3 reactions

    @vrgamedevgirl I really appreciate your work

    vrgamedevgirl
    Author
    Jun 17, 2025· 5 reactions

    Still working on a new merge, but for now you can use one of these new workflows where you have full control of ALL the LoRAs that make up FusionX. I think this is even better, since you now have full control over them all.

    https://civitai.com/models/1690979?modelVersionId=1914573


    Video walk-throughs are in the description.

    amazingbeautyJun 19, 2025

    What is MPS?

    vrgamedevgirl
    Author
    Jun 20, 2025· 1 reaction

    @amazingbeauty One of the merged LoRAs. It can sometimes cause the face to change in image-to-video.

    KristirinaJun 24, 2025

    @vrgamedevgirl Nice work! May I know which LoRA is changing the overall color of the output? I couldn't figure out which one gives the orangish tint effect...

    dalivekurumiJun 24, 2025

    @vrgamedevgirl I'll wait for another merge. Appreciate it :)

    vrgamedevgirl
    Author
    Jun 24, 2025

    @dalivekurumi You can use the ingredients workflow for now as I don't know when I'll be posting a new merge model.

    vrgamedevgirl
    Author
    Jun 24, 2025

    @Kristirina I wouldn't know that; you'd have to experiment with the LoRAs using the ingredients workflow I shared.

    yorgashJun 17, 2025
    CivitAI

    Question: should I try this with VACE?

    vrgamedevgirl
    Author
    Jun 17, 2025· 2 reactions

    Sure! Give it a shot! Why not :)

    demuzinc511Jun 20, 2025
    CivitAI

    I want to say upfront that I'm only talking about using this LoRA in the context of the original Wan VACE 2.1 14B FP8 (native) + Sage Attention (native) + Fast FP16 Accumulation (native). If I compare Wan VACE + the FusionX LoRA vs Wan VACE + the pure CausVid V2 LoRA, I would choose the latter, as it distorts the video the least. This LoRA introduces some artifacts when used with Wan VACE: slower motion, visual frame skipping, and poorer face rendering (worse than the pure CausVid V2 LoRA; at high resolution, the distortions are minimal). However, the overall effect isn't bad; it has a kind of "animation" effect, making images and characters much less likely to stay motionless. Also, there's a very slight advantage in generation speed because they use a different scheduler and sampler. Inference takes about 50s on an RTX 5090 at 832x480.

    vrgamedevgirl
    Author
    Jun 20, 2025

    Sounds like you need to try the ingredients workflow. You can easily take the LoRA stack from the text-to-video workflow and bring it into the VACE workflow; then you can bypass MPS, which is the node that causes face issues. You can also bypass the detail LoRAs in case they are having any negative effects. Just letting you know that's an option. Wrapper VACE also seems to work much better, as I've had very good results with it.

    demuzinc511Jun 21, 2025· 1 reaction

    Thanks. I tried that - spent 8 hours on it yesterday. Overall, if you generate in 480P+ resolution, there aren't major issues with faces or anything else. Soon I’ll be testing the LoRA on a huge generated video dataset (10,000+), which should make it easier to identify pros and cons. If you're interested, I can post the results here later.

    vrgamedevgirl
    Author
    Jun 21, 2025

    @demuzinc511 Are you training a LoRA on the dataset? I would love to test it if it's open to the public.

    demuzinc511Jun 22, 2025

    I’m not training. I allow users to generate videos for free in my project. What I meant is that project users will generate 10,000+ videos using your LoRA combined with various other LoRAs, including different values provided by Remade. A couple of hours ago, an update went live in my project, and video generation has already started. I'll wait for a large sample of videos and then analyze it. Overall, initial results look very promising.

    demuzinc511Jun 22, 2025

    Off topic: I don't understand why on Windows I can get an inference time of [00:49<00:00, 6.19s/it] (81 frames, 832x480), but on Linux I only get 65 frames in that time... 8/8 [00:48<00:00, 6.03s/it]

    KristirinaJun 24, 2025· 9 reactions
    CivitAI

    GPT has gone crazy with its emoji usage in recent update.....

    vrgamedevgirl
    Author
    Jul 1, 2025· 1 reaction

    not sure how that pertains to this?

    davedrewhull898Jun 28, 2025
    CivitAI

    The I2V works great for me at 10 steps and full strength. What is a Phantom LoRA? Also, if your results are really slow-mo or lacking movement, use the High Speed Dynamic LoRA; it really helped! Thank you to both creators!

    vrgamedevgirl
    Author
    Jul 1, 2025

    It's for the Phantom workflow. You can mix images to create a video using those images. It's really awesome. Go check out the workflow; it should be under my LoRA workflows.

    wzr905636Jul 3, 2025
    CivitAI

    I'm really loving this! I’m curious—which LoRAs are fused into the Phantom LoRA? I'd like to know so I can try combining them with other LoRAs myself. Also, is it possible to swap Phantom with lightx2v to improve speed?

    vrgamedevgirl
    Author
    Jul 3, 2025

    These are the ingredients workflows that have all the LoRAs exposed: https://civitai.com/models/1690979
    I haven't had a chance to create a Phantom one yet, BUT you can easily go into the text-to-video workflow and copy the LoRAs over into a Phantom WF. Just don't use a FusionX main model, since the LoRAs already make up FusionX.

    wzr905636Jul 3, 2025

    I’ve downloaded all your workflows—since only T2V and I2V are available right now, I already understand how they’re combined. I’m just not sure if Phantom LoRA uses a different setup.

    vrgamedevgirl
    Author
    Jul 3, 2025

    @wzr905636 You can take any Phantom workflow and just add the LoRA stack. You can't swap Phantom with LightX, but you can use the base Phantom model with the LoRA stack that has the LightX LoRA in it. Just copy the LoRAs from the current t2v workflow and paste them into a Phantom workflow.

    ClocksmithJul 6, 2025· 1 reaction
    CivitAI

    I have only tried this for T2V. I wish the quality was as good as the FusionX base model because it is noticeably faster but it isn't quite there for me. Looks like it is a good choice for I2V based on the posts people are making, though.

    mx52020Jul 7, 2025
    CivitAI

    how do you use this on pinokio?

    vrgamedevgirl
    Author
    Jul 7, 2025

    I have never used Pinokio, but I bet if you Google "ComfyUI Pinokio" there will be a ton of videos.

    gambikules858Jul 12, 2025· 3 reactions
    CivitAI

    Hello. What is the checkpoint for the Phantom LoRA?

    TSAHYJul 13, 2025
    CivitAI

    Sorry for the newb question, but the model mentioned in the LoRA details is Wan Video 14B i2v 720p. Which LoRA should I use for the GGUF version?

    vrgamedevgirl
    Author
    Jul 13, 2025

    The same LoRA would be used; if you're using an i2v main model, use the i2v LoRA. BUT I can say you will get much better and faster image-to-video quality if you use the new Lightning workflows. Reach out if you need more details.

    InoSimJul 13, 2025

    @vrgamedevgirl What are those Lightning workflows? I'm interested if I can speed up without losing quality (losing 5-10% at most would be fine!). Currently I can render 101 frames in about 3 minutes on a 5090 at 624x624 resolution.

    vrgamedevgirl
    Author
    Jul 13, 2025· 1 reaction

    @InoSim This workflow has better everything! https://civitai.com/models/1736052?modelVersionId=1964792
    Just make sure you try it with the default settings, and DON'T use a FusionX model as the main model, since this is an ingredients WF. The new LoRA called "LightX" does not play well with some of the LoRAs used inside FusionX, so make sure you use the models in the WF. Links to all the models and LoRAs used are in the WF.

    gambikules858Jul 14, 2025· 5 reactions
    CivitAI

    best result for me Lightning 0.7 + FusionX 0.5

    AIBeauty4SeenJul 17, 2025· 4 reactions
    CivitAI

    Can somebody explain what FusionX does? I just want to understand it more and see how I can use it. I just don't get it.

    vrgamedevgirl
    Author
    Jul 17, 2025

    It's a text-to-video model based on Wan 2.1. It also has image-to-video, multi-image-to-video, and VACE, which lets you use control videos. Reach out on Discord for more details; the link is in the description.

    a1161327317Jul 26, 2025· 1 reaction
    CivitAI

    What is Pusa?

    ProvenFlawlessJul 30, 2025· 4 reactions

    "In da pussa!!!"

    Seeker360Jul 31, 2025

    ProvenFlawless This made me laugh out loud. Family Guy reference?

    GFrostJul 29, 2025· 2 reactions
    CivitAI

    Wan 2.2 is out; are there any plans to do a LoRA for it?

    nathanhalkoJul 29, 2025

    In my preliminary tests, Wan 2.1 LoRAs seem to work well with 2.2.

    GFrostJul 29, 2025

    nathanhalko Well, the console says something like:

    lora key not loaded: diffusion_model.blocks.7.cross_attn.k_img.lora_up.weight

    Usually I see that message when a LoRA isn't loading properly.

    I'm trying I2V.

    GFrostJul 29, 2025

    nathanhalko Hm. Using the T2V version doesn't reproduce the error.

    vrgamedevgirl
    Author
    Jul 29, 2025· 1 reaction

    I plan to update when I have time. Also, the key error is not really an error; it's from the MPS LoRA. The LoRA still works.

    GFrostJul 29, 2025

    vrgamedevgirl Thnx. No rush. Just good to know you will update =)

    kennysladefan293Jul 31, 2025

    vrgamedevgirl thanks, the FusionX version of Wan 2.1 is so great that I might not even update to Wan 2.2 but I'd like to try it out anyway just to compare :)

    8531998Aug 2, 2025

    vrgamedevgirl Extremely helpful as always, joining your discord was definitely a good idea. Looking forward to see what you manage to cook up for Wan 2.2. 🙏🏻

    dineshup2669707Nov 7, 2025· 1 reaction

    @nathanhalko Where do you place it?

    UnsensualBurgerAug 1, 2025· 1 reaction
    CivitAI

    After a little troubleshooting and comparison testing, WAN 2.2 still works amazingly with this LoRA.

    kkkdsadasAug 6, 2025

    I don't think so. The high-noise version is completely unusable.

    GFrostAug 23, 2025· 1 reaction

    Can you share what exactly you did?

    fly333Aug 2, 2025
    CivitAI

    For some reason, I can't link this LoRA. When I post a video on Civit, I don't see it listed under resources. I'd love to give you credit, but it just doesn't show up there.

    AIBeauty4SeenAug 15, 2025

    Do you mean you can't find the LoRA file to add it?

    BodhiExPresSep 19, 2025

    I'm having the same issue; it's in the correct folder, just not showing up in the Load LoRA node.

    EndlessDreamOnceHumanOct 5, 2025· 2 reactions
    CivitAI

    FusionX on my RTX 4080 Super: a 5-second video in 15 minutes, and WAY better quality. You genius!