CivArchive

    This is the GGUF conversion of Wan2.2-Remix I2V v2.0 - https://civarchive.com/models/2003153

    For v0.8, go to https://civarchive.com/models/2097002

    For v2.1, go to https://civarchive.com/models/2347977

    For v3.0, go to https://civarchive.com/models/2472759

    T2V available at https://civarchive.com/models/2094656

    Files are also available at https://huggingface.co/BigDannyPt/Wan-2-2-Remix

    All conversions used the NSFW-Wan-UMT5-XXL text encoder (FP8 version) and the Wan 2.1 FP32 VAE


    Comments (74)

    straytzenscribeNov 9, 2025· 1 reaction
    CivitAI

    Thanks 🙏🏻. This version has fixed the heavy grain issue present in the previous version. Thanks again.

    GlowingGuardianGirlNov 9, 2025
    CivitAI

    Hey, quick question: how long does it take you to convert one of those models to a GGUF version? And how do you do that?

    Santodan
    Author
    Nov 9, 2025· 6 reactions

    I'm thinking of writing an article about it.

    But to go straight to the point: I have an R3600 CPU and 32GB of RAM with a 50GB pagefile (virtual memory), and everything was on an SSD.

    This was the first time I did everything on my PC, and it took 8 hours for the whole conversion and upload to Hugging Face (I have a script that does all of that).

    Started at 16:30 and ended around 00:00, going by the upload time of the latest file on Hugging Face (I went to sleep at that point, since the script also switches off the PC).

    On RunPod with a 4090 and 80GB of RAM (it's cheaper than going with CPU pods) it takes around 1 hour to do it all, including the tests.

    The tests took me another 3 hours, since I have an RX 6800.

    Now I have to do the math to see which one is cheaper: my electricity bill or RunPod.
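The convert-quantize-upload pipeline described above can be sketched as a dry run. The tool names (convert.py from city96's ComfyUI-GGUF, llama.cpp's llama-quantize, huggingface-cli) are assumptions about a typical setup, not necessarily the author's actual script, and the commands are only echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of the convert -> quantize -> upload pipeline described above.
# Tool names are assumptions (city96's ComfyUI-GGUF conversion script and
# llama.cpp's quantizer are commonly used for this); each step is echoed only.
SRC="Wan2.2-Remix-I2V-v2.0.safetensors"
BF16="Wan2.2-Remix-I2V-v2.0-BF16.gguf"

# 1) safetensors -> BF16 GGUF
echo "python convert.py --src $SRC"

# 2) BF16 GGUF -> the usual quant levels
for q in Q8_0 Q6_K Q5_K_M Q4_K_M Q3_K_M; do
  echo "./llama-quantize $BF16 ${BF16%-BF16.gguf}-$q.gguf $q"
done

# 3) push everything to Hugging Face
echo "huggingface-cli upload BigDannyPt/Wan-2-2-Remix . --include '*.gguf'"
```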

    Ceylon_AiNov 20, 2025

    Damn bro. That's a long time. Please reply here with a link if you write the article, would really appreciate it.

    Santodan
    Author
    Nov 21, 2025

    @Ceylon_Ai that's because I'm still working on the scripts I use, to get them the way I really like.

    Santodan
    Author
    Nov 26, 2025· 1 reaction

    I finally got some time to make this - https://civitai.com/articles/22952 - feel free to read it and see if there is anything that should be changed / improved.

    mobdik17378Nov 9, 2025
    CivitAI

    Does v2 actually respect faces and behave like i2v?

    Santodan
    Author
    Nov 10, 2025

    this is the I2V model

    mobdik17378Nov 10, 2025· 1 reaction

    @BigDannyPt I know, I mean this is the V2 of the I2V model (I assume?). The original had a face problem, I was asking if this one seems to have fixed that

    Santodan
    Author
    Nov 10, 2025· 1 reaction

    @mobdik17378 Not sure, I'm only doing the conversion and a single test to have a showcase here on CivitAI. You should ask this on the original model's page.

    HanaShiinaNov 10, 2025
    CivitAI

    Does anyone have a good workflow?

    Santodan
    Author
    Nov 10, 2025· 1 reaction

    I used the ComfyUI template for I2V; besides that, I think you can search the workflows available on CivitAI.

    lucal97Nov 12, 2025
    CivitAI

    I noticed that with this model the subject always moves his mouth. I tried blocking it with prompts, but it didn't work. Any advice?

    Santodan
    Author
    Nov 13, 2025

    Not really, but I think I also saw people complaining about that on Reddit for the regular Wan version.

    confernoNov 27, 2025

    @BigDannyPt so, any solution? I have the same bug with the SmoothMix Wan model. I want to try this one, but I see it has the same problem?

    Santodan
    Author
    Nov 28, 2025· 1 reaction

    @conferno no, I do not have any solution for it

    confernoDec 26, 2025

    @BigDannyPt I found a solution. You need NAG nodes before the KSampler (as I remember) and to increase their values a bit. You can Google it or ask ChatGPT; I found it on Reddit.

    Kary0819Nov 13, 2025
    CivitAI

    Thanks for the GGUF, but I am having massive face drift any time the image is of an Asian model; it always changes her features to be European. Any advice on this issue?

    Santodan
    Author
    Nov 14, 2025

    Not really; try the full model to see if the same thing happens.

    Kary0819Nov 14, 2025

    @BigDannyPt I ended up training a character I2V LoRA for Wan 2.2; it seems to work better now.

    ErikomNov 16, 2025

    @Kary0819 That is helpful, how much time did it take to train? I checked for resources on YT and there seem to be many; any specific how-to you can direct me towards? Would be thankful if you can point me in the right direction.

    Kary0819Nov 16, 2025

    @Erikom https://www.youtube.com/watch?v=2d6A_l8c_x8 I used AI Toolkit, and this is from the Ostris AI guys, who I believe had a part in making it? I'm not sure, but I think the trainer was made by them.

    Kary0819Nov 16, 2025

    @Erikom Oh, it took a couple of hours on a 4080 Ti. I only have 16GB of VRAM, but the toolkit has a low-VRAM option.

    dragomoboss270Nov 14, 2025
    CivitAI

    Can GGUF models support NSFW?

    zombieleaverNov 14, 2025· 4 reactions
    CivitAI

    one of the annoying things is that the characters start twitching and "bouncing" for some reason.

    QuestorDec 9, 2025· 2 reactions

    Ha, yes, I found this too. There's so much porn in these LoRAs that they can't help it, even when there's no stimulation.

    magnomabreu630Jan 12, 2026

    Yeap... very annoying

    zereshkealtNov 15, 2025· 2 reactions
    CivitAI

    Thanks, can you share a workflow? In I2V generation the faces change a lot; any solution?

    Santodan
    Author
    Nov 16, 2025· 1 reaction

    Don't know; I used the normal I2V template from GitHub to create the videos that are on the model page, with the text encoder and the VAE from the description.

    Nothing else, so no LoRAs, just the model, CLIP and VAE.

    zereshkealtNov 16, 2025

    @BigDannyPt ok, thanks. I can't figure out the workflow in the video; it looks like it has 2 workflows in it!

    Santodan
    Author
    Nov 16, 2025· 1 reaction

    @zereshkealt you only need to get the one from the ComfyUI default templates for I2V for Wan 2.2.

    In my video it's the same, but running twice, because I created two videos from two images so the model test was faster.

    You can also pick up that one and figure out what to delete to keep only one; it isn't that hard.

    zereshkealtNov 16, 2025

    @BigDannyPt I did, and it worked, better than I thought. I was using an 8-step workflow, but this one, 2+2, worked. Thanks.

    velantegNov 23, 2025· 2 reactions
    CivitAI

    With Q4, this model's quality is much worse than the Q3 SFW model with just an NSFW LoRA. Heavy flickering.
    With the Q3 SFW model and LoRA, quality was much better.

    soyv4Nov 30, 2025
    CivitAI

    Are these Wan2.2 Remix NSFW GGUF models ?

    Santodan
    Author
    Dec 1, 2025

    Yes; I think the SFW one doesn't have a 2.0 version.

    DaldoDec 24, 2025
    CivitAI

    Between which nodes can I add a LoRA (to add spice, such as bodily fluids)? Sorry if this is a stupid question, I am a newbie.

    Santodan
    Author
    Dec 25, 2025

    You can add it anywhere.

    What I would advise is to get a highly recommended workflow, then replace the model node with the GGUF one.

    That's enough.

    NOTORIOUS_EDDYJan 9, 2026
    CivitAI

    Is there a quantized version of NSFW-Wan-UMT5-XXL?
    Because I have 12GB of RAM and 15GB of VRAM.
    My system OOMs at the CLIP loader itself, because NSFW-Wan-UMT5-XXL-FP8 is too massive for my RAM.
    But if I use the Wan 2.1 umt5-xl, it does take the load.

    Santodan
    Author
    Jan 10, 2026

    I don't think so, and looking around Hugging Face, it seems that the bf16 one is broken.
    What you can do is raise the paging file in Windows, so it uses that space when you run out of RAM, but that will consume your drive storage and it will be very slow.

    NOTORIOUS_EDDYJan 11, 2026

    @BigDannyPt
    thanks for helping, I did find one. I would suggest adding this page to your description above, so anyone who has the same problem can look for these. The Q8 GGUF worked for me. https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main

    Santodan
    Author
    Jan 12, 2026

    @contacttheeddytor144 but that is the normal umt5, not the NSFW one.

    I've done tests with both and I don't see much difference between them, but the NSFW one is the one recommended by the original model.

    NOTORIOUS_EDDYJan 19, 2026

    @BigDannyPt that's true, but the normal UMT5 does work similarly to the NSFW one with proper wording and prompt semantics; there isn't much difference.
    With the right LoRA files I was still able to get results. Maybe not as great as with NSFW-UMT5, but it still does the job.

    EchoHeadacheJan 27, 2026
    CivitAI

    Wait a second, are any lightx2v LoRAs baked in? This doesn't seem to handle high-step schedulers correctly.

    Santodan
    Author
    Jan 28, 2026

    Check the original model for it

    CrystalVisageJan 28, 2026
    CivitAI

    Will you do V2.1?

    Santodan
    Author
    Jan 29, 2026

    I was not aware of v2.1. I can do it when I have time, maybe by tomorrow.

    CrystalVisageJan 30, 2026

    @Santodan Thanks mate

    Santodan
    Author
    Jan 30, 2026· 1 reaction

    @CrystalVisage Already available at https://civitai.com/models/2347977 or on my Hugging Face

    CrystalVisageFeb 1, 2026

    @Santodan yeah saw it. Thank you for the model. It works well.

    subhanzgFeb 1, 2026

    @Santodan it'd be great if you could mention it (link to 2.1) in the description of this GGUF

    Santodan
    Author
    Feb 2, 2026

    @subhanzg added

    BlackCourses777Feb 2, 2026
    CivitAI

    I can't make it work, could someone help me? I get this from WanVideoKsampler:

    mat1 and mat2 shapes cannot be multiplied (154x2048 and 4096x5120)

    Santodan
    Author
    Feb 3, 2026

    I don't think I used that node, but I'm going to assume it is this one - https://github.com/ShmuelRonen/ComfyUI-WanVideoKsampler

    I went ahead and tested with Q6, since my PC doesn't handle Q8 and I didn't want to go to RunPod, and noticed that the node doesn't accept start and end steps, so it will not work correctly with Wan 2.2, since you first run half the steps on the high-noise model and then the rest on the low-noise one.
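The split described above can be sketched in a few lines. The midpoint boundary is an assumption matching the default ComfyUI Wan 2.2 template (KSamplerAdvanced's start_at_step / end_at_step); a sampler node without those inputs cannot express this split:

```python
# Minimal sketch of the two-stage schedule Wan 2.2 I2V expects: the
# high-noise model denoises the first half of the steps, the low-noise
# model the second half. The midpoint default is an assumption based on
# the standard ComfyUI template.
def split_steps(total_steps, boundary=None):
    """Return (start, end) step windows for the high- and low-noise passes."""
    if boundary is None:
        boundary = total_steps // 2
    return (0, boundary), (boundary, total_steps)

high, low = split_steps(8)
print(high, low)  # (0, 4) (4, 8)
```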

    kcouuockFeb 15, 2026

    Sounds like a problem with the CLIP encoder; try another CLIP (text encoder).
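The error shape itself supports that diagnosis. The sizes below come from the error message; mapping 2048 to the umt5-xl hidden size and 4096 to umt5-xxl is an inference, not something confirmed in the thread:

```python
import numpy as np

# "mat1 and mat2 shapes cannot be multiplied (154x2048 and 4096x5120)" is an
# inner-dimension mismatch: 154 tokens of 2048-dim embeddings hitting a weight
# that expects 4096-dim input. The encoder-size mapping is an assumption.
cond = np.zeros((154, 2048))   # conditioning from a smaller text encoder
proj = np.zeros((4096, 5120))  # model projection expecting 4096-dim input

try:
    cond @ proj
except ValueError as err:
    print("matmul fails:", err)

# With a 4096-dim encoder, the shapes line up:
print((np.zeros((154, 4096)) @ proj).shape)  # (154, 5120)
```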

    bbqqooqq1216Feb 7, 2026
    CivitAI

    Hello. When I link Wan2.2-Remix I2V GGUF v2.0_Q80.gguf into the Wan2.2-Remix-I2V-Ai Verse workflow in ComfyUI, the picture is broken and blurry, and the video output is fluorescent. Does anyone know how to fix this? I've been researching this with Gemini for several days, but I can't find a solution. Please help me!!

    mjanek20Feb 8, 2026
    CivitAI

    With all those Wan 2.2 NSFW models, my girls start a fast humping motion almost from the beginning of the video, even if they are alone in the frame. What am I doing wrong?? No prompt seems to change it :(

    navyguy757Feb 12, 2026
    CivitAI

    can you share a workflow?

    Santodan
    Author
    Feb 12, 2026

    I used ComfyUI's template and then replaced the model and CLIP loaders with the GGUF ones, nothing more.

    kinkyadrenalynnMar 12, 2026

    @Santodan Which ComfyUI template did you use?

    kinkyadrenalynnMar 12, 2026
    CivitAI

    Which ComfyUI workflow template did you use?

    Santodan
    Author
    Mar 12, 2026

    I always use the default I2V Wan 2.2 template for my tests; the only thing you need to do is replace the checkpoint and CLIP loaders with the GGUF nodes.

    kinkyadrenalynnMar 12, 2026

    @Santodan Ok, so I started with the video_wan2_2_14B_I2V. I'm sorry for the newb question - how do I replace the checkpoint and clip loaders?

    Santodan
    Author
    Mar 12, 2026

    @kinkyadrenalynn you have to get the ComfyUI-GGUF nodes, then add the GGUF Unet loader and move the connection from the checkpoint / model loader to the GGUF one.

    If you are not using the GGUF clip, then you don't need to change the clip one.

    Santodan
    Author
    Mar 12, 2026· 1 reaction

    @kinkyadrenalynn I don't know which one you installed. I told you: ComfyUI-GGUF. It isn't an extension, it is a node pack in ComfyUI.

    kinkyadrenalynnMar 13, 2026

    @Santodan Do you have a screenshot of your workflow?

    kinkyadrenalynnMar 13, 2026

    @Santodan  I added the Unet Loader (GGUF) to the video_wan2_2_14B_I2V workflow

    Santodan
    Author
    Mar 13, 2026· 1 reaction

    @kinkyadrenalynn this is the workflow; just copy the whole text and paste it into the ComfyUI window.

    https://rentry.co/scqok9ha

    kinkyadrenalynnMar 13, 2026

    @Santodan Thank you!

    TrafficMeanyMar 24, 2026
    CivitAI

    FYI he did one for V3 also!!

    https://civitai.com/models/2472759?modelVersionId=2780612

    THANK YOU! YOU ARE THE BEST!

    kisskiller20994Apr 12, 2026
    CivitAI

    Hi, I have a problem. This is my workflow, but when I start it, it gives a bad final result. I tried to use ClipLoader GGUF but it seems to start only when I use the simple one... Ideas? Thanks
    https://i.ibb.co/KxrR4jWD/123456.png

    Santodan
    Author
    Apr 13, 2026

    Yeah, because you are using an fp8 safetensor in the GGUF loader.
    GGUF loaders are for GGUF, not for safetensors.

    You have to download the GGUF for the clip and replace it in the node, or just use the normal clip node for that clip.

    kisskiller20994Apr 13, 2026

    @Santodan thanks a lot for the reply! You are totally right about the GGUF loader giving an error with safetensors. However, my screenshot was a mistake (I was testing different nodes and took the wrong pic, sorry): the mosaic result occurred using the standard CLIPLoader with the FP8 file. I am still getting this exact mosaic/corrupted output. Any ideas on what could be breaking the latents?

    Santodan
    Author
    Apr 14, 2026

    @kisskiller20994 can you share the whole workflow with me?

    Checkpoint
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,010
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/8/2025
    Updated
    5/17/2026
    Deleted
    -