This is the GGUF conversion of Wan2.2-Remix I2V v2.0 - https://civarchive.com/models/2003153
For v0.8, go to https://civarchive.com/models/2097002
For v2.1, go to https://civarchive.com/models/2347977
For v3.0, go to https://civarchive.com/models/2472759
The T2V version is available at https://civarchive.com/models/2094656
Files are also available at https://huggingface.co/BigDannyPt/Wan-2-2-Remix
All conversions were made using NSFW-Wan-UMT5-XXL (FP8 version) and the Wan 2.1 FP32 VAE.
Thanks 🙏🏻. This version fixed the heavy grain issue present in the previous version. Thanks again.
Hey, quick question: how long does it take you to convert one of those models to a GGUF version? And how do you do it?
I'm thinking of writing an article about it.
But to get straight to the point: I have a Ryzen 3600 CPU and 32 GB of RAM with a 50 GB pagefile (virtual memory), and everything was on an SSD.
This was the first time I did everything on my own PC, and it took 8 hours for the whole conversion and upload to Hugging Face (I have a script that does all of that).
It started at 16:30 and ended around 00:00, going by the upload time of the last file on Hugging Face (I went to sleep at that point, since the script also switches off the PC).
On RunPod with a 4090 and 80 GB of RAM (cheaper than going with CPU pods) it takes around 1 hour to do it all, including the tests.
The tests took me another 3 hours since I have an RX 6800.
Now I have to do the math to see which one is cheaper: my electric bill or RunPod.
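For reference, the conversion itself boils down to two steps (this is only a sketch of my understanding of city96's ComfyUI-GGUF tools: the script paths, binary location, and filenames here are placeholders, not my actual script):

```python
def build_gguf_commands(src, quant="Q8_0"):
    """Build the two-step diffusion-model GGUF conversion:
    1) safetensors -> F16 GGUF (ComfyUI-GGUF's tools/convert.py)
    2) F16 GGUF -> quantized GGUF (a patched llama-quantize)
    Paths and tool locations are assumptions; adjust for your setup."""
    base = src.rsplit(".", 1)[0]
    f16_out = f"{base}-F16.gguf"
    quant_out = f"{base}-{quant}.gguf"
    convert_cmd = ["python", "tools/convert.py", "--src", src]
    quantize_cmd = ["./llama-quantize", f16_out, quant_out, quant]
    return convert_cmd, quantize_cmd

# Example: commands for one quant level (hypothetical filename)
for cmd in build_gguf_commands("wan2.2_remix_i2v_high.safetensors", "Q6_K"):
    print(" ".join(cmd))
```

Most of the wall-clock time goes to running the quantize step once per quant level (Q3_K_S, Q4_K_S, Q5_K_M, Q6_K, Q8_0, ...) and then uploading each file.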
Damn bro. That's a long time. Please reply here with a link if you write the article, would really appreciate it.
@Ceylon_Ai that's because I'm still working on the scripts I use, to get them the way I really like.
I finally got some time to write this: https://civitai.com/articles/22952 - feel free to read it and let me know if anything should be changed or improved.
Does v2 actually respect faces and behave like i2v?
this is the I2V model
@BigDannyPt I know, I mean this is v2 of the I2V model (I assume?). The original had a face problem; I was asking if this one seems to have fixed that.
@mobdik17378 Not sure; I only do the conversion and a single test for the showcase here on CivitAI. You should ask this on the original model's page.
Does anyone have a good workflow?
I used the ComfyUI template for I2V; besides that, I think you can search the workflows available on CivitAI.
I noticed that with this model the subject always moves their mouth. I tried blocking it with prompts, but it didn't work. Any advice?
Not really, but I think I also saw people complaining about that on Reddit for the normal Wan version.
@BigDannyPt So, any solution? I have the same bug with the SmoothMix Wan model. I want to try this one, but does it have the same problem?
@conferno No, I don't have any solution for it.
@BigDannyPt I found a solution. You need NAG nodes before the KSampler (as I remember) and to increase those values a bit. You can Google it or search via ChatGPT; I found it on Reddit.
Thanks for the GGUF, but I'm getting massive face drift any time the image is of an Asian model; it always changes her features to European. Any advice on this issue?
Not really; try the full model and see if the same thing happens.
@BigDannyPt I ended up training a character img2vid LoRA for Wan 2.2; it seems to work better now.
@Kary0819 That is helpful. How much time did it take to train? I checked for resources on YouTube and there seem to be many; is there a specific how-to you can direct me towards? I'd be thankful if you could point me in the right direction.
@Erikom https://www.youtube.com/watch?v=2d6A_l8c_x8 I used AI Toolkit, and this is from Ostris AI, who I believe had a part in making it? I'm not sure, but I think the trainer was made by them.
@Erikom Oh, it took a couple of hours on a 4080 Ti; I only have 16 GB of VRAM, but the toolkit has a low-VRAM option.
Can GGUF models support NSFW?
One of the annoying things is that the characters start twitching and "bouncing" for some reason.
Ha, yes, I found this too. There's so much porn in these LoRAs that they can't help it even when there's no stimulation.
Yep... very annoying.
Thanks. Can you share a workflow? In I2V generation the faces change a lot; any solution?
Don't know; I used the normal I2V template from GitHub to create the videos that are on the model page, with the text encoder and the VAE from the description.
Nothing else, so no LoRAs, just the model, CLIP and VAE.
@BigDannyPt Ok, thanks. I can't figure out the workflow in the video; it looks like it has 2 workflows in it!
@zereshkealt You only need the default ComfyUI template for I2V for Wan 2.2.
In my video it's the same, but run twice, because I created two videos from two images so the model test was faster.
You can also grab that one and work out what to delete to keep only one; it isn't that hard.
@BigDannyPt I did, and it worked better than I thought. I was using an 8-step workflow, but this 2+2 one worked. Thanks.
With Q4, this model's quality is much worse than the Q3 SFW model with just an NSFW LoRA: hard flickering.
With the Q3 SFW model and a LoRA, quality was much better.
Are these Wan2.2 Remix NSFW GGUF models?
Yes; I think the SFW model doesn't have a 2.0 version.
Between which nodes can I add a LoRA (to add spice, such as bodily fluids)? Sorry if this is a stupid question, I'm a newbie.
You can add it anywhere.
What I'd advise is to get a workflow with high recommendations, then replace the model node with the GGUF one.
That's enough.
Is there a quantized (GGUF) version of NSFW-wan-UMT5-xxl?
I have 12 GB of RAM and 15 GB of VRAM.
My system OOMs at the CLIP loader itself because NSFW-wan-UMT5-xxl-fp8 is too massive for my RAM,
but if I use the Wan 2.1 umt5-xl it takes the load.
I don't think so, and looking around Hugging Face, it seems the bf16 is broken.
What you can do is raise the pagefile size in Windows so it uses that space when you run out of RAM, but that consumes your drive storage and will be very slow.
@BigDannyPt Thanks for helping, I did find one. I'd suggest adding this page to the description above, so anyone else with this problem can look there. The Q8 GGUF worked for me: https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main
@contacttheeddytor144 But that is the normal umt5, not the NSFW one.
I've done tests with both and I also don't see much difference between them, but the NSFW one is what the original model recommends.
@BigDannyPt That's true, but the normal UMT5 works similarly to the NSFW one with proper wording and prompt semantics; there isn't much difference.
With the right LoRA files I was still able to get results. Maybe not as good as with NSFW_UMT5, but it still does the job.
Wait a second, are any lightx2v LoRAs baked in? This doesn't seem to treat high-step schedulers correctly.
Check the original model for it
Will you do V2.1?
I wasn't aware of v2.1. I can do it when I have time, maybe by tomorrow.
@Santodan Thanks mate
@CrystalVisage Already available at https://civitai.com/models/2347977 or on my Hugging Face.
@Santodan yeah saw it. Thank you for the model. It works well.
@Santodan It'd be great if you could mention it (a link to 2.1) in the description of this GGUF.
@subhanzg added
I can't make it work, could someone help me? I get this from WanVideoKsampler:
mat1 and mat2 shapes cannot be multiplied (154x2048 and 4096x5120)
I don't think I used that node, but I'm going to assume it's this one: https://github.com/ShmuelRonen/ComfyUI-WanVideoKsampler
I went ahead and tested with Q6, since my PC doesn't handle Q8 and I didn't want to go to RunPod, and noticed the node doesn't accept start and end steps, so it won't work correctly with Wan 2.2, where you first run half the steps on the high model and then the rest on the low one.
Sounds like a problem with the CLIP (text encoder); try another one.
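To expand on that: the error is a plain dimension mismatch. My reading of the numbers (an assumption from the message, not taken from the model code) is that the model's layer expects the 4096-wide embeddings umt5-xxl produces, while a smaller encoder like umt5-xl emits 2048-wide ones. A toy sketch of the shape rule:

```python
def matmul_shape(a, b):
    """Mimic the 2-D matmul shape rule: (n, k) @ (k, m) -> (n, m),
    raising the same style of error PyTorch prints on a mismatch."""
    (n, k1), (k2, m) = a, b
    if k1 != k2:
        raise RuntimeError(
            f"mat1 and mat2 shapes cannot be multiplied ({n}x{k1} and {k2}x{m})")
    return (n, m)

wrong_encoder = (154, 2048)    # token embeddings from a 2048-dim text encoder
layer_weight = (4096, 5120)    # weight expecting 4096-dim embeddings
try:
    matmul_shape(wrong_encoder, layer_weight)
except RuntimeError as err:
    print(err)  # mat1 and mat2 shapes cannot be multiplied (154x2048 and 4096x5120)

print(matmul_shape((154, 4096), layer_weight))  # (154, 5120): the right encoder fits
```

So swapping to an umt5-xxl text encoder (safetensors or GGUF) should make the shapes line up.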
With all these Wan 2.2 NSFW models, my girls start a fast humping motion almost from the beginning of the video, even when they're alone in the frame. What am I doing wrong?? No prompt seems to change it :(
can you share a workflow?
I used ComfyUI's template and then replaced the model and CLIP loaders with the GGUF ones, nothing more.
@Santodan Which ComfyUI template did you use?
I always use the default I2V Wan 2.2 template for my tests; the only thing you need to do is replace the checkpoint and CLIP loaders with the GGUF nodes.
@Santodan Ok, so I started with video_wan2_2_14B_I2V. Sorry for the newb question: how do I replace the checkpoint and clip loaders?
@kinkyadrenalynn You have to get the ComfyUI-GGUF nodes, then add the GGUF Unet loader and move the connection from the checkpoint/model loader to the GGUF one.
If you are not using the GGUF clip, then you don't need to change the CLIP loader.
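If rewiring the graph by hand is confusing, the same swap can be done on an exported API-format workflow JSON. This is only an illustrative sketch: "UnetLoaderGGUF" is the node class name from the ComfyUI-GGUF pack, but treat the input field names as assumptions about your particular export:

```python
import json

def use_gguf_unet(workflow, gguf_name):
    """Replace any stock model-loader node with the GGUF Unet loader.
    'UnetLoaderGGUF' comes from city96's ComfyUI-GGUF pack; the exact
    input field names here are assumptions about the export format."""
    for node in workflow.values():
        if node.get("class_type") in ("UNETLoader", "CheckpointLoaderSimple"):
            node["class_type"] = "UnetLoaderGGUF"
            node["inputs"] = {"unet_name": gguf_name}
    return workflow

# Minimal example: one loader node, hypothetical filenames
wf = {"1": {"class_type": "UNETLoader",
            "inputs": {"unet_name": "wan2.2_i2v.safetensors",
                       "weight_dtype": "default"}}}
print(json.dumps(use_gguf_unet(wf, "wan2.2_remix_i2v_Q6_K.gguf"), indent=2))
```

Dragging the connection over in the UI does the same thing; this is just the same edit expressed on the saved JSON.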
@kinkyadrenalynn I don't know which one you installed. As I said: ComfyUI-GGUF. It isn't an extension, it's a node pack in ComfyUI.
@Santodan Do you have a screenshot of your workflow?
@Santodan I added the Unet Loader (GGUF) node to the video_wan2_2_14B_I2V workflow.
@kinkyadrenalynn This is the workflow; just copy the whole text and paste it into the ComfyUI window.
@Santodan Thank you!
Making progress...
https://i.postimg.cc/Vkms53GN/wan22Remix-I2VGGUFV21-low-Q80-workflow.png
FYI he did one for V3 also!!
https://civitai.com/models/2472759?modelVersionId=2780612
THANK YOU! YOU ARE THE BEST!
Hi, I have a problem. This is my workflow, but when I run it I get a bad result. I tried using the GGUF ClipLoader, but it only seems to start when I use the simple one... any ideas? Thanks
https://i.ibb.co/KxrR4jWD/123456.png
Yeah, because you are using an fp8 safetensors file in the GGUF loader.
GGUF loaders are for GGUF files, not for safetensors.
You have to download the GGUF for the clip and use it in the node, or just use the normal clip node for that clip.
@Santodan Thanks a lot for the reply! You're totally right that the GGUF loader gives an error with safetensors; however, my screenshot was a mistake (I was testing different nodes and took the wrong pic, sorry). The mosaic result occurred using the standard CLIPLoader with the FP8 file, and I'm still getting this exact mosaic/corrupted output. Any ideas on what could be breaking the latents?
@kisskiller20994 Can you share the whole workflow with me?