It is my pleasure to introduce Mangled Merge Flux to the Civitai community. Continuing the tradition started with Stable Diffusion 2.1 and then SDXL, this will be the new home for crazy merge experiments done with the Flux.1 model architecture.
V1
Mangled Merge V1 is a merge of Mangled Merge Flux Matrix, Mangled Merge Flux Magic, PixelWave 03, FluxBooru v0.3, and Flux-dev-de-distill. The goal is a model that offers the aesthetics of 808 merged loras, the styles of PixelWave, the knowledge of the Booru dataset, and the functionality of a de-distilled model. Loras work fine with this model. CFG works best between 1 and 3.5; Flux Guidance doesn't work, but negative prompts work great, and I HIGHLY recommend using dynamic thresholding.
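Dynamic thresholding here means clamping the high-CFG prediction back into the dynamic range a tamer guidance scale would produce, so strong CFG doesn't blow out colors and contrast. The snippet below is a simplified NumPy sketch of that idea; the `mimic_scale` and `percentile` knobs are illustrative assumptions, not the exact parameters of any particular extension.

```python
import numpy as np

def dynamic_threshold_cfg(cond, uncond, cfg_scale=3.0, mimic_scale=1.0, percentile=99.5):
    # Standard classifier-free guidance: push the conditioned prediction
    # away from the unconditioned one by cfg_scale.
    guided = uncond + cfg_scale * (cond - uncond)
    # What the prediction would look like at a tamer guidance scale.
    mimic = uncond + mimic_scale * (cond - uncond)
    # Clamp the guided prediction to its own high-percentile magnitude,
    # then rescale it into the mimic prediction's dynamic range.
    s_guided = np.percentile(np.abs(guided), percentile)
    s_mimic = np.percentile(np.abs(mimic), percentile)
    clipped = np.clip(guided, -s_guided, s_guided)
    return clipped / s_guided * s_mimic

rng = np.random.default_rng(0)
cond = rng.normal(size=(4, 64, 64))    # stand-in for a conditioned noise prediction
uncond = rng.normal(size=(4, 64, 64))  # stand-in for an unconditioned prediction
out = dynamic_threshold_cfg(cond, uncond, cfg_scale=3.5)
```

In a real sampler this would run once per denoising step on the model's noise predictions; here the random tensors just stand in for those.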
For a comparison between Mangled Merge V1 and the Flux.1 Dev model, check out this post.
Disclaimer:
Civitai only lets me choose from limited pruning options when uploading and doesn't let me choose the same option twice, so in order to keep everything on one page, I had to choose from what they had available. But here are the real quants and sizes:
BF16 - 22.17 GB
Q8_0 - 11.85 GB
Q6_K - 9.18 GB
Q5_K - 7.85 GB
Q4_K - 6.46 GB
v0:
This first version is more of a preliminary learning starting point. I plan on exploring and even creating new merge methods as I continue to experiment; however, v0 is old-school brute-force merging and folding. I have tried working on a Della method, but so far I am getting OOM errors due to the sheer size of the Flux model structure, even with 24 GB of VRAM. I have some new angles I plan on trying this week, however. More to come.
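The "brute force" merging mentioned above boils down to a weighted average over matching checkpoint tensors. Here is a minimal NumPy sketch; the key names and the uniform weighting are illustrative assumptions, and real Flux checkpoints would be loaded tensor-by-tensor to stay within memory.

```python
import numpy as np

def brute_force_merge(state_dicts, weights):
    # Plain weighted average of checkpoint tensors, assuming all
    # checkpoints share the same keys and shapes.
    assert len(state_dicts) == len(weights)
    total = sum(weights)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for sd, w in zip(state_dicts, weights)) / total
    return merged

# Toy two-checkpoint example with a single hypothetical layer.
a = {"layer.weight": np.ones((2, 2))}
b = {"layer.weight": np.full((2, 2), 3.0)}
merged = brute_force_merge([a, b], weights=[0.5, 0.5])
```

Methods like DARE or Della differ from this mainly in how they prune and rescale the per-tensor deltas before combining them, which is where the extra memory pressure comes from.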
This was also a learning process for GGUF (llama.cpp-style) quantization and Schnell conversion. I have quantization down, but the Schnell conversion for v0 was just a simple merge of the Flux Dev to Schnell 4-step LoRA. I plan on exploring new techniques for the conversion process in the future as well.
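Merging a LoRA into base weights, as done for the v0 Schnell conversion, amounts to adding the low-rank product back into each matched weight matrix: W' = W + scale * (up @ down). A hedged NumPy sketch follows; the names and dimensions are illustrative, and real LoRA checkpoints also carry an alpha/rank factor folded into the scale.

```python
import numpy as np

def merge_lora_into_weight(weight, lora_down, lora_up, scale=1.0):
    # Fold a LoRA into a base weight matrix: W' = W + scale * (up @ down).
    return weight + scale * (lora_up @ lora_down)

rank, out_dim, in_dim = 4, 8, 8
rng = np.random.default_rng(1)
W = rng.normal(size=(out_dim, in_dim))
down = rng.normal(size=(rank, in_dim))  # "A" matrix: projects into rank-r space
up = np.zeros((out_dim, rank))          # "B" matrix: zero-initialized in training
merged = merge_lora_into_weight(W, down, up, scale=1.0)
```

With the up matrix at zero (its training initialization), the merge is a no-op; as the LoRA trains, the product shifts the base weights toward the adapted behavior.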
For a list of loras included in this model please follow this Google Sheets Link.
Description
Edit 11/16/2024:
Due to popular demand, I've removed the Q8_0 GGUF and replaced it with the FP8 Safetensor.
BF16 - 22.17 GB
FP8 Safetensor - 11.08 GB
Q6_K - 9.18 GB
Q5_K - 7.85 GB
Q4_K - 6.46 GB
FAQ
Comments (27)
Can you post some images comparing this vs. the original FLUX Dev?
Sure. Working on it now. I will respond again once it's set.
Check below or linked here:
https://civitai.com/posts/8666955
@pmango300574 thx , put the link into your description ;)
@sevenof9247 All set! :)
The example images look fantastic! I would love to try out this de-distilled model. Do you plan on uploading non-GGUF FP8 version? I'm not seeing it in the download section. Thanks in advance.
Thank you! Civitai only lets me choose from limited pruning options when uploading and doesn't let me choose the same option twice, so in order to keep everything on one page, I had to choose from what they had available. But here are the real quants and sizes. There are 5 files available. The file sizes match the quants below, so use them as a guide.
BF16 - 22.17 GB
Q8_0 - 11.85 GB
Q6_K - 9.18 GB
Q5_K - 7.85 GB
Q4_K - 6.46 GB
Thanks for the quick reply. Maybe I was not too clear. I can see the table, but I don't find a "pure" non-quantized FP8 version of the model in .safetensors format, which is normally ~11.1 GB in size. The only available FP8 variant is in GGUF format, which I don't like to use since my GPU (RTX 3090) is capable of running "pure" FP8 models just fine. If you check other Flux model creators, they usually provide an FP8 model in standard non-quantized .safetensors format. I hope that makes sense now. :)
@mmdd2543 Ahhh. Ok I understand. Let me see if I can make one ...
@mmdd2543 OK I got it figured out. You can find it here:
https://huggingface.co/ManglerFTW/Mangled_Merge_Flux_V1_Dedistilled/tree/main
@pmango300574 Awesome! I can't wait to try it out!
The example images look great, well done!
Thank you!
I can't get the example images to look like that with Forge, anyway!
Hey, I'm using comfyui mainly. I haven't used Forge, but later on I will try to set it up and see what happens. Keep in mind this is a dedistilled model so you will want to use regular CFG. Also a lot of the images in the examples had dynamic thresholding included.
OK I was able to get Forge up and running to test. Seems to run fine in there. Check out the link for how I had it set up.
https://drive.google.com/file/d/1v27hsCS913vcaepXTgRJHtAYCVoDetYR/view?usp=sharing
One other thing. I went into Settings>Compatibility and changed "Try to reproduce the results from external software" to Comfyui.
@pmango300574 Thank you very much for your reply. I was fascinated by the seed (46165870180630) in your example image. After reading your instructions, I can basically generate similar images using Forge. Thanks again for your reply!
@myun7570206 sweet! Anytime!
Your models look amazing, but can you explain/create a list of the prompts and how to activate the loras in your flux models?
Thank you! As far as a list of prompts, I share all of my prompts in the images I post. They all work and more! I usually just go through other image posts and use shared prompts to test. As far as loras, what program are you using the model with?
GGUF is so slow!!!
Please provide FP8 safetensors.
I have them available on my Hugging Face.
https://huggingface.co/ManglerFTW/Mangled_Merge_Flux_V1_Dedistilled/tree/main
Ok, no matter what I do, I just can't get this checkpoint to work: every single image I generate with Forge and SwarmUI looks bad, far from the examples shown here. I have the CLIP, VAE, and text encoder; I'm doing 30 steps, 3.5 normal CFG, I've tried various combinations of sampler and scheduler, and I'm not using Flux Guidance. Is there something I'm missing?
See here for how I had it set up in Forge.
https://drive.google.com/file/d/1v27hsCS913vcaepXTgRJHtAYCVoDetYR/view
@pmango300574 Ok, I finally managed to get it to generate some good images. Thanks for the help!