CivArchive

    It is my pleasure to introduce Mangled Merge Flux to the Civitai community. Continuing the tradition started with Stable Diffusion 2.1 and then SDXL, this will be the new home for crazy merge experiments built on the Flux.1 model architecture.

    V1

    Mangled Merge V1 is a merge of Mangled Merge Flux Matrix, Mangled Merge Flux Magic, PixelWave 03, FluxBooru v0.3, and Flux-dev-de-distill, creating a model that offers the aesthetics of 808 merged LoRAs, the styles of PixelWave, the knowledge of the Booru dataset, and the functionality of a de-distilled model. LoRAs work fine with this model. CFG works best between 1 and 3.5; Flux Guidance doesn't work, but negative prompts work great, and I HIGHLY recommend using dynamic thresholding (see the sketch below).
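
    For anyone curious what that combination means mechanically, here is a minimal PyTorch sketch of classifier-free guidance followed by Imagen-style dynamic thresholding. The function and parameter names are illustrative only; in practice a sampler extension (e.g. Dynamic Thresholding for ComfyUI/A1111) handles this for you.

    ```python
    import torch

    def cfg_dynamic_threshold(pred_uncond, pred_cond, cfg_scale=2.5, percentile=0.995):
        # Classifier-free guidance: push the prediction away from the
        # negative/unconditional branch by cfg_scale
        pred = pred_uncond + cfg_scale * (pred_cond - pred_uncond)

        # Imagen-style dynamic thresholding (classically applied to the
        # predicted x0): clamp each sample to its own high percentile
        # and rescale, taming the saturation that CFG > 1 tends to cause
        flat = pred.reshape(pred.shape[0], -1).abs()
        s = torch.quantile(flat, percentile, dim=1)
        s = s.clamp(min=1.0).view(-1, *([1] * (pred.dim() - 1)))
        return pred.clamp(-s, s) / s
    ```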

    For a comparison between Mangled Merge V1 and the Flux.1 Dev model, check out this post.

    Disclaimer:
    Civitai only lets me choose from a limited set of pruning options when uploading, and it doesn't let me pick the same option twice, so to keep everything on one page I had to work with what was available. Here are the real quants and sizes:


    Edit 11/16/2024:
    Due to popular demand, I've removed the Q8_0 GGUF and replaced it with the FP8 Safetensor.

    BF16 - 22.17 GB
    FP8 Safetensor - 11.08 GB
    Q6_K - 9.18 GB
    Q5_K - 7.85 GB
    Q4_K - 6.46 GB
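
    If you'd like to use the GGUF quants outside ComfyUI, recent diffusers releases can load them too. A hedged sketch, assuming diffusers >= 0.32 with GGUF support installed; the local file name is illustrative:

    ```python
    import torch
    from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

    # Illustrative local path to one of the GGUF quants listed above
    transformer = FluxTransformer2DModel.from_single_file(
        "mangledMergeFlux_v10Dedistilled_Q6_K.gguf",
        quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
        torch_dtype=torch.bfloat16,
    )

    # Borrow the text encoders and VAE from the base Flux.1 Dev repo
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    )
    pipe.enable_model_cpu_offload()

    image = pipe("an astronaut riding a horse", num_inference_steps=28).images[0]
    image.save("out.png")
    ```

    Depending on your diffusers version, true CFG with a negative prompt may need extra arguments; the negative-prompt plus dynamic-thresholding workflow described above is easiest to reproduce in ComfyUI.
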
    v0:
    This first version is more of a preliminary learning starting point. I plan on exploring, and even creating, new merge methods as I continue to experiment; v0, however, is old-school brute-force merging and folding (see the sketch below). I have tried working on a Della merge, but so far I am getting OOM errors due to the sheer size of the Flux model structure, even with 24 GB of VRAM. I have some new angles I plan on trying this week, however. More to come.
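
    For reference, this style of brute-force merge is just a weighted average of matching tensors. A minimal sketch with hypothetical file names, lazy-loading tensors one pair at a time (which is also how you sidestep holding two full ~22 GB checkpoints on the GPU at once):

    ```python
    from safetensors import safe_open
    from safetensors.torch import save_file

    alpha = 0.5  # blend ratio toward model_b

    # Hypothetical file names; safe_open reads tensors lazily, so only
    # one pair is materialized at a time (the merged dict still grows
    # to full model size in RAM before saving)
    with safe_open("model_a.safetensors", framework="pt") as fa, \
         safe_open("model_b.safetensors", framework="pt") as fb:
        b_keys = set(fb.keys())
        merged = {}
        for key in fa.keys():
            wa = fa.get_tensor(key)
            if key in b_keys:
                wb = fb.get_tensor(key)
                if wb.shape == wa.shape:
                    merged[key] = ((1 - alpha) * wa.float() + alpha * wb.float()).to(wa.dtype)
                    continue
            merged[key] = wa  # keep tensors the other model lacks

    save_file(merged, "merged.safetensors")
    ```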

    This was also a learning process for llama.cpp quantization and Schnell conversion. I have quantization down, but the Schnell conversion for v0 was just a simple merge of the Flux Dev-to-Schnell 4-step LoRA into the model (sketched below). I plan on exploring new techniques for the conversion process in the future as well.
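
    Folding a LoRA into base weights amounts to W' = W + scale * (up @ down) per targeted layer. A hedged sketch: the file names are hypothetical, and LoRA key naming varies by trainer, so the key mapping below is an assumption.

    ```python
    from safetensors.torch import load_file, save_file

    scale = 1.0  # full-strength fold
    base = load_file("flux1-dev.safetensors")           # hypothetical path
    lora = load_file("schnell_4step_lora.safetensors")  # hypothetical path

    for key, w in base.items():
        # Assumed layout: "<layer>.weight" pairs with
        # "<layer>.lora_up.weight" / "<layer>.lora_down.weight"
        up_key = key.replace(".weight", ".lora_up.weight")
        down_key = key.replace(".weight", ".lora_down.weight")
        if w.dim() == 2 and up_key in lora and down_key in lora:
            delta = lora[up_key].float() @ lora[down_key].float()
            base[key] = (w.float() + scale * delta).to(w.dtype)

    save_file(base, "flux_dev_schnell_4step.safetensors")
    ```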

    For a list of the LoRAs included in this model, please follow this Google Sheets link.


    Checkpoint
    Flux.1 D

    Details

    Downloads
    406
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/2/2024
    Updated
    9/28/2025
    Deleted
    -

    Files

    mangledMergeFlux_v10Dedistilled.gguf

    Mirrors

    mangledMergeFlux_v10Dedistilled.safetensors