    It is my pleasure to introduce Mangled Merge Flux to the Civitai community. Continuing the tradition started with Stable Diffusion 2.1 and then SDXL, this will be the new home for crazy merge experiments built on the Flux.1 model architecture.

    V1

    Mangled Merge V1 is a merge of Mangled Merge Flux Matrix, Mangled Merge Flux Magic, PixelWave 03, FluxBooru v0.3, and Flux-dev-de-distill. The goal is a model that combines the aesthetics of 808 merged LoRAs, the styles of PixelWave, the knowledge of the Booru dataset, and the functionality of a de-distilled model. LoRAs work fine with this model. CFG works best between 1 and 3.5, Flux Guidance has no effect, negative prompts work great, and I HIGHLY recommend using dynamic thresholding.
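    If you want to try those settings outside of a UI, here is a minimal sketch using diffusers. The checkpoint file name is a placeholder, and it assumes a recent diffusers release in which FluxPipeline accepts negative_prompt and true_cfg_scale; dynamic thresholding itself comes from a separate UI extension (e.g. a ComfyUI node) and is not shown.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load the merged transformer from a local single-file checkpoint.
# "mangledMergeFlux_v1.safetensors" is a placeholder file name.
transformer = FluxTransformer2DModel.from_single_file(
    "mangledMergeFlux_v1.safetensors", torch_dtype=torch.bfloat16
)

# Text encoders, tokenizers, and VAE come from the stock Flux.1 Dev repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# LoRAs load the same way they do for stock Flux.1 Dev:
# pipe.load_lora_weights("path/to/some_flux_lora.safetensors")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dusk",
    negative_prompt="blurry, oversaturated",  # real negatives work on the de-distilled merge
    true_cfg_scale=3.0,       # the "CFG" above: keep between 1 and 3.5
    guidance_scale=1.0,       # embedded Flux Guidance has no effect on this model
    num_inference_steps=30,
    height=1024,
    width=1024,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("mangled_merge_v1_test.png")
```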

    For a comparison between Mangled Merge V1 and the Flux.1 Dev model, check out this post.

    Disclaimer:
    Civitai only offers a limited set of pruning/precision options when uploading and doesn't let me choose the same option twice, so to keep everything on one page I had to pick from what was available. Here are the real quants and sizes:


    BF16 - 22.17 GB
    Q8_0 - 11.85 GB
    Q6_K - 9.18 GB
    Q5_K - 7.85 GB
    Q4_K - 6.46 GB
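
    As a sanity check on those numbers, a rough bits-per-weight estimate gets you close. The ~11.9B parameter count and the nominal llama.cpp bits-per-weight figures below are approximations, and GGUF files keep a few tensors at higher precision, so the real files differ by a few percent.

```python
# Back-of-the-envelope GGUF size estimate: params * bits-per-weight / 8, in GiB.
# The parameter count and bpw values are approximate; actual files keep some
# tensors at higher precision, so real sizes differ slightly.
PARAMS = 11.9e9  # Flux.1 transformer, roughly
BITS_PER_WEIGHT = {"BF16": 16.0, "Q8_0": 8.5, "Q6_K": 6.56, "Q5_K": 5.5, "Q4_K": 4.5}

for name, bpw in BITS_PER_WEIGHT.items():
    size_gib = PARAMS * bpw / 8 / 1024**3
    print(f"{name}: ~{size_gib:.2f} GiB")
```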
    V0

    This first version is more of a preliminary starting point for learning. I plan on exploring, and even creating, new merge methods as I continue to experiment; v0, however, is old-school brute-force merging and folding. I have tried working on a DELLA method, but so far I am getting OOM errors due to the sheer size of the Flux model structure, even with 24 GB of VRAM. I have some new angles I plan on trying this week. More to come.
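
    For anyone curious what "brute force merging" means in practice, the core of it is just weighted averaging of checkpoints. Below is a minimal sketch of that idea; the file names and the 0.5 blend factor are placeholders, not the actual v0 recipe.

```python
import torch
from safetensors.torch import load_file, save_file

# Minimal linear merge of two Flux checkpoints: out = a*A + (1-a)*B.
# File names and the blend factor are placeholders, not the v0 recipe.
ALPHA = 0.5
model_a = load_file("model_a.safetensors")
model_b = load_file("model_b.safetensors")

merged = {}
for key, tensor_a in model_a.items():
    if key in model_b and model_b[key].shape == tensor_a.shape:
        # Blend in float32 on CPU to avoid GPU OOM on a ~12B-parameter model.
        blended = ALPHA * tensor_a.float() + (1.0 - ALPHA) * model_b[key].float()
        merged[key] = blended.to(tensor_a.dtype)
    else:
        # Keys missing from one model are carried over unchanged.
        merged[key] = tensor_a

save_file(merged, "merged.safetensors")
```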

    This was also a learning process for llama.cpp (GGUF) quantization and Schnell conversion. I have quantization down, but the Schnell conversion for v0 was just a simple merge of the Flux Dev-to-Schnell 4-step LoRA into the model. I plan on exploring new techniques for the conversion process in the future as well.
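
    Folding a conversion LoRA into the base weights is the same kind of brute-force operation: add the low-rank delta to each matching weight. The sketch below uses placeholder file names and assumes the LoRA keys line up with the base state dict as "<weight name>.lora_down.weight" / ".lora_up.weight" with an optional ".alpha"; real trainer formats often prefix and mangle the names, so a key-remapping step usually comes first.

```python
import torch
from safetensors.torch import load_file, save_file

# Fold a LoRA into base weights: W' = W + scale * (alpha / rank) * (up @ down).
# File names are placeholders; kohya- or diffusers-style LoRAs usually need
# their keys remapped to the base checkpoint's names before this loop.
SCALE = 1.0
base = load_file("flux_dev_merge.safetensors")
lora = load_file("dev_to_schnell_4step_lora.safetensors")

for key in list(lora.keys()):
    if not key.endswith(".lora_down.weight"):
        continue
    prefix = key[: -len(".lora_down.weight")]
    down = lora[key].float()                       # (rank, in_features)
    up = lora[prefix + ".lora_up.weight"].float()  # (out_features, rank)
    rank = down.shape[0]
    alpha = float(lora.get(prefix + ".alpha", torch.tensor(float(rank))))

    base_key = prefix + ".weight"
    if base_key in base:
        delta = SCALE * (alpha / rank) * (up @ down)
        base[base_key] = (base[base_key].float() + delta).to(base[base_key].dtype)

save_file(base, "flux_schnell_merge.safetensors")
```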

    For a list of LoRAs included in this model, please follow this Google Sheets Link.
