Matrix Lightning
Matrix Lightning is a merge of Mangled Merge Matrix into the SDXL Lightning 4-step model, released upon request. I recommend the following settings depending on the sampler you want to use.
euler_ancestral_cfg_pp_beta:
CFG Scale: 1
FreeU_V2:
b1: 1
b2: 1
s1: 1.3
s2: 1.3
heunpp2/deis:
CFG Scale: 2
DynamicThresholdingFull:
mimic scale: 1.01
threshold percentile: 1
mimic mode: Power Down
mimic scale min: 1
CFG mode: Power Down
CFG scale min: 1.3
sched val: 1.5
separate feature channels: Disable
scaling startpoint: ZERO
variability measure: STD
interpolate phi: 1
Perturbed Attention Guidance:
Scale: .5
FreeU_V2 (heunpp2):
b1: 1.25
b2: 1.25
s1: .5
s2: .5
FreeU_V2 (deis):
b1: 1
b2: 1
s1: 1.3
s2: 1.3
Matrix
Mangled Merge XL Matrix is the newest result of merge experiments for the Mangled Merge family. This version has 3850 total loras merged, 1050 more than the last one. The main focus was to turn Magic, which was 2D focused, into a more photorealism-based model.
I used 3 different combinations of Dare/TIES block merging for this one.
Concepts:
Concepts were merged into the following blocks:
Label; input blocks 2, 7, and 9; middle blocks 0 and 1; output blocks 2, 4, and 9; and out, all at strength 1. Input blocks 8 and 10, middle block 2, and output blocks 1 and 3 at .25 strength. TIES at "sum" and method at "cosine".
Styles:
Styles were merged into the following blocks:
Label; input blocks 5, 7, 8, and 9; middle block 0; output blocks 0, 1, and 2; and out, all at strength 1. TIES at "sum" and method at "cosine".
Text:
I don't think this improved too much but it was an experiment I wanted to test.
Merged into label; input blocks 4, 5, 7, 8, 9, and 10; middle block 0; output blocks 0, 1, 2, 4, and 5; and out, all at strength 1. TIES at "sum" and method at "cosine".
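The per-block strengths above amount to a lookup table applied over the model's state dict. Below is a minimal sketch of that idea for the "Concepts" merge, not the actual Dare/TIES node logic; the key names follow common SDXL UNet state-dict conventions and the `CONCEPT_STRENGTHS` table and helper functions are my own illustrative assumptions.

```python
# Hypothetical per-block strengths for the "Concepts" merge described above.
# Key prefixes are assumptions based on typical SDXL UNet state-dict naming.
CONCEPT_STRENGTHS = {
    "label_emb": 1.0,
    "input_blocks.2": 1.0, "input_blocks.7": 1.0, "input_blocks.9": 1.0,
    "input_blocks.8": 0.25, "input_blocks.10": 0.25,
    "middle_block.0": 1.0, "middle_block.1": 1.0, "middle_block.2": 0.25,
    "output_blocks.1": 0.25, "output_blocks.2": 1.0, "output_blocks.3": 0.25,
    "output_blocks.4": 1.0, "output_blocks.9": 1.0,
    "out.": 1.0,  # trailing dot so "output_blocks..." keys don't match it
}

def block_strength(param_name, table, default=0.0):
    """Look up the merge strength by the longest matching block prefix."""
    best, best_len = default, -1
    for prefix, strength in table.items():
        if param_name.startswith(prefix) and len(prefix) > best_len:
            best, best_len = strength, len(prefix)
    return best

def blockwise_merge(base, donor, table):
    """Move each base parameter toward the donor by its block strength:
    merged = base + strength * (donor - base)."""
    return {name: w + block_strength(name, table) * (donor[name] - w)
            for name, w in base.items()}
```

At strength 1 a block is taken fully from the donor merge; blocks not listed stay at the base weights.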
Interestingly enough, with all the merging, it still manages to do the 2D that was merged in with the Magic version quite well. The smoothing process after every 50 lora merges was still different every time, but I learned a few techniques that make the process a little more understandable.
I noticed the strength of the lora in the merging process also matters a lot. At model/clip strength 1 the effect is very weak, so for this version I switched to model strength 7 and clip strength 6. Clips were merged with Dare/TIES at .2 strength using the slerp method, although I think the clip may work better merged with the gradient method, which I will test for the next version.
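For reference, "slerp" is spherical linear interpolation. A minimal sketch of what it does to two flattened weight vectors (this illustrates the math only, not the merge node's implementation):

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two flat weight vectors.
    Falls back to plain lerp when the vectors are nearly parallel."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel: lerp is numerically safer
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    k0 = math.sin((1 - t) * theta) / math.sin(theta)
    k1 = math.sin(t * theta) / math.sin(theta)
    return [k0 * a + k1 * b for a, b in zip(v0, v1)]
```

Unlike a plain weighted sum, slerp preserves the magnitude of unit-norm vectors, which is why it is often preferred for interpolating between text-encoder weights.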
I think that covers everything.
Enjoy!
Magic:
Magic is the first of a two-part experiment in making specialized models. To make Magic, I took Mangled Merge XL v4.0 and merged 500 2D-based style loras (2800 total now) into it using MBW Dare/TIES with 50% into input block 8 and output block 0, and 100% into output block 1. TIES was set to "count" and gradient was the method used. This first version was focused on 2D, while the next model will be focused on realism. I plan to merge the two together afterwards for Mangled Merge XL v5.0.
This model does well within a CFG range of 2-7. For more realism, go for the lower ranges, while the higher ranges work better for 2D, machines, or more complicated prompts. You will also see better outputs when putting styles first in your prompt.
Enjoy!
V4:
This update has 2300 loras merged in total. That's 500 more than V3. I didn't smooth this last one quite as much which has led to a little bit more bias towards photo/hyper realism. I decided not to smooth for styles in this one as I really enjoyed seeing the difference and added detail in some of the outputs.
The model seems a little more sensitive now, but also more powerful and random from my early tests. Unless you're trying anime styling, it's best to steer clear of booru tags like 1girl or score_whatever. Also, the model seems to give the most attention to the beginning tags, so you can use them for styles, although it is more difficult to get illustrations in this one. It seems more versatile in other ways now, however. Some CFG scales have changed for certain subjects, and I highly recommend using tools like AYS, FreeU, and PAG to really get the most out of your outputs.
Enjoy!
V3:
It is my pleasure to introduce version 3.0, the next iteration of the Mangled Merge XL series. I've spent some time looking into the DARE/TIES model merging method and holy moly what a difference! This model is a continuation of Mangled Merge XL V2.0 with an additional 600 loras merged in with the DARE/TIES method making this model clock in at a total of 1800 loras! Once I get a better grasp on a reliable smoothing method, I am thinking about starting from the beginning and remerging with a more improved (and automated) workflow.
The model excels in just about all styles; output is much improved from V2.0. Furry animals are fixed, 1girl and booru tags work, and the model is really good at pixel art as well. Whatever style you are looking for, it helps to put it at the very beginning of your prompt. The first 3 example images show what I mean in their respective prompts. CFG scale is good between 1.5 and 6.5.
The main method for merging loras was done in ComfyUI with the help of the ComfyUI-DareMerge nodes by 54rt1n. I added attn_only normalization to the lora+base model and used that as an attention gradient along with a magnitude masker between the base and model B to perform an Advanced/DARE merge at 0% drop with 2 iterations and rescale on. Then I injected gaussian noise into the base model with ratio: .98, mean: .05, and std: .01 and merged that into the first merge with a Block/DARE merger node and using the same magnitude masker with a .9 drop rate, rescale off, 1 iteration, only to the input block. I also replaced the clip from version 2.0 with the clip from the base and didn't change it during the merging process which makes for much better prompt adherence. This method allowed me to merge about 180 models before I started to see anything wonky happening.
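The core DARE idea referenced above (randomly drop a fraction of the delta between the tuned model and the base, then rescale the survivors so the expected change is preserved) can be sketched like this. This is a simplified illustration with made-up parameter dicts, not the ComfyUI-DareMerge node's actual masking and iteration logic:

```python
import random

def dare_delta(base, tuned, drop_rate, rescale=True, seed=0):
    """DARE sketch: drop a random fraction of each per-parameter delta
    (tuned - base), rescale survivors by 1/(1 - drop_rate), then add
    the surviving deltas back onto the base."""
    rng = random.Random(seed)
    keep = 1.0 - drop_rate
    merged = {}
    for name, b in base.items():
        delta = tuned[name] - b
        if drop_rate > 0.0 and rng.random() < drop_rate:
            delta = 0.0  # this parameter's change is dropped entirely
        elif rescale and keep > 0.0:
            delta /= keep  # rescale so the expected delta stays constant
        merged[name] = b + delta
    return merged
```

At a .9 drop rate only about 10% of the changes survive, each scaled up 10x, which matches the "high drop, rescale" regime described for the noise-injection merge above.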
The smoothing process was done by hand every 150 merges. I haven't found a method I can use every time for automation just yet, but I hope to have one by the next version. For the most part, the smoothing method consists of taking the starting model (0), the model 1/3 in (50), the model 2/3 in (100), and the last merge (150), doing a simple merge of each one with 10% of the base to smooth, and then merging the 4 together with the Attention/DARE node in various combinations and weights until I found the most balance in test runs. The order in which the models were merged and the weights were different with every 150 merges.
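The per-snapshot smoothing step is just a weighted sum with the base at 10%. A minimal sketch, where `smooth_snapshots` and the snapshot list are my own naming, not the author's tooling:

```python
def weighted_sum(a, b, alpha):
    """Per-parameter weighted sum of two checkpoints: alpha*a + (1-alpha)*b."""
    return {k: alpha * a[k] + (1 - alpha) * b[k] for k in a}

def smooth_snapshots(snapshots, base, base_share=0.10):
    """Blend `base_share` of the base model into each merge snapshot
    (e.g. the models at merge 0, 50, 100, and 150 in the text above)."""
    return [weighted_sum(m, base, 1.0 - base_share) for m in snapshots]
```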
Align Your Steps was used on the scheduler during tests and for all example images.
Enjoy!
V2:
I am happy to introduce Mangled Merge XL version 2.0 to the Civitai community. This new version includes 1200 loras that have been carefully merged in to "train" the model. That's 380 more than the previous Dark version! Mangled Merge XL 2.0 also includes Hillside, Clarity, Colorful, Realistic Stock Photo, Pixelwave, Realism from Hades, Pilgrim 2D, and Crystal Clear.
Usage:
Mangled Merge XL does well with a CFG range between 2 and 7. There are some tokens that look much better with a low CFG score, but you can also go between 5 and 7 for the model to follow your prompt more closely if it is more complicated.
The model tends to stick more to realism but can do paintings and illustrations; they're just a little harder to get in its current state. I want to balance it out more, but I thought 1200 was a nice number to release the model with. There was/is no goal to make this model lean one way or another, as I am mostly working towards prompt adherence and aesthetics.
Mangled Merge XL 2.0 also likes full-body, long-shot compositions. It does close-up portraits, but if you don't specify, it will tend to output closer to full-body or further-away pictures. Luckily, eyes and faces come out great in this model even with distance shots. Hands are decent but not perfect, but you can always touch them up with some inpainting.
If you are going to use strong weights, I suggest lowering the CFG or else outputs may look a little fried. Also, "fur" and "1girl" can give weird outputs: fur needs a low CFG, and 1girl can give unreliable outputs, so it's better to use "a woman" or something along those lines. The model is prompt sensitive, so it will output what is in your prompt and, for the most part, won't output what's not in your prompt. So if you don't specify a background, it is likely to give you a white or otherwise plain background.
Disclaimer:
If you are trying to reproduce the example images I have provided, note that I am using the AI Diffusion extension for Krita, which has some generation options that can't be added to the Civitai prompt information. Civitai only allows whole numbers in the CFG scale inputs, but some of these images use values like 6.5 or 5.2, so you may need to dial the CFG scale in to get the exact output. Civitai also doesn't have an option for the sampler I used, Turbo/Lightning Merge - DPM++ SDE, which requires the 8-step lora. All images were produced with 12 steps rather than 8, however, because I thought it gave better outputs. All images were also upscaled with an additional refining process.
I think that covers about everything. Have Fun!
V1.1 Dark
Quick update with an additional 80 loras to give a little more strength to tokens like "chiaroscuro lighting" and "low key" for more dramatic lighting in the output images.
V1:
Mangled Merge XL is the product of an experiment to see if it was possible to "train" a base model by merging loras into it and what method to use that prevents the model from breaking. After merging approximately 760 loras and 3 trained models (Cinevision, Juggernaut, and HaveAll), the model is still stable for more merges and gives beautiful outputs.
Mangled Merge XL tends to do well with a CFG score between 3 and 8. The goal was to make it a good general use model made with a broad range of loras that I thought looked pretty cool. With that said, there is nothing explicit in here as far as adult material. There is nudity but that's about it. If you would like to have a more NSFW focus, feel free to merge it with your favorite NSFW models.
The model also doesn't do very well with strongly weighted tokens or crazy resolution sizes. Sticking to the native SDXL resolutions will give nice results however.
Note: All sample images were hit with a refined upscaling process.
Have fun! Any feedback is welcome.
Merging process:
There isn't anything too complicated or scientific that went into the process. It was basically trial and error to see what the best results were and how to keep everything stable enough to continue merging without the model breaking. I didn't go crazy with block weights or anything like that, but feel free to build upon the following process to produce even better models!
1. Merge 10 loras into an SDXL 1.0 base model using the Merge Lycoris tool in Kohya_ss. 50% weight is usually fine here but feel free to adjust.
2. Take the 10 lora model and merge it back into the SDXL base model using the Merge Checkpoint feature in Auto1111 to smooth out anything that went awry during the merging process in step 1. A lora will hold its style throughout the merging process and can also have strong weights, so after merging 10 loras, your model might give broken outputs. This step smooths out those crazy weights to keep the model from breaking. Nothing fancy is needed here. I used weighted sum, and the weights were generally 10 Lora 65% / SDXL Base 35%, but sometimes after testing the outcome, I would move the weight for the 10 lora model down depending on how crazy the model got during the 10 lora merging process.
3. Make a copy of the smoothed model. Hold on to one and merge 10 new loras into the 2nd copy.
4. Smooth out the new 20 lora model using the instructions from step 2 and then merge the 2nd smoothed model (60%) with the 1st smoothed model (40%) to make your first Amalgam.
5. Copy the Amalgam and merge 10 loras into it. Continue the process in steps 1-4 until you get your 2nd Amalgam.
6. Merge Amalgam 2 (60%) with Amalgam 1 (40%) to make your first Amalgamerge.
7. Continue the process in steps 1 - 6 until you get your second Amalgamerge.
8. Merge Amalgamerge 2 (60%) with Amalgamerge 1 (40%) to make Amalgamerge 3.
9. Continue until you get another Amalgamerge and merge that with Amalgamerge 3.
10. And so on...
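The steps above can be sketched as a small recursion over weighted-sum merges. Here `merge_10_loras` stands in for the Kohya_ss "Merge Lycoris" step (step 1) and is supplied by the caller; the 65/35 smoothing and 60/40 combine ratios come from steps 2 and 4. This is an illustration of the loop's structure, not working checkpoint-merging code:

```python
def weighted(a, b, wa):
    """Weighted-sum merge of two checkpoints (name -> weight dicts)."""
    return {k: wa * a[k] + (1 - wa) * b[k] for k in a}

def smooth(model, base, model_share=0.65):
    """Step 2: pull the lora-heavy model back toward the SDXL base (65/35)."""
    return weighted(model, base, model_share)

def amalgam_round(base, merge_10_loras):
    """Steps 1-4: two smoothed 10-lora merges combined 60/40 into an Amalgam."""
    first = smooth(merge_10_loras(base), base)    # steps 1-2
    second = smooth(merge_10_loras(first), base)  # step 3, then smoothed again
    return weighted(second, first, 0.60)          # step 4

def amalgamerge(base, merge_10_loras):
    """Steps 5-6: two Amalgams combined 60/40 into an Amalgamerge."""
    a1 = amalgam_round(base, merge_10_loras)
    a2 = amalgam_round(a1, merge_10_loras)
    return weighted(a2, a1, 0.60)
```

Each level of the recursion (Amalgam, Amalgamerge, and so on) repeats the same 60/40 pattern one layer up, which is what keeps the accumulated lora weights from ever dominating the base.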
Every now and then, things can go a little more awry than expected. If that happened, I would merge with a select trained checkpoint in order to try and fix some tokens that were getting too wacky.
Comments
Super fast model. Most images render in less than 15 seconds on a 3080 (no high-res fix). I really like your output images but am struggling to even come close to your results. I'm using ForgeUI with your settings and seeds. Most times the image is blown out or very noisy. I have tried changing samplers and settings with little success. Do you have any suggestions? Thx
Edit: after a lot of testing I get good image quality using Euler a_DDIM with CFG 2-3 and steps 15-20, but still cannot fully replicate your results.
I'm not too familiar with ForgeUI as I am mostly using Comfy. But I created the images with a few different setups. Both used the FreeU_V2 node, but I have different settings for euler_ancestral_cfg_pp_beta vs heunpp2 and deis. With just the FreeU setup I used the euler_ancestral_cfg_pp_beta sampler and the beta scheduler at CFG 1.
However, I also have some images using the heunpp2 or deis samplers. To use these, I added a Dynamic Thresholding Full node and a Perturbed Attention Guidance node and set the CFG scale to 2. The full settings for both setups are the ones listed at the top of this description.
@pmango300574 Thank you. Will do some more testing. May even install Comfy UI....or not. The interface doesn't work well with my brain. Forge is currently being updated. Hopefully a new version will magically fix my problems. :-) Will let you know if I make progress. Really like the model though. Super fast and flexible. Great work.
@PCsecure Anytime! I was looking into it and it seems Forge has FreeU and Dynamic Thresholding. I don't know about Perturbed Attention Guidance though.
@pmango300574 Yes it does, and I entered all your settings and did some more testing. Your model is driving me nuts. I can see beautiful possibilities as it renders, and wham, at the end it gets noisy. I have been playing with the different settings, but with so many to choose from I take two steps back for every one forward.
I broke down and installed Comfy. Dragged in your boardgame image, updated all the nodes - except - and there is always an except :-) Comfy manager couldn't find the Get_Model node. It found all the others, but without Get_Model I can't run it. Aaaarrrgh. Oh so very close. Any idea where I can get the missing node? Also need Get_Node and Set_Node apparently. Manager can't find them either and I am a 100% newbie with Comfy. Not giving up though. I want to find a way to make this model work for me. The bad images I am getting tease at what might be if I can just make it work. Appreciate your guidance so far though. Thx
Think I found it. Maybe https://github.com/bmad4ever/comfyui_bmad_nodes.git but when I try to install via github url I get the error "This action is not allowed with this security level configuration." This is the two steps backward part of the adventure. I am using Stability Matrix so maybe that has something to do with it? Or not. Sigh.
Edit: changed the config.ini file to get past the security issue. Installed bmad4ever but obviously not the right one as Get_Model is still red and I get a warning the node is missing. Think I will go walk the dog...and hit my head against a tree for a while :-)
@PCsecure The Get_Model/Set_Model nodes aren't really necessary. My workflow is a mess because I'm always changing it on the fly. But those nodes can be found here:
GitHub - kijai/ComfyUI-KJNodes: Various custom nodes for ComfyUI
@pmango300574 You rock. We have progress. Node installed successfully. I can now actually generate an image in Comfy. Two new warnings I still need to deal with. Checkpoint save model isn't connected to anything. And Sampler name in Image Saver is an issue for some reason (node has a red border). Still, major leap forward. Thx. Am still testing setting combinations in ForgeUI. There has to be a way to make the model work there. Will keep you updated. Will also send you some buzz as soon as I figure out how :-) Thank you for all your help. Greatly appreciated.
@PCsecure Neither of those nodes is needed. The checkpoint save node is what I use to save a merge; it's not needed for image generation. And the sampler name for Image Saver is so that my generated images have the sampler used in their metadata. Also not necessary if you're just trying to generate some images to test.
@pmango300574 Yeah figured that out. Still teasing though. Seeing some great compositions that hint at interesting interpretations of my prompts but Comfy is like walking into a jumbo jet cockpit. Waaaay too many settings :-) Going to be a steep learning curve to know what to adjust to get what I want. Right now either blowing out the image or underdeveloping it. Guess I know what I am studying over the next month.
Edit: How do you decide on FreeU, Dynamic Thresholding, etc. settings? I'm having a hard time finding a good tutorial on the subject. Also, I tried to take one of your images to practice with while blindly fiddling with unfamiliar settings, and I was able to replace the frog on the flower with a dragonfly on a sunflower, but it lacked the "crispness" of your image.
Edit#2: model is very good for animals. I don't know comfy well enough to add Adetailer and multiple LORA nodes unfortunately so posting what turned out well. Would appreciate any advice on how to improve sharpness and fix faces. Do you have another workflow you can share ? :-)
Possibly the fastest model I have used. Some great results. Many thanks for your hard work.