** I've had reports that CosXL is still not supported in A1111 or Forge. I can confirm that it does work in ComfyUI and in reForge.
Thank you to @stamp for this information:
** To import the model into the "Draw Things" app, set the model objective to v-prediction, but ALSO enable conditioning noise AND set noise discretization to DPM.
This is a merge of the awesome BigLove-XL and my GTM_UltimateBlend_XL v2.5_CosXL models. It brings great contrast, colour and creativity to an NSFW model to deliver some stunning results. It is also one of the few model types that can create great night-time images.
It is also capable of SFW, but you may still see the occasional unprompted nudity.
Works best with DPM++ 2M SDE/Karras or DPM++ SDE/SGM Uniform, depending on your taste.
~30 steps is good for the initial image, and similar or higher for low-denoise upscales.
Keep the CFG low; around 3 or 4 should be enough, but again, this is personal taste.
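The recommended settings above can be collected into a small sketch for scripting a workflow. This is illustrative only, under the assumption you are driving generation programmatically; the dictionary keys and the `upscale_settings` helper are made up for this example, not a real ComfyUI or diffusers API:

```python
# Hedged sketch of the settings recommended in the description.
# Key names are illustrative, not an actual API.
RECOMMENDED = {
    "sampler": "dpmpp_2m_sde",  # DPM++ 2M SDE (or DPM++ SDE)
    "scheduler": "karras",      # or "sgm_uniform", to taste
    "steps": 30,                # ~30 steps for the initial image
    "cfg": 3.5,                 # keep CFG low, around 3-4
}

def upscale_settings(base, denoise=0.35):
    """For low-denoise upscales, use a similar or higher step count."""
    settings = dict(base)
    settings["steps"] = max(base["steps"], 30)
    settings["denoise"] = denoise
    return settings

print(upscale_settings(RECOMMENDED)["steps"])
```

In ComfyUI these values map onto the KSampler node's `sampler_name`, `scheduler`, `steps`, `cfg` and `denoise` inputs.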
Comments (16)
How in the world does it work!? All I get is pixel noise (with all the suggested settings like DPM++ 2M, etc.). Is this for Comfy only?
I am having the same issue, can't figure out what is causing it.
Based on a discussion on one of his other models, and the fact that when I load the data from one of his posted images it shows the model name as "GTM_UltimateLove_CosXL", this model has CosXL in it, which does not work in Automatic1111 or Forge.
I've been using it in reForge. I had no idea it still didn't work in A1111 or forge😞
Please add a note that this model doesn't work in Automatic1111.
I haven’t really used the model yet, just a test image to see if it would work correctly. Since it did, I’m going to paste the instructions I wrote for the CosXL GTM: to import it in Draw Things, it has to be imported with model objective v-prediction, but ALSO conditioning noise AND noise discretization DPM. As the model is new and a little finicky to get working, I figured it’s more helpful to write how to make it work than to review the model’s qualities. :)
Thank you! I'll add that to the description. :)
All I get is messy pixels.
Did you read the description about supported UIs?
I'm using this in Comfy in a 2-stage swap (Pony, then an XL late pass). And all I can say is: what an adventure (zero prompt cohesion, but wow). I have no idea how to control this, so I may need to work out what works standalone before employing it. Thank you. Edit: SGM Uniform tamed the wild beast.
What samplers can we use in the Draw Things app? I am also using a DMD2 4-step LoRA.
I think that LoRA requires the LCM sampler.
DMD2 likes the “trailing” samplers; usually Euler A Trailing is the most compatible. As far as I know, Draw Things is either the only one with trailing (and AYS) samplers, or they’re named differently elsewhere, so these are kinda tricky to get help with, as it’s unlikely that people outside of the Discord will know what you are talking about. Also, did you read my previous post about importing it in DT? You might just have imported it in a way that doesn’t work. :)
@stamp what are the settings for the DMD2 4-step LoRA with this model?
@Kitten123 I’m lazy af, so my settings for most stuff are the same: DMD2 4-step at 45%, Hyper 4-step at 45%, Euler A Trailing, 8 steps. Guidance 3.5 (so it works with SD 3.5 too lmao), clip skip 1, as I’ve never seen it do anything good otherwise. Shift 1.0, and I prefer to use the scale-alike seed mode.