This model is an Unstable Diffusion community project and wouldn't have been possible to train without their generous help and resources. A heartfelt thank you to them and everyone who's contributed their data, time, and experience!
You can prompt unlimited generations with this model on mage.space
Please check the 'About this version' tab to the right for recommended positive and negative prompts and some training details.
Some advice ↓
This was trained at 768x768 with aspect ratio bucketing, so resolutions below 768x768 will likely give you bad results that I can't support with this model. Try 768x768, 768x960, and 960x768 as your standard resolutions.
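To illustrate what aspect ratio bucketing means here: during training, images are grouped into a fixed set of resolutions that all have roughly the same pixel area as the 768x768 base. The sketch below is a toy version of that idea (an assumption about how buckets are typically chosen, not EveryDream 2's actual algorithm) — dimensions in multiples of 64 whose area stays at or below 768*768. That's also why landscape/portrait sizes like 960x768 work fine at inference even though 512x512 doesn't.

```python
# Toy sketch of aspect-ratio bucketing around a 768x768 base.
# Assumption: buckets are width/height pairs in multiples of 64
# whose pixel area stays at or below 768*768. Not the trainer's
# real bucket list, just the concept.
def bucket_resolutions(base=768, step=64, max_ratio=2.0):
    """Return sorted (width, height) buckets near base*base pixels."""
    target_area = base * base
    buckets = set()
    for w in range(step, int(base * max_ratio) + 1, step):
        # largest height (multiple of `step`) keeping area <= target
        h = (target_area // w) // step * step
        if h >= step and max(w, h) / min(w, h) <= max_ratio:
            buckets.add((w, h))
            buckets.add((h, w))  # mirror for portrait/landscape
    return sorted(buckets)

for w, h in bucket_resolutions():
    print(f"{w}x{h}")
```

Running this prints pairs like 640x896, 704x832, and 768x768 — all close to the training area, which is the zone you want to stay in when prompting.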
If you're having trouble keeping the face in shot, try mentioning their gaze ('looking at viewer', 'looking afar', 'looking to the side', etc.), facial features like their eyes, hairstyle, or expression, or putting 'head out of frame' in the negative prompt.
This model won't do well with most slang terms, as they likely weren't in the tagging. And be careful recycling old prompts: explore the effect of each tag, or at least remove weighting, and consider starting from a sample image to get a feel for it.
There were no watermarks in the data set, so 'watermark', 'artist's name', etc. are meaningless. You don't need to beg it for 'high detail', 'photorealism', '8k', 'uhd', etc. — it does that right out of the box. Save yourself some token space and prompt for what you want to see.
If you use this model as part of a mix or host it on a generation service, please mention and link back to this page (especially if you're making money off of it).
Description
The UNet and CLIP in v1.0 were detected as broken in the Toolkit extension. This version doesn't have that error. Every sample image has been recreated. There's very little difference, if any, but this is the cleaner model. Sorry about that.
Recommended (please note, these have changed since version 1.0)
Positive: masterpiece realistic, best high quality
Negative: (worst simple background, jpeg artifacts, bad anatomy, anime, digital illustration, 3d rendering, text, overexposure:1.1)
~8k images were captioned using a combination of styles. The core images used what I call 'consolidated booru tagging', i.e. 'blonde hair, very short hair, undercut' becomes 'very short blonde undercut hair' or some variation of that. Another portion was manually captioned with sentences followed by tags for details the sentence couldn't capture; these were meant to be tight but descriptive. The final portion used auto booru tagging with a rather large exclusion list. And finally, an intrepid member of the internet passed on hand-tagged images of dongs, which were included. A deep thank you for that.
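The consolidation step above can be sketched as a tiny tag-merging function. This is a toy illustration of the idea, not the author's actual tooling — the tag list, ordering rules, and function name are all made up for the example:

```python
# Toy illustration (NOT the actual captioning tooling): merge
# booru-style "<modifier> hair" tags plus bare style tags into one
# consolidated phrase, per the 'consolidated booru tagging' idea.
def consolidate_hair_tags(tags):
    modifiers, styles = [], []
    for tag in tags:
        if tag.endswith(" hair"):
            modifiers.append(tag[: -len(" hair")])
        else:
            styles.append(tag)
    # hypothetical ordering: length/cut words first, then color
    order = {"very short": 0, "short": 0, "long": 0, "blonde": 1}
    modifiers.sort(key=lambda m: order.get(m, 2))
    return " ".join(modifiers + styles + ["hair"])

print(consolidate_hair_tags(["blonde hair", "very short hair", "undercut"]))
# -> "very short blonde undercut hair"
```

The point of consolidating like this is that the model learns one natural-language phrase per concept instead of several fragmented tags competing for the same tokens.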
It was trained for ~50 epochs using the EveryDream 2 fine-tuner at a base resolution of 768 (so lower resolutions aren't recommended when prompting) and clip skip 1. The majority of settings were left at default.
FAQ
Comments (16)
I'm using fooocus for image generation but it only supports SDXL base models. Is there any other free software to use this model? (I'm new to the field 😄)
I always use AUTOMATIC1111 but there's tons of options out there, search around
Use an SDXL base model and use this one as the refiner. Put the refiner switch at 0.4. Fooocus supports SDXL and SD1.5 models as refiners. Haven't tried it with this one yet, but it works perfectly with other SD1.5 models.
Can't get your base model to run on Civit.
Sorry, but I'm not sure why that would be the case. I have "Use on Civitai generation service" checked in the permissions.
@gaydiffusion I've tried hitting the remix button next to one of your images and it won't use your model. I've even clicked the swap model button and typed Unstable Homoerotic Diffusion in the search bar, and it doesn't even bring up your model.
@Bobit Looks like it's an issue of not having enough compute to go around at the moment, but I did request UHD get added in DMs. I'll give a holler when it's up, but you may notice before I do lol. Sorry about that.
Looks like it's back up for on-site generation!
@HovercraftHot6861194 Yep, we're good to go for the most part. Unfortunately, it's not the resolution it was trained at, so the output might be questionable. This is a 768x768 base model, not 512. Still, progress is progress, and I'm told there's something in the development pipeline to change resolutions. Sorry for all the delays on this, guys lol
@gaydiffusion ye I'm trying it now, but the output isn't the best. I hope they do come up with a way to change resolution.
I'm taking this down from generation on CivitAI for now as the ability to change the resolution to 768x768, which it trained on, has yet to materialize—sorry guys. I'll keep my ear to the ground and turn it back on when that's ready.
What is the difference between v1.11 and v1.0?
"The UNet and CLIP in v1.0 were detected as broken in the Toolkit extension. This version doesn't have that error. Every sample image has been recreated. There's very little difference if any but this is the cleaner model. Sorry about that."
@gaydiffusion You don't have to be sorry!
Thank you for your answer
@gaydiffusion Then, is it better to download v1.0 instead of v1.11?
@Tiger_Rump_Sniffong You should use v1.11