TalmendoXL - Uncensored SDXL
Beta v1.1
An attempt to uncensor SDXL and bias it more towards photorealism and non-professional images.
Should be just as good at non-nude images as SDXL, but they will look less professional and more like amateur photos with more natural lighting. That may or may not be to your preference.
Tips: use "natural light" for a natural look, "full body shot" works quite well, and camera directions are easy to prompt for with "from the side/front/below". (See the example sketch after the recommended settings below.)
Recommended:
Sampler: Heun, Euler or DPM++ 2M
CFG: 6-10.5
Steps: 30
Resolution: 1024x1024, 896x1152, 832x1216, 768x1344 and their inverses (same as SDXL base)
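A minimal diffusers sketch using these settings (the checkpoint filename is as downloaded from this page; the prompt follows the tips above):

import torch
from diffusers import EulerDiscreteScheduler, StableDiffusionXLPipeline

# Load the downloaded checkpoint and switch to the Euler sampler.
pipe = StableDiffusionXLPipeline.from_single_file(
    "talmendoxlSDXL_v11Beta.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# Recommended settings: CFG 6-10.5, 30 steps, one of the trained resolutions.
image = pipe(
    "candid photo of a woman, full body shot, on a beach, natural light, from the side",
    guidance_scale=7.0,
    num_inference_steps=30,
    width=1024,
    height=1024,
).images[0]
image.save("sample.png")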
Changelog
1.1
Much higher image quality, overall sharper images
More skin details
Greater variety of body types
Resources
Captioning done with: Talmendo/blip2-for-sd (github.com)
Description
Hugely improved image quality, less grain
Fewer artifacts like wrong limbs and bad hands
More varied body shapes
Comments (49)
Great job! I'm curious about your dataset: how many photos, how you did the captioning, the training time and on which GPU. Would you be willing to share the dataset for collaboration? I'll never share/release anything without your consent.
Greatz
Thanks! Sure, I can share some details. Training was done on a single 4090 using Kohya's default settings for SDXL. I tried a few different approaches and settled on a small number of images with very high-quality captions rather than many with bad captions.
The first run for 1.0 was done on only 55 images for ~8 hours (about 175 epochs) with manual captions, following this guide: "[STYLE OF PHOTO], photo of a [SUBJECT], [SUBJECT DETAILS], [POSE OR ACTION], [FRAMING], [SETTING/BACKGROUND], [LIGHTING], [CAMERA ANGLE], [CAMERA PROPERTIES]". All images were high-res and manually cropped using the same ratios the original SDXL was trained on.
The second run for 1.1 was done on 155 images, including the original ones, to introduce more variety and fix the grain issues in 1.0. Same cropping, this time with automated captions using a custom script that makes BLIP2 generate a caption as described above; they just needed some cleaning afterwards. It ran for ~4 hours.
@talmendo Thanks for the reply. I'll process and annotate some pictures; if you're interested, maybe you could add them to your training dataset 😉
@talmendo Very cool - are you able to share the values you used for each of your placeholders? I'd like to continue training in this format...
@MachineMinded I used the template by PromptGeek, so all credit to him for the captioning system. You can find details here: I spent over 100 hours researching how to create photorealistic images with Stable Diffusion - here's what I learned - FREE Prompt Book, 182 Pages, 300+ Images, 200+ Prompt Tags Tested. : StableDiffusion (reddit.com). I made it just a little simpler; one example would be:
[candid/professional/selfie], photo of a [woman/man] with [(very) long/short] [ginger/blonde/brunette] hair, [more features like clothing], [action/pose of subject], [close up/upper body shot/full body shot], [location], [natural/studio/soft lighting], [from front/below/side]
@talmendo Can you give an example for the captions?
Something like "professional photo of a dog, with a fluffy tail and black spots, standing on its hind legs, (no idea what you mean by framing), on a beach, with bright morning sunlight, from the side, with a canon blabla?
@talmendo Did you use blip2's visual question/answering to make it generate a particular template?
@MachineMinded Yes, exactly. I put the code on GitHub for anyone interested: https://github.com/Talmendo/blip2-for-sd - Not amazing and it needs some manual cleaning afterwards, but it works well enough.
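Roughly, the idea looks like this (a sketch only, not the actual repo code; the model id and the per-slot questions are assumptions, and only a subset of the template's slots is shown):

from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

# Assumed model id; the repo may use a different BLIP-2 checkpoint.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", device_map="auto"
)

# One VQA question per slot of the caption template; questions are illustrative.
SLOTS = {
    "style": "Question: Is this a candid, professional, or selfie photo? Answer:",
    "subject": "Question: Who or what is the main subject? Answer:",
    "framing": "Question: Is this a close up, upper body shot, or full body shot? Answer:",
    "lighting": "Question: Is the lighting natural, studio, or soft? Answer:",
    "angle": "Question: Is the photo taken from the front, side, or below? Answer:",
}

def caption(image_path):
    image = Image.open(image_path).convert("RGB")
    answers = {}
    for slot, question in SLOTS.items():
        inputs = processor(image, text=question, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=20)
        answers[slot] = processor.decode(out[0], skip_special_tokens=True).strip()
    return "{style}, photo of a {subject}, {framing}, {lighting}, {angle}".format(**answers)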
@talmendo This is awesome - thanks for sharing. I started messing with it last night after you mentioned you were able to use BLIP-2 to generate a certain template. I wasn't using it with diffusers though, yours is much better. I'm going to see if I can take your model and continue to train it with my own dataset.
@talmendo Did you do a fine tune of the model or did you train a LoRA and merge it in?
@MachineMinded It's a fine tune
do you use the refiner?
The refiner is really hit or miss, so only sometimes. It tends not to work well for NSFW images; if it does, then only at a low denoise strength of 0.1-0.2.
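For diffusers users, that translates to running the refiner as img2img at low strength; a minimal sketch, where "base.png" stands in for an image from the base model:

import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

refined = refiner(
    prompt="candid photo of a woman, natural light",
    image=Image.open("base.png"),
    strength=0.15,  # low denoise, per the comment above
).images[0]
refined.save("refined.png")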
Can I use this model with a1111?
Yes! Works the same as SDXL
Did you merge the refiner into this model as well? I really can't understand why they released the refiner separately instead of baking it directly into 1.0.
It's not merged, no; you have to use it separately if you want it. It works well without it, though.
In my experience the refiner only makes sense together with the base model at this point in time. With the trained models I don't miss the refiner.
my results look weirdly pixelated. does anyone know why this might be?
Are you using the default 1.0 VAE? 0.9's VAE looks a little better.
@MachineMinded using this one https://civitai.com/models/101055?modelVersionId=128080
@alex357 I would download the SDXL 0.9 VAE and use it, or find a model with that VAE baked into it.
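If you're on diffusers rather than the webui, swapping the VAE is one assignment; a sketch, where the VAE repo id is an assumption (an fp16-safe SDXL VAE) standing in for whichever VAE you prefer:

import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Assumed VAE repo; substitute the 0.9 VAE or any other SDXL VAE you downloaded.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_single_file(
    "talmendoxlSDXL_v11Beta.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.vae = vae.to("cuda")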
Same here, even with 0.9 VAE. Using Tiled VAE also makes it slightly worse
Just set the VAE to Automatic
I've found this effect happens when rendering lower than 1024 with this model
Lower than 1024 is generally not recommended for SDXL, this isn't different with this model. I recommend using the resolutions both SDXL and this model were trained on, mainly 1024x1024, 896x1152 and 768x1344
@talmendo Even at native 1024x1024 it looks bad for me. Not exactly pixelated, but more like the colors are dithered. This is trying with your included VAE and other SDXL VAEs as well.
@talmendo Looks like the Tiled VAE Fast Encoder was the issue for me; turning that off seems to help
@WZ_Gen where do you turn it off?
Tan lines!! How can I get rid of the tan lines that are in nearly all results???
I honestly think this is some kind of bias in SDXL. I created a NSFW fine-tune and only a few of my images have tan lines, but I sometimes get them even when I didn't ask. A negative prompt usually helps, and my images were tagged as such.
"tan lines" in the neg prompt should do it. All training images with them were tagged as such, so should be easy enough to remove them.
I actually find it's a remnant of underwear. Use [underwear] in the negative prompt.
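In diffusers terms, both suggestions are just the negative_prompt argument; a minimal sketch (checkpoint filename as downloaded from this page):

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "talmendoxlSDXL_v11Beta.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "candid photo of a woman on a beach, natural light",
    negative_prompt="tan lines, underwear",  # per the two suggestions above
    num_inference_steps=30,
).images[0]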
@talmendo Does not work.
@ice_fly Does not work.
Trying to run on the Draw Things app on an M1 Mac.
It works when I first import the model, but as soon as I quit and restart the app, it crashes anytime I try to run the model.
Hey, it's almost impossible for that to be this model specifically. Does normal SDXL work for you? There's no difference in how they run.
Try using ComfyUI if it works on Mac; ComfyUI manages RAM and VRAM better.
Weird, but I had the same issue… however it has also popped up on some other SDXL base models for me, especially if I try to run any LoRAs! Actual SDXL 1.0 seems to be more stable.
Draw Things on an M2 MacBook.
I'm able to produce images with this model in stable-diffusion-webui, but I can't seem to fine-tune with DreamBooth. I get this error when I try to load the model into DreamBooth:
Extracting checkpoint from /mnt/2TB_SSD/stable-diffusion-webui/models/Stable-diffusion/talmendoxlSDXL_v11Beta.safetensors
[2023-09-03 11:53:32,563][DEBUG][dreambooth.dataclasses.db_config] - Saving to /mnt/2TB_SSD/stable-diffusion-webui/models/dreambooth/EESDXL
Something went wrong, removing model directory
Traceback (most recent call last):
  File "/mnt/2TB_SSD/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/sd_to_diff.py", line 157, in extract_checkpoint
    pipe = download_from_original_stable_diffusion_ckpt(
  File "/home/larry/anaconda3/envs/ldm/lib/python3.9/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1370, in download_from_original_stable_diffusion_ckpt
    set_module_tensor_to_device(unet, param_name, "cpu", value=param)
  File "/home/larry/anaconda3/envs/ldm/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 255, in set_module_tensor_to_device
    new_module = getattr(module, split)
  File "/home/larry/anaconda3/envs/ldm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'ModuleList' object has no attribute '1'
Couldn't find /mnt/2TB_SSD/stable-diffusion-webui/models/dreambooth/EESDXL/working/unet
Unable to extract checkpoint!
Duration: 00:00:01
I'm running on a 4090.
Most definitely a problem with the extension, not the model. I'd recommend trying their GitHub for support. Make sure it supports SDXL.
Doesn't work with an 8 GB Nvidia card (3070), too bad.
CUDA error.
Most likely some setting that's not right; it works fine on my 8 GB 1070.
This is just a finetune of SDXL, so if that runs for you, this will too.
Works great with the same config.
How can I load this model using the Hugging Face diffusers library?
Any answer?
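A minimal sketch, assuming the single-file loader in a recent diffusers version (the filename is the checkpoint downloaded from this page):

import torch
from diffusers import StableDiffusionXLPipeline

# Load the .safetensors checkpoint directly, without converting to diffusers format.
pipe = StableDiffusionXLPipeline.from_single_file(
    "talmendoxlSDXL_v11Beta.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe("candid photo of a man, natural light, from the front").images[0]
image.save("out.png")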
So far SDXL doesn't convince me; it blurs the backgrounds too much.
Sorry for my English. For my purpose this model is genius, thank you.