CivArchive
    TalmendoXL - SDXL Uncensored Full Model - v1.1-Beta
    NSFW

    TalmendoXL - Uncensored SDXL

    Beta v1.1

An attempt to uncensor SDXL and bias it more towards photorealism and non-professional images.

It should be just as good at non-nude images as base SDXL, but they will look less professional and more like amateur photos with more natural lighting. This may or may not be to your preference.

Tips: Use "natural light" for a natural look; "full body shot" works quite well; and camera directions are easy to prompt with "from the side/front/below".

    Recommended:

    Sampler: Heun, Euler or DPM++ 2M

    CFG: 6-10.5

    Steps: 30

    Resolution: 1024x1024, 896x1152, 832x1216, 768x1344 and their inverse (Same as SDXL Base)
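The recommended settings above can be captured in a small sketch (the dict and helper are mine for illustration, not part of any tool):

```python
# Recommended generation settings for TalmendoXL v1.1-Beta, as listed on
# the model card above. Names here are illustrative only.
RECOMMENDED = {
    "samplers": ["Heun", "Euler", "DPM++ 2M"],
    "cfg_range": (6.0, 10.5),
    "steps": 30,
}

# SDXL base training resolutions (width, height) and their inverses.
_BASE = [(1024, 1024), (896, 1152), (832, 1216), (768, 1344)]
TRAINED_RESOLUTIONS = _BASE + [(h, w) for (w, h) in _BASE if w != h]

def is_trained_resolution(width: int, height: int) -> bool:
    """True if (width, height) matches a resolution the model was trained on."""
    return (width, height) in TRAINED_RESOLUTIONS
```

Sticking to these resolutions matters because, as discussed in the comments below, rendering below 1024 tends to produce pixelated results.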

    Changelog

    1.1

    • Much higher image quality, overall sharper images

    • More skin details

• Greater variety of body types

    Resources

    Captioning done with: Talmendo/blip2-for-sd (github.com)

    Description

    • Hugely improved image quality, less grain

• Fewer artifacts, such as wrong limbs and bad hands

• More variety in body shapes


    Comments (49)

svgeek · Aug 2, 2023 · 3 reactions
CivitAI

Great job! I'm curious about your dataset… how many photos, how you did the captioning… the training time and on which GPU… Would you want to share the dataset for collaboration? I'll never share/release anything without your consent…

    Greatz

talmendo
Author
Aug 2, 2023 · 12 reactions

Thanks! Sure, I can share some details. Training was done on a single 4090 using Kohya's default settings for SDXL. I tried a few different approaches and settled on a small number of images with very high-quality captions over many with bad captions.

The first run, for 1.0, was done on only 55 images for ~8 hours (about 175 epochs) with manual captions, following this guide: "[STYLE OF PHOTO], photo of a [SUBJECT], [SUBJECT DETAILS], [POSE OR ACTION], [FRAMING], [SETTING/BACKGROUND], [LIGHTING], [CAMERA ANGLE], [CAMERA PROPERTIES]". All images were high-res and manually cropped to the same ratios the original SDXL was trained on.

The second run, for 1.1, was done on 155 images, including the original ones, to introduce more variety and fix the grain issues of 1.0. Same cropping, this time with automated captions using a custom script that makes BLIP2 generate a caption as described above; it just needed some cleaning afterwards. It ran for ~4 hours.

svgeek · Aug 2, 2023

@talmendo Thanks for the reply! I'll process and annotate some pictures; if you're interested, maybe you could add them to your training dataset 😉

MachineMinded · Aug 3, 2023

    @talmendo Very cool - are you able to share the values you used for each of your placeholders? I'd like to continue training in this format...

talmendo
Author
Aug 3, 2023 · 4 reactions

@MachineMinded I used the template by PromptGeek, so all credit to him for the captioning system. You can find the details here: "I spent over 100 hours researching how to create photorealistic images with Stable Diffusion - here's what I learned - FREE Prompt Book, 182 Pages, 300+ Images, 200+ Prompt Tags Tested." (reddit.com). I made it just a little simpler; one example would be:
    [candid/professional/selfie], photo of a [woman/man] with [(very) long/short] [ginger/blonde/brunette] hair, [more features like clothing], [action/pose of subject], [close up/upper body shot/full body shot], [location], [natural/studio/soft lighting], [from front/below/side]
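Filled in mechanically, that template might look like this (a hypothetical sketch; the field names and example values are mine, not from the training set):

```python
# Hypothetical sketch of filling the caption template described above.
# Field names and example values are illustrative only.
CAPTION_TEMPLATE = (
    "{style}, photo of a {subject} with {hair}, {features}, {action}, "
    "{framing}, {location}, {lighting}, {angle}"
)

caption = CAPTION_TEMPLATE.format(
    style="candid",
    subject="woman",
    hair="long blonde hair",
    features="wearing a red coat",
    action="walking",
    framing="full body shot",
    location="on a city street",
    lighting="natural lighting",
    angle="from the front",
)
# caption == "candid, photo of a woman with long blonde hair, wearing a red
# coat, walking, full body shot, on a city street, natural lighting, from
# the front"
```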

harp357100 · Aug 3, 2023

    @talmendo Can you give an example for the captions?

Something like "professional photo of a dog, with a fluffy tail and black spots, standing on its hind legs, (no idea what you mean by framing), on a beach, with bright morning sunlight, from the side, with a Canon blabla"?

MachineMinded · Aug 4, 2023

    @talmendo Did you use blip2's visual question/answering to make it generate a particular template?

    talmendo
    Author
Aug 4, 2023 · 2 reactions

@MachineMinded Yes, exactly. I put the code on GitHub for anyone interested: https://github.com/Talmendo/blip2-for-sd - it's not amazing and needs some manual cleaning afterwards, but it works well enough.
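The approach can be sketched roughly as below. The question wording and field names here are my assumptions, not the repo's exact prompts; the actual BLIP-2 call is shown in comments because it needs a GPU and a multi-gigabyte model download.

```python
# Sketch of per-field BLIP-2 visual question answering for captioning,
# in the spirit of Talmendo/blip2-for-sd. Questions are assumptions.
QUESTIONS = [
    ("style", "Is this a candid, professional, or selfie photo?"),
    ("subject", "Is the main subject a woman or a man?"),
    ("framing", "Is this a close up, upper body shot, or full body shot?"),
    ("lighting", "Is the lighting natural, studio, or soft?"),
]

def assemble_caption(answers: dict) -> str:
    """Join per-field answers into a single comma-separated caption."""
    return ", ".join(answers[key] for key, _ in QUESTIONS if key in answers)

# Actual inference would look roughly like this (transformers, GPU needed):
#   from transformers import Blip2Processor, Blip2ForConditionalGeneration
#   processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
#   model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
#   inputs = processor(image, text=f"Question: {q} Answer:", return_tensors="pt")
#   out = model.generate(**inputs)
#   answer = processor.decode(out[0], skip_special_tokens=True)
```

Asking one question per template slot, then joining the answers, is what keeps the automated captions in the same "[STYLE], photo of a [SUBJECT]…" shape as the manual ones from the 1.0 run.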

MachineMinded · Aug 4, 2023

    @talmendo This is awesome - thanks for sharing. I started messing with it last night after you mentioned you were able to use BLIP-2 to generate a certain template. I wasn't using it with diffusers though, yours is much better. I'm going to see if I can take your model and continue to train it with my own dataset.

MachineMinded · Aug 4, 2023

    @talmendo Did you do a fine tune of the model or did you train a LoRA and merge it in?

    talmendo
    Author
    Aug 10, 2023

@MachineMinded It's a fine-tune.

johnsmit · Aug 3, 2023 · 3 reactions
    CivitAI

Do you use the refiner?

    talmendo
    Author
    Aug 3, 2023

The refiner is really hit or miss, so I only use it sometimes. It tends not to work well for NSFW images, and when it does, only at a low denoise strength of 0.1-0.2.
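In an img2img refiner pass, that advice amounts to clamping the denoise strength; a minimal sketch (the helper is mine, and the diffusers call is commented out since it needs the refiner weights downloaded):

```python
def clamp_refiner_strength(requested: float) -> float:
    """Keep denoise strength in the 0.1-0.2 range suggested above."""
    return min(max(requested, 0.1), 0.2)

# A refiner pass with diffusers would look roughly like:
#   from diffusers import StableDiffusionXLImg2ImgPipeline
#   refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
#       "stabilityai/stable-diffusion-xl-refiner-1.0")
#   refined = refiner(prompt=prompt, image=base_image,
#                     strength=clamp_refiner_strength(0.15)).images[0]
```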

border2fan369 · Aug 3, 2023
    CivitAI

    Can I use this model with a1111?

    talmendo
    Author
    Aug 3, 2023

    Yes! Works the same as SDXL

olternaut · Aug 3, 2023 · 5 reactions
CivitAI

Did you merge the refiner into this model as well? I really can't understand why they released the refiner separately instead of baking it directly into 1.0.

    talmendo
    Author
Aug 3, 2023 · 1 reaction

It's not merged, no; you have to use it separately if you want it. It works well without it, though.

Mech4nimaL · Aug 3, 2023 · 4 reactions

In my experience the refiner only makes sense together with the base model at this point in time. With the trained models I don't miss the refiner.

95185 · Aug 3, 2023
CivitAI

My results look weirdly pixelated. Does anyone know why this might be?

MachineMinded · Aug 4, 2023

Are you using the default 1.0 VAE? The 0.9 VAE looks a little better.

95185 · Aug 4, 2023

MachineMinded · Aug 4, 2023

@alex357 I would download the SDXL 0.9 VAE and use it, or find a model with that VAE baked into it.

WZ_Gen · Aug 4, 2023

Same here, even with the 0.9 VAE. Using Tiled VAE also makes it slightly worse.

VMA · Aug 4, 2023

    Just put it on automatic

kola84 · Aug 5, 2023 · 1 reaction

I've found this effect happens when rendering at lower than 1024 with this model.

    talmendo
    Author
Aug 5, 2023 · 1 reaction

Lower than 1024 is generally not recommended for SDXL, and this model is no different. I recommend using the resolutions both SDXL and this model were trained on, mainly 1024x1024, 896x1152 and 768x1344.

WZ_Gen · Aug 7, 2023

@talmendo Even at native 1024x1024 it looks bad for me. Not exactly pixelated, but more like the colors are dithered. I've tried with your included VAE and other SDXL VAEs as well.

WZ_Gen · Aug 7, 2023

    @talmendo Looks like Tiled VAE Fast Encoder was the issue for me, turning that off seems to help

DK7 · Oct 15, 2023

    @WZ_Gen where do you turn it off?

groovykool129 · Aug 9, 2023 · 4 reactions
CivitAI

TAN LINES!! How can I get rid of the tan lines that appear in nearly all results???

MachineMinded · Aug 9, 2023

I honestly think this is some kind of bias in SDXL. I created an NSFW fine-tune and only a few of my images have tan lines, but sometimes I get them even when I didn't ask. A negative prompt usually helps, but my images were tagged as such.

    talmendo
    Author
    Aug 10, 2023

"tan lines" in the negative prompt should do it. All training images with them were tagged as such, so it should be easy enough to remove them.

ice_fly · Sep 13, 2023

I actually find it's a remnant of underwear. Use [underwear] in the negative prompt.

groovykool129 · Oct 24, 2023

    @talmendo Does not work.

groovykool129 · Oct 24, 2023

    @ice_fly Does not work. 
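The two negative-prompt suggestions from this sub-thread can be combined into one string; a trivial sketch (helper name is mine, and per the replies above, results vary, so treat it only as a starting point):

```python
# Combining both negative-prompt suggestions from the thread above.
# Results reportedly vary; this is a starting point, not a guarantee.
NEGATIVE_TERMS = ["tan lines", "underwear"]

def build_negative_prompt(*extra: str) -> str:
    """Comma-join the suggested terms with any user-supplied extras."""
    return ", ".join(NEGATIVE_TERMS + list(extra))
```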

spicywangz · Aug 17, 2023
    CivitAI

    Trying to run on Draw Things app on an M1 Mac.

It works when I first import the model, but as soon as I quit and restart the app, it crashes any time I try to run the model.

    talmendo
    Author
    Aug 18, 2023

Hey, it's almost impossible for that to be this model specifically. Does normal SDXL work for you? There's no difference in how they run.

Vega2113 · Aug 28, 2023

Try ComfyUI if it works on Mac; ComfyUI manages RAM and VRAM better.

Phoeniixfire · Jun 9, 2024

Weird, but I had the same issue… and it has also popped up on some other SDXL base models for me, especially if I try to run any LoRAs! Base SDXL 1.0 seems to be more stable.

Draw Things on an M2 MacBook

larrybob · Sep 3, 2023
    CivitAI

    I'm able to produce images with this model in the stable-diffusion-webui. But I can't seem to fine tune with DreamBooth. I get this error when I try to load the model into DreamBooth:

    Extracting checkpoint from /mnt/2TB_SSD/stable-diffusion-webui/models/Stable-diffusion/talmendoxlSDXL_v11Beta.safetensors
    [2023-09-03 11:53:32,563][DEBUG][dreambooth.dataclasses.db_config] - Saving to /mnt/2TB_SSD/stable-diffusion-webui/models/dreambooth/EESDXL
    Something went wrong, removing model directory
    Traceback (most recent call last):
      File "/mnt/2TB_SSD/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/sd_to_diff.py", line 157, in extract_checkpoint
        pipe = download_from_original_stable_diffusion_ckpt(
      File "/home/larry/anaconda3/envs/ldm/lib/python3.9/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1370, in download_from_original_stable_diffusion_ckpt
        set_module_tensor_to_device(unet, param_name, "cpu", value=param)
      File "/home/larry/anaconda3/envs/ldm/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 255, in set_module_tensor_to_device
        new_module = getattr(module, split)
      File "/home/larry/anaconda3/envs/ldm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'ModuleList' object has no attribute '1'
    Couldn't find /mnt/2TB_SSD/stable-diffusion-webui/models/dreambooth/EESDXL/working/unet
    Unable to extract checkpoint! Duration: 00:00:01

    I'm running on a 4090.

    talmendo
    Author
    Sep 6, 2023

Almost certainly a problem with the extension, not the model. I would recommend trying their GitHub for support; make sure it supports SDXL.

winoneday · Sep 10, 2023
CivitAI

Doesn't work with an 8 GB Nvidia card (3070), too bad.

CUDA error

Ashlyn · Sep 10, 2023

Most likely some setting that's not right; it works fine on my 8 GB 1070.

    talmendo
    Author
    Sep 11, 2023

This is just a fine-tune of SDXL, so if that runs for you, this will too.

okappix100 · Dec 23, 2023

Works great with the same config.

ulisse6000001 · Sep 19, 2023
    CivitAI

    How can I load this model using huggingface diffuser library?

Dolotboy · Jun 20, 2024

Any answer?

DJHILMAR · Oct 13, 2023 · 2 reactions
CivitAI

So far SDXL doesn't convince me; it blurs the backgrounds too much.

omeusz · Dec 4, 2023 · 4 reactions
CivitAI

Sorry for my English. For my purposes this model is genius, thank you.

    Checkpoint
    SDXL 1.0

    Details

    Downloads
    45,637
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/2/2023
    Updated
    5/13/2026
    Deleted
    -

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.