CivArchive

    Daily Release Campaign has ended!

    See here for more info: URPM is back - Daily Releases incoming!

    💖Join the Patreon if you want to support our work & get even earlier access to models being beta or alpha tested!

    🗣️Join the URPM Discord for Updates or Support!
     ---
    My SDXL and Pony-SDXL Hybrid models are now available here: https://civarchive.com/models/790652/uber-realistic-porn-merge-ponyxl-hybrid-or-xl-and-pony-loras-or-controlnet

    Example Images: 
     

    None of the sample images were altered, upscaled, etc. All can be reproduced from the metadata using the unpruned model (though the pruned model should now give the same results).

    Prompting help:
     

    If you notice that it’s not doing what it should, be extremely light with the negative prompt. I then re-use the same seed and add more words as needed.

    Liability:

    In no event shall I or my team be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the use of this model. Please render responsibly.

    Description

    This will hopefully be the last 1.x version. I am now fully focused on 2.x which includes new model merges.

    Changes:

    • Included VAE into the models. It is no longer a separate requirement

    • Fixed the pruned model! The pruned version is also less aggressively pruned and now produces exactly the same results as the unpruned file, as it should (unless you are using xformers, which gives slightly different results each time).

    • All of the ratios are different. See model recipe below.

    • Replaced SXD-Berrymix 1.0 with just SXD 1.0

    • Updated 3DKX 1.0b to 1.1 within the mix

    • Smaller model file: from 8 GB down to 6.3 GB, thanks to removing Berrymix.

    Here is the model recipe now:

    4 Add Difference merges in total (with Model A always in the Tertiary position as well), with a VAE injected:

    1st Merge:

    izumi
    sxd-1.0 at 15%

    2nd Merge:

    ^above
    ZombiMix-v7 at 40%

    3rd Merge:

    ^above
    3DKX_1.1 at 20%

    4th Merge:

    ^above
    RealEldenApocalypse_AnalogSexKnoll_4CandyPureSimp+FEET at 35%

    VAE:

    Injected VAE into the model: https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt
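    The recipe above can be sketched numerically. This is a hypothetical illustration over plain floats, not the actual merge tooling: Automatic1111's checkpoint merger computes Add Difference as A + multiplier * (B - C) per tensor, and reusing Model A in the tertiary (C) slot collapses that to a plain interpolation toward Model B.

    ```python
    # Hedged sketch of one "Add Difference" merge step, per-key over plain
    # floats instead of real torch state dicts.
    # Formula: result = A + m * (B - C).
    def add_difference(a, b, c, m):
        """Keys missing from B or C pass through from A unchanged."""
        return {k: va + m * (b[k] - c[k]) if k in b and k in c else va
                for k, va in a.items()}

    # With Model A also in the tertiary (C) slot, this reduces to
    # A + m * (B - A), i.e. moving m of the way from A toward B:
    a, b = {"w": 1.0}, {"w": 3.0}
    merged = add_difference(a, b, a, 0.15)   # like the 1st merge, SXD at 15%
    print(merged)                            # approximately {'w': 1.3}
    ```

    The four recipe merges then just repeat this step, feeding each result back in as the new Model A.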

    FAQ

    Comments (260)

    bhaveek424191Jan 12, 2023· 8 reactions
    CivitAI

    How can I run this using Google colab?

    79245Jan 12, 2023
    CivitAI

    no nipple piercings though, great work!

    saftle
    Author
    Jan 12, 2023

    I'll see if I can look into that for a v2 release :)

    stablydiffusedJan 12, 2023
    CivitAI

    Has anyone else tried version 1.2 in InvokeAI? I'm trying it with the .ckpt file and the results I'm getting do not follow prompts at all. For example, prompting for 'beautiful woman with blonde hair and green eyes' produces a heavily artifacted oil painting of a dilapidated farm house, and other things that don't match any terms from the prompt. I think it's a bug in Invoke as the model works fine in Automatic1111, just trying to see if anyone else is having the same issue so I can hopefully track down what's causing it and report it to the InvokeAI devs.

    saftle
    Author
    Jan 12, 2023

    Oh interesting. Perhaps the VAE is still required separately in InvokeAI. Are you using any VAE files in Automatic1111?

    stablydiffusedJan 12, 2023

    I did not use the VAE in Automatic1111. I've only tried the unpruned .ckpt in InvokeAI, but I tried both with and without the VAE and got the same result. I've seen this issue occur before with some other merges I've made, but haven't been able to nail down specifically what causes it. Seems to happen more frequently when doing 3 model merges with the add difference option, but looking at your recipe it doesn't look like you used any, so I'm stumped.

    stablydiffusedJan 12, 2023

    FWIW, I tried the pruned .ckpt file in InvokeAI and it works fine. So it is just the unpruned version that I'm having an issue with.

    stanleyspencerJan 20, 2023

    Hi Stably! I'm having the same issue. I'll try the pruned .ckpt next and if it doesn't work, I'll be back...

    stablydiffusedJan 20, 2023

    I haven't had any issues with the pruned one on InvokeAI.

    104450Jan 21, 2023

    Pruned works fine for me as well. For some reason, InvokeAI doesn't like the unpruned version, so I switched and it started working fine.

    NikeTakeuchiJan 12, 2023
    CivitAI

    Is there any way to get thicker or more natural eyebrows? They look very painted-on no matter what prompts I use with blonde hair.

    Inpainting/Merge recommendations?

    saftle
    Author
    Jan 14, 2023· 2 reactions

    I released an in-painting model. I hope it helps. In regards to eyebrows, I haven't tried prompting differently for that. If anyone knows, please let us know.

    apo1sJan 13, 2023
    CivitAI

    What's the difference between unpruned and pruned?

    saftle
    Author
    Jan 13, 2023

    Certain pruning methods remove things that are unnecessary but could still be useful for other purposes. However, the pruning method I am using now only removes junk. There should be no difference between the two, but when merging models I suggest using the unpruned version just in case.

    gorricJan 14, 2023
    CivitAI

    1.2 doesn't follow prompts in the NMKD GUI.

    saftle
    Author
    Jan 14, 2023

    Have you compared UIs to see if it is only in that UI?

    Also, be extremely light with your negative prompt and try only adding one word at a time. I would also start with my example prompt in the description.

    LuzifersohnJan 14, 2023

    For me it works. Also NMKD GUI. As OP said, just be more precise on the wording and get some Prompt input from some of the pictures here maybe.

    gorricJan 15, 2023

    No negative prompts; it just keeps giving family-photo kinds of pictures: wallpapers, someone riding a bike, a cabin, a messy bedroom. All real photos, in raw format. Weird!

    saftle
    Author
    Jan 15, 2023

    That is super strange. It almost sounds like it's not using my model at all.

    efraxJan 19, 2023

    "it works now (with the 1.2v) ^^ just convert the .safetensors file back to .ckpt in the NMKD GUI under "Developer Tools", place it in your "models" folder and select "None" under VAEs."

    BdarangoJan 14, 2023· 4 reactions
    CivitAI

    I'm getting an error when loading the inpainting model: "size mismatch for model.diffusion_model.input_blocks.0.0.weight: copying a param with shape torch.Size([320, 9, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 4, 3, 3])." What should I do? All of the other models work except this inpainting one.

    saftle
    Author
    Jan 14, 2023

    This happens when you aren't using the in-painting config. Check the 1.5-inpainting model for reference.
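    For context (an editorial note, not from the thread): an SD 1.x inpainting UNet's first convolution takes 9 input channels (4 latents + 4 masked-image latents + 1 mask) instead of the usual 4, which is exactly the [320, 9, 3, 3] vs [320, 4, 3, 3] mismatch in the error. A tiny sketch of telling the two apart from that tensor's shape:

    ```python
    # Sketch: classify an SD 1.x checkpoint from the shape of
    # model.diffusion_model.input_blocks.0.0.weight. An inpainting UNet's
    # first conv expects 9 input channels; a standard txt2img UNet expects 4.
    def classify_unet(conv_in_shape):
        channels = conv_in_shape[1]          # dim 1 = input channels
        if channels == 9:
            return "inpainting"              # pair with v1-inpainting-inference.yaml
        if channels == 4:
            return "standard"                # pair with the regular v1-inference.yaml
        return "unknown"

    print(classify_unet((320, 9, 3, 3)))     # inpainting
    print(classify_unet((320, 4, 3, 3)))     # standard
    ```

    So the error means a 9-channel inpainting checkpoint is being loaded through a 4-channel (standard) config, or vice versa.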

    sthJan 14, 2023

    I have the same problem loading the inpainting model; the other files from the latest version (1.2) work without any problem, and vanilla SD 1.5 and its inpainter also work fine.

    i think the important part of the log would be around this:
    webui-docker-invoke-1 | File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    webui-docker-invoke-1 | raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    webui-docker-invoke-1 | RuntimeError: Error(s) in loading state_dict for LatentInpaintDiffusion:
    webui-docker-invoke-1 | size mismatch for model_ema.diffusion_modelinput_blocks00weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 9, 3, 3]).

    more details below

    i'm using InvokeAI and referencing the config from SD1.5-inpainter for this inpainter, paths are good, but don't seem to make it work :(

    some logs extract below

    when trying from UI Model Manager on InvokeAI 2.2.5
    webui-docker-invoke-1 | >> New Model Added:
    uberRealisticPornMerge_urpmv12Inpainting-inpainting
    webui-docker-invoke-1 | >> Model change requested:
    uberRealisticPornMerge_urpmv12Inpainting-inpainting
    webui-docker-invoke-1 | >> Current VRAM usage: 2.17G
    webui-docker-invoke-1 | >> Offloading sd1.5-UBRPM-pruned-1.2 to CPU
    webui-docker-invoke-1 | >> Scanning Model:
    uberRealisticPornMerge_urpmv12Inpainting-inpainting
    webui-docker-invoke-1 | >> Model Scanned. OK!!
    webui-docker-invoke-1 | >> Loading uberRealisticPornMerge_urpmv12Inpainting-inpainting from /data/StableDiffusion/uberRealisticPornMerge_urpmv12Inpainting-inpainting.ckpt
    webui-docker-invoke-1 exited with code 137

    when trying to bake it in as the default model to load at boot-up, i get the below:
    webui-docker-invoke-1 | + python3 -u scripts/invoke.py --web --host 0.0.0.0 --port 7860 --config /docker/models.yaml --root_dir /stable-diffusion --outdir /output/invoke --no-nsfw_checker--safety_checker --free_gpu_mem
    webui-docker-invoke-1 | >> Initialization file /stable-diffusion/invokeai.init found.
    Loading...
    webui-docker-invoke-1 | * Initializing, be patient...
    webui-docker-invoke-1 | >> InvokeAI 2.2.5
    webui-docker-invoke-1 | >> InvokeAI runtime directory is "/stable-diffusion"
    webui-docker-invoke-1 | >> GFPGAN Initialized
    webui-docker-invoke-1 | >> CodeFormer Initialized
    webui-docker-invoke-1 | >> ESRGAN Initialized
    webui-docker-invoke-1 | >> Using device_type cuda
    webui-docker-invoke-1 | >> Current VRAM usage: 0.00G
    webui-docker-invoke-1 | >> Scanning Model: default
    webui-docker-invoke-1 | >> Model Scanned. OK!!
    webui-docker-invoke-1 | >> Loading default from /data/StableDiffusion/uberRealisticPornMerge_urpmv12Inpainting-inpainting.ckpt
    webui-docker-invoke-1 | | Forcing garbage collection prior to loading new model
    webui-docker-invoke-1 | | LatentInpaintDiffusion: Running in eps-prediction mode
    webui-docker-invoke-1 | | DiffusionWrapper has 859.54 M params.
    webui-docker-invoke-1 | | Keeping EMAs of 688.
    webui-docker-invoke-1 | | Making attention of type 'vanilla' with 512 in_channels
    webui-docker-invoke-1 | | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
    webui-docker-invoke-1 | | Making attention of type 'vanilla' with 512 in_channels
    webui-docker-invoke-1 | ** model default could not be loaded: Error(s) in loading state_dict for LatentInpaintDiffusion:
    webui-docker-invoke-1 | size mismatch for model_ema.diffusion_modelinput_blocks00weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 9, 3, 3]).
    webui-docker-invoke-1 | Traceback (most recent call last):
    webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/model_cache.py", line 81, in get_model
    webui-docker-invoke-1 | requested_model, width, height, hash = self._load_model(model_name)
    webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/model_cache.py", line 249, in loadmodel
    webui-docker-invoke-1 | model.load_state_dict(sd, strict=False)
    webui-docker-invoke-1 | File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    webui-docker-invoke-1 | raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    webui-docker-invoke-1 | RuntimeError: Error(s) in loading state_dict for LatentInpaintDiffusion:
    webui-docker-invoke-1 | size mismatch for model_ema.diffusion_modelinput_blocks00weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 9, 3, 3]).
    webui-docker-invoke-1 |
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    webui-docker-invoke-1 | You appear to have a missing or misconfigured model file(s).
    webui-docker-invoke-1 | The script will now exit and run configure_invokeai.py to help fix the problem.
    webui-docker-invoke-1 | After reconfiguration is done, please relaunch invoke.py.
    webui-docker-invoke-1 | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    webui-docker-invoke-1 | configure_invokeai is launching....
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | Loading Python libraries...
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | Welcome to InvokeAI. This script will help download the Stable Diffusion weight files
    webui-docker-invoke-1 | and other large models that are needed for text to image generation. At any point you may interrupt
    webui-docker-invoke-1 | this program and resume later.
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | DOWNLOADING DIFFUSION WEIGHTS
    webui-docker-invoke-1 | You can download and configure the weights files manually or let this
    webui-docker-invoke-1 | script do it for you. Manual installation is described at:
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | https://invoke-ai.github.io/InvokeAI/installation/020_INSTALL_MANUAL/
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | You may download the recommended models (about 10GB total), select a customized set, or
    webui-docker-invoke-1 | completely skip this step.
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | Download <r>ecommended models, <a>ll models, <c>ustomized list, or <s>kip this step? [r]:
    webui-docker-invoke-1 | A problem occurred during initialization.
    webui-docker-invoke-1 | The error was: "EOF when reading a line"
    webui-docker-invoke-1 | Traceback (most recent call last):
    webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/model_cache.py", line 81, in get_model
    webui-docker-invoke-1 | requested_model, width, height, hash = self._load_model(model_name)
    webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/model_cache.py", line 249, in loadmodel
    webui-docker-invoke-1 | model.load_state_dict(sd, strict=False)
    webui-docker-invoke-1 | File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    webui-docker-invoke-1 | raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    webui-docker-invoke-1 | RuntimeError: Error(s) in loading state_dict for LatentInpaintDiffusion:
    webui-docker-invoke-1 | size mismatch for model_ema.diffusion_modelinput_blocks00weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 9, 3, 3]).
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | During handling of the above exception, another exception occurred:
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | Traceback (most recent call last):
    webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/CLI.py", line 122, in main
    webui-docker-invoke-1 | gen.load_model()
    webui-docker-invoke-1 | File "/stable-diffusion/ldm/generate.py", line 826, in load_model
    webui-docker-invoke-1 | self.set_model(self.model_name)
    webui-docker-invoke-1 | File "/stable-diffusion/ldm/generate.py", line 851, in set_model
    webui-docker-invoke-1 | model_data = cache.get_model(model_name)
    webui-docker-invoke-1 | File "/stable-diffusion/ldm/invoke/model_cache.py", line 92, in get_model
    webui-docker-invoke-1 | assert self.current_model,'** FATAL: no current model to restore to'
    webui-docker-invoke-1 | AssertionError: ** FATAL: no current model to restore to
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | During handling of the above exception, another exception occurred:
    webui-docker-invoke-1 |
    webui-docker-invoke-1 | Traceback (most recent call last):
    webui-docker-invoke-1 | File "/stable-diffusion/scripts/configure_invokeai.py", line 780, in main
    webui-docker-invoke-1 | errors.add(download_weights(opt))
    webui-docker-invoke-1 | File "/stable-diffusion/scripts/configure_invokeai.py", line 597, in download_weights
    webui-docker-invoke-1 | choice = user_wants_to_download_weights()
    webui-docker-invoke-1 | File "/stable-diffusion/scripts/configure_invokeai.py", line 127, in user_wants_to_download_weights
    webui-docker-invoke-1 | choice = input('Download <r>ecommended models, <a>ll models, <c>ustomized list, or <s>kip this step? [r]: ')
    webui-docker-invoke-1 | EOFError: EOF when reading a line
    webui-docker-invoke-1 |

    Below is an extract of how I defined the model object in models.yaml:

    sd1.5-UBRPM-inpainting-1.2:
      description: SD1.5-uberRealisticPornMerge 1.2 inpainting
      weights: /data/StableDiffusion/uberRealisticPornMerge_urpmv12Inpainting-inpainting.ckpt
      vae:
      config: ./configs/stable-diffusion/v1-inpainting-inference.yaml
      width: 512
      height: 512
      default: true

    @saftle any idea what I am missing here?

    Any other prerequisite or setup I'm missing?

    thank you in advance for looking into this and keep up the great work you are doing with this, congrats!

    saftle
    Author
    Jan 14, 2023

    That is interesting. No, nothing that I'm aware of is missing. It might be an InvokeAI incompatibility in general. I'll see what I can find, but it might be worth opening an issue on their end for now.

    socratessoupJan 19, 2023

    @saftle You reference using the in-painting config. I am able to use the 1.5 inpainting model. So I wasn't sure if that is the issue or there is something else.

    saftle
    Author
    Jan 19, 2023

    @socratessoup oh in that case, you just need to copy the config that is in the same folder, and rename it to the same name as URPM in your folder.
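    That copy-and-rename step can be sketched as follows. All file names here are stand-ins created in a throwaway temp directory so the demo is safe to run; substitute your actual models folder and checkpoint name.

    ```python
    # Sketch: place a copy of the inpainting inference config beside the URPM
    # checkpoint under the checkpoint's own base name, so the UI pairs them.
    import shutil, tempfile
    from pathlib import Path

    workdir = Path(tempfile.mkdtemp())         # stand-in for your models folder
    src = workdir / "v1-inpainting-inference.yaml"
    src.write_text("model:\n")                 # dummy config content for the demo
    ckpt = workdir / "uberRealisticPornMerge_urpmv12Inpainting-inpainting.ckpt"
    dst = ckpt.with_suffix(".yaml")            # same base name as the checkpoint
    shutil.copyfile(src, dst)
    print(dst.name)   # uberRealisticPornMerge_urpmv12Inpainting-inpainting.yaml
    ```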

    Ac1dBurnJan 21, 2023

    Just an FYI: if you have any embeddings and you invoke them by accident, you will get this error as well.

    hs244Feb 9, 2023

    I had this problem; I found that just renaming the model to a shorter name (and, of course, renaming the config to match) got rid of it for me.

    djhamJan 14, 2023
    CivitAI

    It's interesting how "in a cloudpunk city" makes the people look more realistic with this model. Once I take that away in the prompt, it's less photorealistic to me in my experience.

    saftle
    Author
    Jan 14, 2023· 1 reaction

    Try adding ((detailed facial features)) to the prompt. It greatly increases the quality of faces :)

    djhamJan 15, 2023· 1 reaction

    I guess maybe I was wrong all along, because I went back to test out the v1 version to see. And maybe I never needed "in a cloudpunk city" in the prompt.

    clevnumbJan 18, 2023
    CivitAI

    Can someone explain: do I need to do the merge steps in the instructions, and if so, what does "with Model A always in the Tertiary position as well" mean? What is Model A?

    pent22Jan 19, 2023· 2 reactions
    CivitAI

    Where are the tags that we can use?

    golpntorrJan 19, 2023
    CivitAI

    Can someone tell me how to install it? Thank you. :)

    VSCUMFeb 25, 2023

    no lol google it

    CormacJan 19, 2023
    CivitAI

    All versions in DiffusionBee *.ckpt:

    Error Traceback (most recent call last):

    File "convert_model.py", line 28, in <module>

    KeyError: 'state_dict'

    [1283] Failed to execute script 'convert_model' due to unhandled exception!

    saftle
    Author
    Jan 19, 2023

    Do other merged models work? The problem could be that it doesn't support Automatic1111-merged files. InvokeAI had to be updated for that as well.

    lolJan 20, 2023

    This has been my experience with DiffusionBee, it doesn't play nicely with models generated by Automatic1111.

    saftle
    Author
    Jan 20, 2023· 1 reaction

    @lol Ah okay, in that case it can be patched really easily. I had to do the same for a few programs I was using, by applying Invoke's code change. This is the fix they did in Invoke: https://github.com/invoke-ai/InvokeAI/pull/1766/commits/17161fa0e06f53160abee8b9ac00f16ec11e50be

    lolJan 20, 2023

    @saftle That's super handy, thanks! I'm running out of my colab processing units so this will be awesome. Thanks!

    mackmurdocJan 21, 2023

    @saftle how do we use it? I clicked the link and I'm pretty lost. I'm also a DiffusionBee user, and having 40% of models not work has been a disappointment.

    saftle
    Author
    Jan 21, 2023

    @mackmurdoc It means that DiffusionBee would need a similar patch. I recommend opening an issue on their GitHub (if they have one) and referencing the patch I linked.

    gammarecall621Jan 25, 2023

    @mackmurdoc I seriously recommend just switching to the 1111 gui. There is a tiny learning curve but it's much more flexible.

    PineconeJan 19, 2023
    CivitAI

    Amazing model, thank you! One question though, is it possible to prevent or reduce the chance that it will create 1 or more uncovered fingers/tips/nails when using prompts that include gloves or its variations (leather, latex, etc)?

    A pretty high percentage of outputs suffer from that. I could probably fix with inpainting but I'm not good at it yet or it just doesn't work well.

    Just looking for any tips that might help or hope that v2 training tunes it more accurately with those prompts.

    villianceJan 20, 2023
    CivitAI

    What is the difference between SafeTensor and PickleTensor?

    saftle
    Author
    Jan 20, 2023· 2 reactions

    Safetensors files can't contain pickle exploits. It's just a safer alternative; if you trust an author you can usually use either, but in case an author unknowingly distributes a virus, I would still grab the safetensors version.
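    To illustrate the point (a generic stand-in, not tied to any specific model file): pickle-based .ckpt files are deserialized with Python's pickle, which can execute arbitrary code during loading, while the safetensors format stores only raw tensor bytes plus a JSON header.

    ```python
    # Minimal demonstration of why unpickling untrusted .ckpt files is risky:
    # any object can define __reduce__ to run code at load time. The payload
    # here is benign (eval of a string literal), but it could be os.system().
    import pickle

    class Payload:
        def __reduce__(self):
            return (eval, ("'this ran during pickle.loads'",))

    blob = pickle.dumps(Payload())
    result = pickle.loads(blob)      # code executes during deserialization
    print(result)                    # this ran during pickle.loads
    ```

    Loading a .safetensors file never runs such a step, which is why it is the safer download.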

    villianceJan 21, 2023· 1 reaction

    @saftle Fucking dude, you're damn right! You have no idea how much you helped me; it doesn't say anywhere that PickleTensor is .ckpt!

    ezeferinoJan 20, 2023
    CivitAI

    I want to retrain this model as Stable Diffusion 1.5, with an input dataset, but I am getting an error in the conversion. Any solution? I am using the fast-DreamBooth.ipynb notebook from TheLastBen at https://github.com/TheLastBen/fast-stable-diffusion

    saftle
    Author
    Jan 20, 2023

    I see you already joined my patreon and the discord. I will help you there :)

    goldmanJan 21, 2023· 2 reactions
    CivitAI

    Using both 1.1 and 1.2 models and loving them, however I'm not able to get the 'inpainting' model to load, using InvokeAI latest version and can get it loaded up in the model manager but the terminal tells me it 'won't load' when I try to use it and sends me back to the model I was using before trying...

    Any ideas ??

    As far as I can see I have the config yaml right ??

    saftle
    Author
    Jan 21, 2023

    Did you provide the in-painting inference yaml in the model manager edit page for the model? You have to add it separately or else it won't work.

    It's also possible that it does not work in InvokeAI at all. Take a look at the following issue: https://github.com/invoke-ai/InvokeAI/issues/2314

    goldmanJan 21, 2023· 1 reaction

    @saftle I did originally, but still no go. Now I have pointed the model manager to the original in-painting yaml from SD's default in-painting model instead of the one you supplied, and it works that way...🤔 It seems to be giving me better results than the default in-painting model, so I'm guessing it's all sorted... Cheers for the great work on these models..👍🏻

    inthegarden333142Jan 22, 2023

    @goldman can you explain how you did this? I am having the same issue

    goldmanJan 22, 2023

    @paininperfection pointing to the default inpainting yaml in the normal config folder, not using the downloaded one supplied...loading up and working a treat...

    Can't seem to load an image to show you sorry..

    inthegarden333142Jan 22, 2023

    @goldman so do you mean you are putting the uber inpainting ckpt in the config folder? does it not get installed like a normal model does?

    goldmanJan 24, 2023

    @paininperfection nope, the ckpt goes in the model folder and you point the model manager in Invoke to that folder. I am not using the 'inpainting yaml' offered with the ckpt; I'm pointing the model manager to the default 'inpainting yaml' in the stable-diffusion folder inside the config folder and using that default yaml...

    alleschweineJan 21, 2023· 2 reactions
    CivitAI

    I don't know why, but it seems like only the pickle version works for me... the safetensors file only spits out weird anime-style images. I'm fairly new to this, so is there a reason it acts like that? (using A1111)

    saftle
    Author
    Jan 21, 2023

    Are you using 1.2?

    alleschweineJan 21, 2023

    @saftle Yes, I've been using 1.2... I did deactivate other VAEs. Like I said, the ckpt works like it should but not the safetensors, for some weird reason. I am using the newest A1111 version and also put SAFETENSORS_FAST_GPU=1 in webui-user.bat (I don't know if this is even still necessary)... so I'm not really sure why the safetensors file doesn't work like it should.

    saftle
    Author
    Jan 21, 2023

    Yeah, that is super strange, considering I just use the safetensors version. Are you up-to-date? Did you do a git pull recently?

    borishoJan 21, 2023
    CivitAI

    I am getting blurry images. Could anyone help me to fix it ?

    M401Jan 24, 2023· 1 reaction

    Try putting negative prompts like: blurry, low quality, low resolution, poorly drawn faces, poorly drawn fingers, grain, jpeg artifacts

    NikeTakeuchiFeb 1, 2023

    Add to your prompt: ((4k, 8k, HDR, canon, masterpiece, RAW photo))

    You can also upscale through Hi-resfix or img2img - SD upscale fix (if you have "4x foolhardy Remacri" upscaler that would be best)

    Hit it with "iphone 27 pro max hd portrait photo mode" and got some good stuff lol

    jeverdeen1Jan 23, 2023
    CivitAI

    Having trouble getting inpainting working with Draw Things. After adding the model and config to the import folder, then okaying the import, the application crashes once the progress bar completes. Any advice appreciated!

    saftle
    Author
    Jan 23, 2023· 1 reaction

    From what I've seen, certain UIs may not work with custom inpainting models. The best thing you can do is open an issue on their side to have it fixed. Automatic1111 custom model merges seem to cause problems in most UIs until they're fixed, and custom merged inpainting is most likely another fix on top of that.

    jeverdeen1Jan 23, 2023

    @saftle Thanks! Otherwise, great work.

    GzuzJan 24, 2023· 12 reactions
    CivitAI

    Can I use this on my cellphone?

    saftle
    Author
    Jan 24, 2023

    I'm not sure, you'd need a compatible Stable Diffusion program on your phone. Definitely let me know if you find a way!

    uniblabFeb 3, 2023· 1 reaction

    Free App called Draw Things for iOS/iPadOS and macOS on Apple Silicon…

    HeknzFeb 7, 2023

    You can run it with GColab on your phone

    AndrewJackJan 24, 2023· 2 reactions
    CivitAI

    Can someone convert this to diffusers? Trying to train it on Dreambooth and keep getting conversion errors.

    Idk how to fix :(

    saftle
    Author
    Jan 24, 2023· 4 reactions

    I created a Bleeding Edge build which fixes it, but you could technically do the following to fix it as well:

    Merge with F222 in the following way:

    Model A: F222
    Model B: URPM
    Weighted Sum
    Multiplier 0.999

    That should fix it! Otherwise, you'd probably need to wait for my next version, or you can grab my super WIP build as a Bleeding Edge patron.
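    As a rough illustration of why this workaround can help (my reading, not the author's explanation): a Weighted Sum merge computes (1 - m) * A + m * B per tensor, so at multiplier 0.999 the result is numerically almost identical to URPM (Model B), while the merge re-saves the checkpoint in a form the diffusers converter presumably accepts.

    ```python
    # Hedged sketch of Automatic1111's "Weighted Sum" merge over toy floats:
    # result = (1 - m) * A + m * B, computed per shared key.
    def weighted_sum(a, b, m):
        return {k: (1 - m) * va + m * b[k] for k, va in a.items() if k in b}

    # At m = 0.999 the output is ~99.9% Model B (URPM), so the merge barely
    # changes the weights; the re-save itself is presumably what un-breaks
    # the diffusers conversion.
    out = weighted_sum({"w": 0.0}, {"w": 1.0}, 0.999)
    print(out)   # {'w': 0.999}
    ```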

    SnowBall_CatMar 3, 2023· 1 reaction

    @saftle goat

    saftle
    Author
    Mar 4, 2023

    Or even better, grab 1.3 here :) https://huggingface.co/saftle/urpm

    ChootavaoJan 24, 2023
    CivitAI

    Has anyone gotten it to work with NMKD?

    saftle
    Author
    Jan 24, 2023

    Yep, I heard the pruned CKPT (pickle tensor) works just fine. Click on the down arrow to the right of the version to see all files.

    ChootavaoJan 24, 2023· 1 reaction

    @saftle it said it failed to load... Then it went ahead and generated some pretty good images.

    So far so good.

    ChootavaoJan 25, 2023

    @saftle I never actually got it to work with nmkd, I'm pretty sure after it failed it defaulted back to a prior model. Still stuck but hopeful

    SirVilhelmJan 24, 2023· 6 reactions
    CivitAI

    Don't try to force nipple rings lol

    Cammy_JovianMar 11, 2023

    Button noses either; I had a couple where the nipples were replaced by buttons, like wtf.

    gsgsdgJan 24, 2023
    CivitAI

    The 1.2 is currently unable to download, XML error.

    excitedhelicopter308Jan 24, 2023· 2 reactions

    Worked for me just now.

    gsgsdgJan 25, 2023

    @exciheli now it's working!

    okaycoolsweetJan 25, 2023
    CivitAI

    hey there, love the work you've put in for URPM but i'm having an issue loading the inpainting model, every time i get an error along the lines of "TypeError: DDPM.__init__() got an unexpected keyword argument 'finetune_keys' " which crashes auto1111.

    I have the .yaml file in my /models folder alongside the inpainting model and they both have the same name. Not too sure what the issue is, but I'm not that savvy with anything Python.

    saftle
    Author
    Jan 25, 2023· 1 reaction

    Interestingly enough, I don't have this issue, but it looks like it is a bug and there is a workaround in this issue thread: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3492

    okaycoolsweetJan 25, 2023· 1 reaction

    @saftle this seems to have fixed my issue as it's properly loading the model now

    thanks a ton for the link saftle, keep up the amazing work!

    eddyz1122Jan 26, 2023· 2 reactions
    CivitAI

    Your tags are crap and misleading.

    saftle
    Author
    Jan 26, 2023· 2 reactions

    Do you by chance have clip skip accidentally set to 2? All of my images are reproducible with default settings when you drag any of the images into the PNG Info tab.

    Also if you are using xformers, you will not be able to reproduce images of others that are not using xformers, due to xformers always creating variations of images.

    If you mean site tags, I chose whatever I thought made sense. However, a new site update is coming where users can define the tags themselves for all content, since users often know how to use a model better than the authors themselves.

    AerotheoneJan 26, 2023· 4 reactions
    CivitAI

    Hi, on a Mac, in DiffusionBee, I can't import models that I download from here. What am I doing wrong? The safetensors file I download appears grayed out and unselectable when I add a model. Please help.

    vimekixJan 30, 2023

    Same here

    saftle
    Author
    Jan 30, 2023

    Most likely DiffusionBee needs a similar patch that InvokeAI had to do in order to support Automatic1111 merged models. I suggest opening up an issue on their end, and relating the commit to the code that InvokeAI had to do: https://github.com/invoke-ai/InvokeAI/pull/1766/commits/17161fa0e06f53160abee8b9ac00f16ec11e50be

    I hope this helps! :D

    paradisebuster44Jan 26, 2023
    CivitAI

    Where do I find the inpainting model?

    saftle
    Author
    Jan 26, 2023

    Further down the page, you'll see that it is a separate version. Just click it and then download on the right.

    LibertyJan 29, 2023· 6 reactions
    CivitAI

    It's a cool model, except it's a shame it generates guys with female genitalia. I get a man and a woman and both have a vagina :)

    saftle
    Author
    Jan 29, 2023· 4 reactions

    Haha yeah, that is one of the things I've been working on for the next version. It should happen less often soon :P

    robot1meJan 29, 2023· 1 reaction

    @saftle That's awesome news! I had been experimenting with merging in furry-based models, and you wouldn't believe how well it works. Not that I focus on the female bits; it's things like enhanced character clarity, detail, and correct drawing of the lewd parts. You have a really solid foundation with this model, thanks for not letting it go yet 👍

    realharrydaniels178Jan 30, 2023
    CivitAI

    Has anyone had trouble downloading the model? I've downloaded plenty of models before but this one is stuck infinitely loading for some reason. Any help would be greatly appreciated!

    roboddJan 31, 2023

    Download the safetensors version, rename it with the extension .safetensors, then use that one instead of the ckpt version.
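For reference, the rename step amounts to swapping the file's extension; a minimal sketch with Python's pathlib (the filenames here are placeholders, not the actual download names):

```python
from pathlib import Path

def ensure_safetensors_name(path):
    """Return the filename with a .safetensors extension, which is
    what the UI expects; leave it unchanged if it already has one."""
    p = Path(path)
    return p if p.suffix == ".safetensors" else p.with_suffix(".safetensors")
```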

    mark46Jan 31, 2023
    CivitAI

    I've gone through and tried everyone's photos, and all I get are trees and forest. Not once have I gotten anywhere near what you all get.

    saftle
    Author
    Jan 31, 2023

    What is your clip set to? Also are you using xformers? And which UI are you using?

    DevonAJan 31, 2023· 1 reaction

    I have had the same problem in InvokeAI with the non-pruned version.
    It created misty, foggy, forest-like images.
    Then I tried the pruned ckpt version and it works normally, as you'd expect.

    The only thing that bugs me is the prompt being so extremely sensitive with NSFW stuff.
    It always wants to avoid generating NSFW (especially 'hardcore') unless I'm being very aggressive with the prompt (-weights). With a very long description, it's very likely to have something in there that prevents generating NSFW content. For example, take "eye contact" at the end of the positive prompt: it makes it switch to the face only instantly. So I have to dial the weight down more: "(eye contact)--". (The NSFW checker is off.)

    mark46Jan 31, 2023

    @saftle I've tried everything and nothing works at all, no matter what I do. Out of the 200 images I tried, I got 1 or 2 that were actually human. I've copied the prompts right from here with all the information provided and nothing works. I've since given up and deleted it; I'm not wasting space on something that is giving me that much trouble. I've noticed more and more models aren't supported by NMKD GUI, and this is really disappointing. I hadn't had many problems with it until recently. I don't care about Auto1111, but if every model is only made for that, then what is the point of this page? This information should be included: what it runs on, or should run on.

    mark46Jan 31, 2023

    @DevonA Try just copying the prompts from here to see if it works. I've tried and I get nothing. Both in InvokeAI and NMKD GUI 1.9

    saftle
    Author
    Jan 31, 2023· 2 reactions

    @mark46 are you using the pruned model? The non-pruned version does not work in InvokeAI for some reason. Also, you can never get the exact same results in InvokeAI if you are trying to copy images that were made in Automatic1111; InvokeAI sets different priorities on prompt words. This is not a limitation of the model itself.

    And in regards to NMKD GUI, it is NOT the models failing to support it; NMKD does not support Automatic1111-merged models. InvokeAI had to patch their software as well. I suggest filing a bug report with the UI, since the models themselves are NOT responsible.

    Also I disagree that I need to provide instructions for every UI out there. There are too many, and as you can see above, they have their own bugs and limitations that a model merger/creator can't do anything about.

    Considering the UIs are free and open source and the models are free and open source (at least mine anyways), it is up to you to either provide a pull request to fix the open source code yourself, or to at least report the bug. You can't assume that it'll "just work", since you are not paying for a service. If this is your first time interacting with open source, "pull requests are welcome"!

    mark46Jan 31, 2023

    @saftle Yeah, I wasn't directing that right at you; it's in general. I've already talked to the guy and he says he's working on it. It just sucks that they don't all work with every model of SD; they are all based on the same Stable Diffusion 1.5-or-so architecture. I'm not new to AI at all: I've been using it since before it became mainstream a year or two ago, and way before that with GPT-1/2 and others. I know it's not your fault for the way it works. Good job either way. And yes, I used the pruned model and it wasn't working.

    someone88997Jan 31, 2023
    CivitAI

    Can this model do cunnilingus well? If not, do you plan to include that concept in your future releases?

    saftle
    Author
    Jan 31, 2023· 1 reaction

    It cannot sadly. No model can at the moment, but thankfully I'm working with someone that is training exactly this. This should be made available in a future version :D

    COBatmanJan 31, 2023· 1 reaction

    This is encouraging, I've tried every kiss, lick prompt combo to no avail.

    EZorgFeb 1, 2023
    CivitAI

    Says it's incompatible with NMKD Stable Diffusion 1.8.1. No idea what that means, tbh.

    EZorgFeb 1, 2023

    Tried the dev tools - it fails to convert. Tried the pruned version, that's also incompatible. No idea what I'm doing, so just trial-and-erroring it...

    saftle
    Author
    Feb 1, 2023

    @gr3yh4wk1 models that have been merged with Automatic1111 do not work on NMKD yet it appears. InvokeAI had to fix it as well. You'll probably need to create a ticket on their side. Sadly there is nothing that can be done on the model's side that I'm aware of.

    EZorgFeb 1, 2023

    @saftle Thanks! At least I can stop fiddling..with the file at least ahem

    aweedburneraccountFeb 2, 2023
    CivitAI

    Hmm, I used this model on mage.space with amazing results. But now, on Auto1111 on my own machine, I get way different results with the same prompts that work great on mage.space. What am I missing?

    saftle
    Author
    Feb 2, 2023

    Huh, I had no idea mage.space was using my merge. It could be that you are using a different sampler, steps, or CFG. Or perhaps they are using hires.fix. It can be a lot of things.

    aweedburneraccountFeb 2, 2023· 1 reaction

    @saftle my mistake i thought i commented this on the realistic vision page. little mixup

    AkeynFeb 2, 2023

    @aweedburneraccount I have the same issue, I get different results on auto1111, did you figure out why this is happening?

    @Akeyn Right now I'm under the impression that they are using a different sampling method, because on mage.space they don't even reveal it. But I'm still having the problem; the one on mage is way more detailed than what I'm getting with the same prompts and model on my home machine. Not sure.

    HeknzFeb 7, 2023

    Interesting, I have something similar: I was using it on mage.space and didn't get good results; now I'm running it in Google Colab and the images are better.

    65473Mar 14, 2023

    Try it on PirateDiffusion.com instead; it costs less than Mage Space, there's 1 TB of models preinstalled, and you get a 15 GB cloud drive. It's the best for Stable Diffusion on the go.

    AkeynFeb 2, 2023
    CivitAI

    I'm trying to reproduce each of the provided results from the comments using people's settings, but the result is not the same... I used the same model hash [fcfaf106f2], and all settings are the same: prompt, negative prompt, sampling method, sampling steps, width, height, CFG scale. The hypernetwork is [NONE]. The hash of AUTOMATIC1111/stable-diffusion-webui is [226d840e84c5f306350b0681945989b86760e616]. Do I need to use additional hypernets or set up weights?

    saftle
    Author
    Feb 2, 2023

    This can happen if they or you are using xformers, since xformers will produce varying results. Also, is your CLIP Skip set to 1? That is the default; however, it can also be the case that people use a different value.

    AkeynFeb 2, 2023

    @saftle I tried both options, without xformers and with xformers, and the result is not much different. But "CLIP Skip" was set to 2 by default in all cases. I tested "CLIP Skip" with values from 1 to 12, but it still does not match people's results... By the way, thanks for the reply!

    DaceDrgnFeb 7, 2023

    Forgive me if this is a stupid suggestion, but you haven't mentioned seed. Are you using the same seed as the examples?

    AkeynFeb 7, 2023· 1 reaction

    @DaceDrgn Now I'm using the PNG Info tab to get all settings from images, so it's no longer relevant to me.

    msomerFeb 2, 2023
    CivitAI

    Great job. And where are the "trained keywords" ? please..

    saftle
    Author
    Feb 2, 2023· 2 reactions

    This is a merged model, so it wasn't trained on any trigger words. I suggest looking through the merge recipe on the right of each version to see if some of the trigger words on those survived the merge. I focused primarily on natural language to get good results.

    MrkalashFeb 3, 2023· 4 reactions
    CivitAI

    How can I get the AI to generate the same woman again? I've got some output images with a woman I really like. SD generated several images of her using very similar prompts but she then disappeared from subsequent runs. I've tried using img2img but she never looks the same or even close.

    How can I generate a series of images of the same woman based on an image?

    FeliciaSerenityFeb 3, 2023

    You can drop a previously generated PNG into the "PNG Info" tab in the WebUI and send all the settings back to txt2img; then you can carefully add or remove prompts. I found that changing even the resolution or sampling steps can create wildly different results from the exact same prompts.

    MrkalashFeb 3, 2023

    @FeliciaSerenity Thank you, I'll try that.

    ELZEN6Feb 3, 2023

    train a textual inversion

    llmapsFeb 6, 2023

    As FeliciaSerenity stated, the slightest change to ANYTHING will make the image different; it's nuts. But the thought I had was to get the "seed" of the picture of the person you like, and keep everything as exact as possible: prompt, negative prompts, iterations, CFG, size of the image (output and input, as in img2img), etc.

    BobconbobFeb 23, 2023

    Get Automatic1111:

    1. Create an embedding with the face

    2. Do img2img and mask just the face on multiple models

    3. Redo the embedding with more files

    Then the embedding will draw the face

    PlasteredDragonFeb 4, 2023
    CivitAI

    Has anyone had any success getting any kind of sex poses other than penis-in-orifice type stuff?

    Can't seem to get any sort of cunnilingus, analingus, female masturbation, mutual masturbation, sixty nine, tribadism, or even licking or kissing nipples to happen.

    saftle
    Author
    Feb 4, 2023· 7 reactions

    Yeah this is what I'm working on for the next release. Cunnilingus, lesbian porn, etc, will be possible. Also better man/woman porn from afar will be possible and no longer just POV shots.

    ImpoadFeb 8, 2023

    @saftle will the next release continue to be based on 1.5? I still prefer 1.5 and haven't really used 2.x stuff, not to mention that 2.x is more resource intensive. Also any ETA for the next release? Thanks.

    saftle
    Author
    Feb 8, 2023· 1 reaction

    @themonkeysaur yep, it's still SD 1.5 based :). Hopefully in the next few weeks if all goes well!

    ImpoadFeb 8, 2023

    @saftle Awesome, thanks for the reply!

    hs244Feb 9, 2023· 1 reaction

    Yes, please improve lesbian and female masturbation! Often the hands are in the correct place but usually it's just a mess down there. Hopefully when the lesbian stuff is done it can be made good from afar too. Thanks for the great model!

    SEVUNXFeb 4, 2023
    CivitAI

    amazing work!! cheers

    image

    saftle
    Author
    Feb 4, 2023

    Thanks! Be sure to leave a review, that way your image can also be rendered!

    misije2790Feb 4, 2023· 4 reactions
    CivitAI

    Getting decent results with inpainting, but it seems to leave hard edges where the mask is drawn, whereas something like DALL-E takes the edges into account and seamlessly merges them into the image.

    apopppFeb 6, 2023
    CivitAI

    Do you know how to keep the same face and/or body for a new generation? Thanks

    saftle
    Author
    Feb 6, 2023· 3 reactions

    You'll have more luck if you use the DDIM or Heun sampler, keeping the same seed, and then adjusting the prompt.

    PlasteredDragonFeb 12, 2023

    As long as the poses and lighting are consistent you can use inpainting to move the same face on to multiple subjects. And I suppose you could train a hypernetwork from that to give a name to that face and reuse it.

    mitililn241Feb 6, 2023
    CivitAI

    Hello. When I connect, it complains about the yaml file.

    Loading weights [50043a7805] from D:\StDf\models\Stable-diffusion\uberRealisticPornMerge_urpmv12.ckpt
    Creating model from config: D:\StDf\models\Stable-diffusion\uberRealisticPornMerge_urpmv12.yaml
    LatentInpaintDiffusion: Running in eps-prediction mode
    DiffusionWrapper has 859.54 M params.
    changing setting sd_model_checkpoint to uberRealisticPornMerge_urpmv12.ckpt [50043a7805]: RuntimeError
    Traceback (most recent call last):
      File "D:\StDf\modules\shared.py", line 549, in set
        self.data_labels[key].onchange()
      File "D:\StDf\modules\call_queue.py", line 15, in f
        res = func(*args, **kwargs)
      File "D:\StDf\webui.py", line 120, in <lambda>
        shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
      File "D:\StDf\modules\sd_models.py", line 467, in reload_model_weights
        load_model(checkpoint_info, already_loaded_state_dict=state_dict, time_taken_to_load_state_dict=timer.records["load weights from disk"])
      File "D:\StDf\modules\sd_models.py", line 406, in load_model
        load_model_weights(sd_model, checkpoint_info, state_dict, timer)
      File "D:\StDf\modules\sd_models.py", line 247, in load_model_weights
        model.load_state_dict(state_dict, strict=False)
      File "D:\StDf\venv\lib\site-packages\torch\nn\modules\module.py", line 1671, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for LatentInpaintDiffusion:
        size mismatch for model.diffusion_model.input_blocks.0.0.weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 9, 3, 3]).
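The size mismatch in the traceback comes from pairing a regular 4-channel checkpoint with the inpainting config (or vice versa): inpainting UNets take 9 input channels (4 latent + 4 masked-image latent + 1 mask), while regular SD 1.x UNets take 4. A minimal sketch of how to tell the two apart, using stand-in objects for real tensors (which expose a `.shape` attribute):

```python
from types import SimpleNamespace

# Key of the UNet's first convolution in an SD 1.x state dict;
# its second dimension is the number of input channels.
FIRST_CONV = "model.diffusion_model.input_blocks.0.0.weight"

def is_inpainting_checkpoint(state_dict):
    """Return True for a 9-channel (inpainting) UNet, False for the
    regular 4-channel latent UNet."""
    return state_dict[FIRST_CONV].shape[1] == 9

# Stand-ins for real tensors, shaped like the traceback above:
regular = {FIRST_CONV: SimpleNamespace(shape=(320, 4, 3, 3))}
inpaint = {FIRST_CONV: SimpleNamespace(shape=(320, 9, 3, 3))}
```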

    saftle
    Author
    Feb 6, 2023

    You only need the yaml file for the inpainting model, and it needs to be named the same as the inpainting model. Right now it looks like it is named the same as the regular model.
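A sketch of the pairing rule: Automatic1111 picks up a .yaml config only when its name matches the model file's stem, so the config must be named after the inpainting model specifically (the filename below is illustrative, not the actual download name):

```python
from pathlib import Path

def config_name_for(model_file):
    """Return the .yaml filename Automatic1111 will look for next to
    a given model file: same stem, .yaml extension."""
    return Path(model_file).with_suffix(".yaml").name
```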

    mitililn241Feb 7, 2023

    @saftle Sorry friend, but I didn't quite get what I needed to do.

    The model is called - "uberRealisticPornMerge_urpmv12.ckpt".

    The Yaml is called - "uberRealisticPornMerge_urpmv12.yaml"

    I took the Yaml file from the link in the article (https://raw.githubusercontent.com/runwayml/stable-diffusion/main/configs/stable-diffusion/v1-inpainting-inference.yaml) and just renamed it as "uberRealisticPornMerge_urpmv12.yaml".

    What am I doing wrong?))

    Translated with www.DeepL.com/Translator (free version)

    saftle
    Author
    Feb 7, 2023

    If you are not using the inpainting model version, you can delete the yaml. It is only interfering, since it is only for the inpainting version.

    goodfunFeb 7, 2023
    CivitAI

    What process did you use to train this model?

    Can you make your training data (images and labels) available?

    Thank you :)

    saftle
    Author
    Feb 7, 2023

    This isn't a trained model; it's a straight-up classical merge. You can find the exact recipe in the version description to the right of every version, if you would like to reproduce it yourself :)

    goodfunFeb 7, 2023

    Thanks, that helps. Can you provide more information, though? Do you use the webui for the merges? How do you prune a model? Is it true that you can't merge with a pruned model?

    Do merges always work in the intuitive way, where the new model can generate images from both models and combine them in new ways, like doing things that only model 1 knew (such as scenery and positions) with actors from model 2, for example?

    Do you know how you can get a command line in the webui? It would be useful for repeated commands and automation.

    mitililn241Feb 7, 2023
    CivitAI

    Sorry friends, but I didn't quite get what I needed to do.

    The model is called - "uberRealisticPornMerge_urpmv12.ckpt".

    The Yaml is called - "uberRealisticPornMerge_urpmv12.yaml"

    I took the Yaml file from the link in the article (https://raw.githubusercontent.com/runwayml/stable-diffusion/main/configs/stable-diffusion/v1-inpainting-inference.yaml) and just renamed it as "uberRealisticPornMerge_urpmv12.yaml".

    What am I doing wrong?))

    saftle
    Author
    Feb 7, 2023· 1 reaction

    I already answered this in the other issue :P. If you are not using the inpainting version, delete the yaml file. It is interfering since it is not meant for the regular version.

    JeanDuortulFeb 7, 2023
    CivitAI

    How do I install and use the required inpainting model files, please?

    saftle
    Author
    Feb 7, 2023

    What UI are you using?

    165790Feb 8, 2023

    @saftle I'm using Automatic1111, can u please let me know how to install

    saftle
    Author
    Feb 8, 2023· 1 reaction

    @creepah This should set you up :) https://www.youtube.com/watch?v=3cvP7yJotUM

    MirandoPaCuencaFeb 7, 2023
    CivitAI

    Hi. I'm very new to SD. I'm using the NMKD SD UI. How can I install this model? Do I need the .ckpt file, or can I download the safetensors and copy it into the 'models' folder? Thanks, awesome work!!!

    saftle
    Author
    Feb 7, 2023· 1 reaction

    It's possible that NMKD still has problems with models that were merged using Automatic1111. I would try grabbing the Pruned PickleTensor version. If you click the arrow, you'll see all the files.

    MirandoPaCuencaFeb 7, 2023· 1 reaction

    @saftle Thanks for your counsel; I've finally installed CMD 1-click SDUI, downloaded the file version you recommended, and all is OK. Thanks, awesome work!

    CheesyWaferFeb 8, 2023· 2 reactions

    I'm using this in NMKD 1.9.1 (with an AMD GPU) and it works fine; I just downloaded and converted the safetensors file directly into the ONNX format. Great work, thanks!

    dummy7002Feb 9, 2023· 1 reaction

    I just converted the model in NMKD itself; works perfectly. It's under the development tools, upper right corner.

    EZorgFeb 19, 2023· 1 reaction

    Tried converting to ONNX but it generates an error

    uvencoFeb 23, 2023


    I have only one question: how do you do the fingers? I mean, do you generate until you get a good option, or do you somehow know how to make the fingers right? Maybe some kind of extension?

    TimmyTonyFeb 7, 2023
    CivitAI

    I'm interested in your "Jedi merge" patreon. But I have some questions, is it possible to discuss in private?

    saftle
    Author
    Feb 7, 2023

    Ah okay great, you can reach me on discord if you like: Saftle#5898

    goodfunFeb 7, 2023· 3 reactions
    CivitAI

    wouldn't this model benefit from being merged with the main stable diffusion model to get more variety in people and locations?

    saftle
    Author
    Feb 7, 2023

    It isn't a trained model, so it has loads of SD already in it :)

    fifacity233Feb 7, 2023

    I'd love to know the official answer to this question; I've been trying to use SD modifiers with minimal success. Great question.

    goodfunFeb 8, 2023

    @saftle What do you mean? You don't list SD as one of the models you merge. This model is good, but it keeps repeating faces; it seems to have a limited set of faces. It also keeps putting identical people in the same image, and often generates 2 identical-looking people when asking for only 1. Why does this happen? Is there a way to avoid this?

    saftle
    Author
    Feb 8, 2023

    @goodfun I don't list SD because every model has an SD base. Everything I merge has SD in it, so I'm mixing that into mine.

    A lot of models are focused on Asians, so you'll get a lot of Asian faces. Instead, I suggest specifying hair color, race, etc.; then it does a good job of providing tons of variation.

    goodfunFeb 8, 2023

    Thanks for the tips. So the models you merged are not standalone models trained from scratch somewhere up the ancestry tree? Are they all trained on SD as a base, or is it just common to merge SD in? A little more detail would help my understanding.

    saftle
    Author
    Feb 8, 2023

    @goodfun Yep, all of the models I merged in are most likely trained on an SD 1.4 or 1.5 base. There are exceptions however, like with at least one model I am now incorporating in the 2.x version.

    jimmyjazzjazzFeb 8, 2023· 4 reactions
    CivitAI

    It seems impossible to get smaller breasts with this model, regardless of prompts or negative prompts. Anyone have any advice?

    saftle
    Author
    Feb 8, 2023· 4 reactions

    You could try adding "big breasts" in the negative prompt. However, soon I may be able to help with this by merging in a model that is focused on breast sizes.

    MisterrorFeb 11, 2023

    prompt helped me: flat or flat breasts

    lhucklenFeb 13, 2023

    try [busty] in negative

    bohed7Feb 13, 2023

    add (((huge boobs))) to your negative prompt
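For context on the suggestions above: in Automatic1111's prompt syntax, each layer of parentheses multiplies a token's attention by 1.1, so (((huge boobs))) in the negative prompt de-emphasizes large breasts at roughly 1.33x strength. A toy sketch of that rule (plain nested parentheses only; it ignores the explicit (token:weight) form and brackets):

```python
def emphasis_weight(prompt_token):
    """Count the layers of surrounding parentheses and return the
    resulting attention multiplier (1.1 per layer), rounded to 3
    decimal places. Toy version: assumes the whole token is wrapped."""
    layers = 0
    while prompt_token.startswith("(") and prompt_token.endswith(")"):
        prompt_token = prompt_token[1:-1]
        layers += 1
    return round(1.1 ** layers, 3)
```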

    fractFeb 9, 2023
    CivitAI

    I just can't get the inpainting to work. I've added the config file in the same folder, but every time I try to load it I get "error".

    It's the only model I'm having this issue with. Any help, please?

    saftle
    Author
    Feb 9, 2023

    What UI are you using?

    fractFeb 11, 2023

    @saftle webui / automatic1111 thing

    saftle
    Author
    Feb 11, 2023

    @fract is the config the same name as the inpainting model? it should not be the same as the regular URPM model

    goodfunFeb 9, 2023
    CivitAI

    Getting several women with different faces and races in the same shot seems to be a challenge for the model. Most have the same faces. Do you have any tips for this?

    manclark128811797Feb 10, 2023

    same problem with dupes

    TigrabrightFeb 10, 2023

    in the negative prompt box add words such as: twins, same faces, same people, duplicates, gemini

    goodfunFeb 10, 2023

    Added negative prompts like that but had mixed success with them. They were still mostly similar. Are there specific negative keywords that work better than others? Does it work with a group of say 5-10 people? With 5 or more people sometimes there was some variation but not too much, lots of repeats still.

    robot1meFeb 10, 2023

    @goodfun As a suggestion, you can try the Unprompted extension for the Automatic1111 web UI to see if it helps. There was a recent addition where an image can be used to get varied prompts from CLIP. You could use this to get the most intriguing word combos for prompt descriptions, which can help alter the human appearances more. You can also try the word blending syntax from Automatic1111.

    goodfunFeb 10, 2023

    @robot1me Thanks for the suggestions. The word blending looks interesting, but it likely won't directly help with this issue, since specifying different unique words without word blending didn't really produce variance within one image. It's useful to know about, though.

    hs244Feb 10, 2023
    CivitAI

    Should I be getting an "Image CFG Scale" option when I load the pix2pix model in Auto1111? Like with the standard pix2pix model? I'm not seeing it.

    saftle
    Author
    Feb 10, 2023

    That is very strange. It works for me. I see the Image CFG option on both the pruned and unpruned models. Have you tried updating Automatic1111 and potentially redownloading the model?

    scruffynerfFeb 10, 2023

    make sure file is named with instructpix2pix name

    Ispy23Feb 10, 2023

    I've renamed the model and it isn't working for me either.

    ramnnvFeb 10, 2023

    Update your Stable Diffusion installation.

    hs244Feb 10, 2023

    As someone else stated, the addition of native pix2pix support in Auto1111 is pretty new. I've updated mine and now it works.

    hs244Feb 10, 2023· 1 reaction

    @saftle Thanks, I needed to update Auto1111, apparently pix2pix support was only added a few days ago.

    ZecondDec 22, 2023

    I'm not able to get it to work for some reason. I loaded up the example image in img2img and typed "give her a hat", and I get several stacked realistic heads with hats...

    abskingFeb 10, 2023
    CivitAI

    How are you guys generating these? I mean, are you generating them locally, or using an online service? I tried using Stadio, but when I tried to run the prompt it opened a new tab to a local address or something?

    The_WatcherSDFeb 10, 2023· 9 reactions

    locally using automatic1111 web ui with one hand. Alternatively you could use a colab notebook and just live with the shame.

    jtomriddleFeb 10, 2023· 3 reactions

    Google Notebook is the best option

    lastenemyFeb 10, 2023
    CivitAI

    Will you use the latest SD 2.1 in the next merge?

    saftle
    Author
    Feb 10, 2023

    Nope, it'll be SD 1.5 for now. There aren't enough porn-related models on SD 2.1 to merge with yet.

    lhucklenFeb 11, 2023
    CivitAI

    All I am getting is noise with the pix2pix model.

    saftle
    Author
    Feb 11, 2023

    I've heard a few people having this issue, and I have no idea what can cause it. Try the pruned model just in case; it is a separate download if you press the down arrow to the right of the download button.

    ZecondDec 22, 2023

    @saftle Did you ever figure it out? I am able to use the pruned and inpainting models without issue, but even using your example image and typing "give her a hat" just gives me noise.

    ZecondDec 22, 2023

    @saftle I think I got it. The default denoising strength is way too high; I changed it to 0.2 and got way better results, though still not as good as with the inpainting model.

    soloconFeb 12, 2023
    CivitAI

    I wanted to continue training the model, but DreamBooth and StableTuner are unable to use it. Standard 1.5, HassanBlend, etc. work fine. Any ideas what you have done with it so that it doesn't work for training =)? It's an awesome model, so it's a shame it can't be trained further anymore >__<

    PlasteredDragonFeb 12, 2023· 15 reactions
    CivitAI

    The latest version seems to produce more quality output although it still can't create coherent sex scenes, particularly any sort of lesbian activities.

    But with the right prompts it seems to be able to produce some wonderful output, even non-porny output if desired (although that can be challenging at times). "Here's the dental assistant you asked for with her tits out." "No, no wait, I never..." ;-)

    For negative prompts I have been using:

    (((nudity))), (((bare breasts))), (large breasts), child, childish, b&w, anime, photo, cartoon, painting, drawing, (worst quality, low quality), distorted, disfigured, deformed, (bad anatomy), extra limbs, extra fingers, missing limbs, missing fingers, extra bellybuttons, ugly, extra head, (deformed face, bad eyes)

    The model tends to make breasts bigger than they are in reality, (large breasts) on the negative prompt can be removed if the subject actually has large breasts.


    Obv. remove (nudity) and (bare breasts) if you want that in your output. I like ballerina output sometimes, and you may want to add ((tutu)) to the negative prompt if you just want a ballet dancer in a leotard without a tutu.

    Subjects I have had success with where XXX is the name or description of the subject:

    XXX as a [ballerina, manipuri dancer, bellydancer, exotic dancer, burlesque performer]
    XXX dressed as a [high school cheerleader, college cheerleader]
    XXX modelling [activewear, one-piece swimsuit, bikini, silk kimono, lingerie of various colors and styles]
    XXX as a [pin-up girl, cowgirl wearing a pretty summer dress, cherokee woman, flapper girl, bride, geisha, cybernetic sexbot]

    I've found that using the expression "modelling YYY" instead of "wearing YYY" gets you much better posed, professional-looking shots, and fewer "candid" shots.

    Pictures of sleeping subjects seem to work best if you emphasize ((asleep)) and ((viewed from above)).

    Model does well with pregnant subjects but may add extra navels.

    The AI has no concept of individuality -- you can create scenes with multiple clones of the subject (e.g. Jane Celeb kissing Jane Celeb). It does hairstyles well ... seems very capable of creating believable renditions of subjects with "pink pixie hair cut" for example.

    It is able to properly do body paint and body oil, and is very capable of rendering fantasy subjects like angels, elementals, vampires, even mermaids (although it will take a lot of tries to get the tail right.)

    Finally, if you are into tentacle porn, it actually tends to do that much better than real porn, probably because you can easily recognize a deformed person, but you can't really deform a mass of tentacles.

    That's pretty much everything I have experimented with. What kinds of prompts have other people been trying?

    saftle
    Author
    Feb 12, 2023· 2 reactions

    Very cool write up. Luckily lesbian and vaginal oral sex is coming soon!

    PlasteredDragonFeb 12, 2023

    @saftle Will cunnilingus and the like be based on SD 1.5?

    saftle
    Author
    Feb 12, 2023· 1 reaction

    @PlasteredDragon Yes it will. Once there are enough NSFW 2.x models out there, I'll think about making a new URPM on 2.x as well

    Meatstorm87Feb 17, 2023· 1 reaction

    I've had great success with "goth" "horns", "glasses," and "choker". Also "rainbow hair" or "pigtails" are great. I would like to see more body piercings, they are rare. There is still a lot to try.

    goodfunFeb 13, 2023· 4 reactions
    CivitAI

    What is the effect of combining networks with different weights? E.g., if you combine A and B with weight 50%, does that mean that everything from B will be equally as important/learned as everything from A?

    What if you combine A, B, C, D, E, F, and G at 50% each? Does that cause things from the earlier networks like A and B to be forgotten?

    How do you pick good weights when combining multiple models?

    msomerFeb 14, 2023

    Only the weights know that; we don't understand them. We just try some calculations and see the results.
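For what it's worth, the classical "weighted sum" merge is just per-parameter linear interpolation. A toy sketch with single-value "models" shows why chaining several 50% merges dilutes the earliest models (after two 50% merges, A's contribution is down to 25%):

```python
def weighted_sum(theta_a, theta_b, alpha):
    """Interpolate every parameter: merged = (1 - alpha) * A + alpha * B.
    Real state dicts hold tensors; plain floats stand in here."""
    return {k: (1 - alpha) * theta_a[k] + alpha * theta_b[k] for k in theta_a}

# Toy one-parameter "models": A carries 1.0, the others 0.0.
a, b, c = {"w": 1.0}, {"w": 0.0}, {"w": 0.0}
ab = weighted_sum(a, b, 0.5)    # A contributes 50%
abc = weighted_sum(ab, c, 0.5)  # A now contributes only 25%
```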

    goodfunFeb 14, 2023

    @msomer do you have a set of prompts you keep re-testing every time you do a merge and if it doesn't work as well as before, you redo the entire merge and try a different weight?

    HachipediaFeb 16, 2023

    @goodfun that’s the fun part.

    goodfunFeb 16, 2023

    @Hachipedia that also didn't answer the question :)

    msomerJul 17, 2023

    @goodfun no, I guess not

    MisterrorFeb 14, 2023· 11 reactions
    CivitAI

    Is there any way to create a pussy licking image?

    jasonyangFeb 15, 2023

    Just curious, what's your use case? I'm thinking of training my own.

    saftle
    Author
    Feb 15, 2023· 2 reactions

    This is currently possible in my WIP builds, but it's still not quite stable yet, which is why they're essentially Patreon-only bleeding-edge builds for testing. Once it's ready and stable, it'll absolutely be made public. You can see examples on the Discord server.

    It works for both M2F and F2F vaginal oral sex.

    @saftle when do you think it will be released to the public?

    saftle
    Author
    Mar 3, 2023

    @reelobamanotfaklol708 I wanted to release it, but there are a few glaring issues when doing non-oral sex. The person that made the model is retraining it currently, so that I can re-attempt to merge the new version later.

    @saftle will you add oral sex to this model or create a new one?

    saftle
    Author
    Mar 4, 2023

    @reelobamanotfaklol708 It'll be added to this one somehow, but only once the other model is trained. The training is done on base SD 1.5 and then will get merged into this one.

    jonpreston2023Feb 16, 2023· 4 reactions
    CivitAI

    Any chances we might have the ability to add tan lines?

    107363Feb 16, 2023
    CivitAI

    how can you make it generate with a self trained face. LORA?

    msomerMar 17, 2023

    Yes, a LoRA works fine. The best way is to keep the LoRA weight low when generating the image (between 0.2 and 0.4). Once you get the picture you want, upscale it by a factor of 2, then use inpainting on the face with the trained model or LoRA, this time with the weight adjusted to 0.6-1.0.

    Hope this helps
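    The low-then-high weight trick works because a LoRA is applied as a scaled low-rank update to the base weights, so the scale directly controls how strongly the trained face overrides the base model. A toy numpy sketch (sizes and names are illustrative, not real SD layer shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank = 8, 2                   # toy sizes; real SD layers are far larger
W = rng.normal(size=(d, d))      # a base-model weight matrix
B = rng.normal(size=(d, rank))   # LoRA "up" factor
A = rng.normal(size=(rank, d))   # LoRA "down" factor

def apply_lora(W, B, A, scale):
    # W' = W + scale * (B @ A): the LoRA adds a scaled low-rank update,
    # so raising the scale pushes the weights further from the base model.
    return W + scale * (B @ A)

light = apply_lora(W, B, A, 0.3)   # gentle nudge for the initial generation
strong = apply_lora(W, B, A, 1.0)  # dominant, e.g. when inpainting the face
print(np.linalg.norm(light - W) < np.linalg.norm(strong - W))  # True
```

    At 0.2-0.4 the base model still controls composition and lighting; at 0.6-1.0 during face inpainting, the trained identity wins in the small region being redrawn.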

    XclusivFeb 17, 2023
    CivitAI

    Where do I put the inpaint model file in the Stable Diffusion folder?

    saftle
    Author
    Feb 17, 2023

    If you're using Automatic1111, you put both the model and the config file in the same directory as the other models.

    Petter123Feb 18, 2023
    CivitAI

    Is it OK to just put the ckpt into stable-diffusion-webui\models\Stable-diffusion?

    saftle
    Author
    Feb 18, 2023

    Yup, or the safetensors version :)
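    For reference, a sketch of the default Automatic1111 layout (the directory names are A1111's defaults; the placeholder filenames are just examples, created here with `touch` for illustration):

```shell
# Recreate the default Automatic1111 model layout with placeholder files.
mkdir -p stable-diffusion-webui/models/Stable-diffusion

# .ckpt and .safetensors checkpoints both go in the same folder; an
# inpainting model's .yaml config sits next to it with the same basename.
touch stable-diffusion-webui/models/Stable-diffusion/uberRealisticPornMerge_urpmv12.safetensors
touch stable-diffusion-webui/models/Stable-diffusion/urpm-inpainting.ckpt \
      stable-diffusion-webui/models/Stable-diffusion/urpm-inpainting.yaml

ls stable-diffusion-webui/models/Stable-diffusion
```

    After restarting the webui (or hitting the refresh button next to the checkpoint dropdown), the new files show up in the model list.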

    a379896403908Feb 20, 2023
    CivitAI

    I got a problem loading this model uberRealisticPornMerge_urpmv12.safetensors :

    "Loading weights [381b9a7f51] from /home/zhc/softwares/stable-diffusion-webui/models/Stable-diffusion/uberRealisticPornMerge_urpmv12.safetensors

    changing setting sd_model_checkpoint to uberRealisticPornMerge_urpmv12.safetensors [381b9a7f51]: RuntimeError

    RuntimeError: shape '[1280, 1280, 3, 3]' is invalid for input of size 13651981"


    clevnumbFeb 22, 2023

    Re-download it? It's probably a corrupt model... maybe.
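    One way to check for a truncated download before re-downloading: compare the file's SHA-256 with the hash listed on the download page. The bracketed `[381b9a7f51]` in the log appears to be A1111's short hash, i.e. the first 10 hex characters of the SHA-256. A sketch, with a small placeholder file standing in for the real checkpoint:

```shell
# A "shape ... is invalid for input of size N" error on load usually means
# the file is truncated or corrupt. Verify the hash against the site.
printf 'placeholder bytes standing in for a checkpoint' > model.safetensors

# Full SHA-256, then the 10-char prefix A1111 displays in brackets.
sha256sum model.safetensors
sha256sum model.safetensors | cut -c1-10
```

    If the prefix doesn't match the hash shown next to the file on the model page, the download is bad and re-downloading (as worked here) is the fix.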

    a379896403908Feb 23, 2023

    @clevnumb I downloaded another model and it worked

    rgerherehhererhFeb 21, 2023
    CivitAI

    Can someone give me good settings? I use the inpaint model, but it just makes a lot of breasts and no naked body; also the breasts are a bit weird sometimes.

    Simon9793Feb 23, 2023

    just need (((nude)))

    patrikFeb 21, 2023· 7 reactions
    CivitAI

    why doesn't this model appear on the main civit.ai page?

    CyborgPFeb 21, 2023· 1 reaction

    It's not even showing up in liked models. Are any others not showing as well, or just this one? Just found one more: Clarity is gone from view unless you have a direct link.

    lukinhasssssFeb 21, 2023· 1 reaction

    maybe it's because it's porn, and because it can be used with famous faces!

    CyborgPFeb 21, 2023· 1 reaction

    Found the answer, you have to go to your filters and select Adult instead of Everything.

    ReinaaaaaareibFeb 21, 2023· 1 reaction

    I also want to know; I can't even load it in the Stable Diffusion Colab.

    patrikFeb 22, 2023

    @CyborgP thanks mate, it works

    lukinhasssssFeb 21, 2023· 8 reactions
    CivitAI

    is there any tutorial to generate my own model?

    SdragonFeb 22, 2023
    CivitAI

    Are my settings wrong, or does the model work really badly with ControlNet? The face and the ends of the limbs fuse and lose almost all detail (compared with direct txt2img).

    pepareh644291Feb 22, 2023
    CivitAI

    When 1.3?

    SEVUNXFeb 25, 2023
    CivitAI

    hi saftle, is your model available at huggingface?

    saftle
    Author
    Feb 25, 2023· 2 reactions

    It isn't currently, but I am thinking of making a diffusers version of it after I release the next update.

    saftle
    Author
    Feb 27, 2023· 2 reactions

    It is now on huggingface. Both a diffusers and onnx version :)

    https://huggingface.co/saftle/urpm
    https://huggingface.co/saftle/urpm_onnx

    SEVUNXFeb 27, 2023· 1 reaction

    @saftle woahhh!! thanks Saftle 🥂

    crazyblok271Feb 25, 2023
    CivitAI

    Was able to get some lesbian scenes out of it, and full-body shots, with minimal prompt-making experience. Very nice model and very fun! :)

    crazyblok271Feb 25, 2023

    also i dunno how to attach images, i would show but i cant lol

    COBatmanFeb 25, 2023

    If you edit your comment, you have the ability to attach photos.

    CreedlennFeb 25, 2023
    CivitAI

    Are there ways to use this with ControlNet, i.e. are there porn poses to match this model?

    turnipchemical8583Feb 28, 2023

    Should be fine, controlnet works with any model....

    SD_AI_2025Feb 28, 2023

    Are there ways to NOT use ControlNet with any model? Nope.

    Why think complicated when you can think simple?

    ControlNet is an extra layer to help with the composition of images. No limitations whatsoever.

    maidwakaFeb 25, 2023· 2 reactions
    CivitAI

    Loving this! Thanks so much for making it.

    Does somebody have advice on how to add writing to a clothing item? Say, add text to a choker or shirt?

    crazyblok271Feb 26, 2023· 1 reaction

    You can add nsfw or nudity to the negative prompt for no nudity. If you get lucky you can sometimes get a shot where, say, a woman has a shirt on but no pants; it's mostly luck-based though (at least for me).

    daydreamertmmFeb 26, 2023· 1 reaction

    If you use Controlnet, you can add quality writing by combining a few of the models. You could also try just adding writing in your favorite image editor.

    daydreamertmmFeb 26, 2023· 1 reaction

    @crazyblok271 There's actually a textual inversion that specializes in shirts and no pants. I think the tag/name is shirt_p. A quick search here should find it. 

    Checkpoint
    SD 1.5

    Details

    Downloads
    83,437
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/12/2023
    Updated
    5/16/2026
    Deleted
    -

    Files