
    Latest | 🌐 Main | ⚡️ Lightning/Hyper | 🌀 Alt version | 🖌 Inpaint | 📦 Old

    🔸All Versions include VAE

    REFERRAL CODE: DMD9XGMN

    📩 Vote for the version you prefer in the generator

    v2.3 ULTRA is a distinct variant of the standard v2.3.
    While it technically improves overall output, my intention is to treat this version as experimental, exploring potential directions for future updates.

    This release brings more natural and balanced lighting (and darkness), enhanced skin detail, and increased realism.

    If you're using this model for Furry or Fur-based prompts, make sure to use the "Furry" trigger, or pair it with my Fur Enhancer LoRA for best results.

    I have written an article that explains and provides some useful resources to help you achieve great generation results. You can check it out for detailed insights and recommendations.

    🔁 Samplers

    • DPM++ 2M SDE Karras

    • DPM++ 2S a Karras

    • DPM++ SDE Karras

    • DPM++ 2M SDE Exponential

    • DPM2 a

    • DPM++ 2S a

    • DPM++ 3M *

    • Euler A

    ⚙️ Generation Settings

    • Steps: 30 or more

    • CFG Scale: 6–7

    • Clip Skip: 2

    • Resolution: Greater than 1024px
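As a sketch, the settings above could be expressed as an AUTOMATIC1111 `/sdapi/v1/txt2img` API payload for local use (field names follow the public A1111 web-UI API; the prompt text is a placeholder, not from this model card):

```python
# Hypothetical sketch: the recommended generation settings as an
# AUTOMATIC1111 txt2img payload. Values mirror the list above.
txt2img_payload = {
    "prompt": "score_9, score_8_up, score_7_up, BREAK, 1girl, photo",  # placeholder
    "negative_prompt": "score_4, score_5, score_6",
    "sampler_name": "DPM++ 2M SDE",  # one of the recommended samplers
    "scheduler": "Karras",
    "steps": 30,                     # 30 or more
    "cfg_scale": 6.5,                # CFG 6-7
    "width": 1024,
    "height": 1024,                  # keep resolution at 1024px or above
    "override_settings": {"CLIP_stop_at_last_layers": 2},  # Clip Skip 2
}
```

The payload would then be POSTed to a running A1111 instance; adjust the prompt and sampler to taste.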

    🗒️ Notes

    • Use Danbooru tags

    • Keep individual prompt weights ≤ 1.5

    • Use "female/male" instead of "woman/man" for better tagging compatibility

    🖋️ Prompt Style

    • Positive Prompt Tags: score_9, score_8_up, score_7_up, BREAK

    • Negative Prompt Tags: score_4, score_5, score_6
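As an illustration, the score-tag convention above can be wrapped in a small helper (the function name and the example subject tags are hypothetical, not part of the model card):

```python
def build_prompts(subject_tags):
    """Prepend the recommended Pony score tags to Danbooru-style subject tags."""
    score_pos = ["score_9", "score_8_up", "score_7_up", "BREAK"]
    score_neg = ["score_4", "score_5", "score_6"]
    positive = ", ".join(score_pos + list(subject_tags))
    negative = ", ".join(score_neg)
    return positive, negative

pos, neg = build_prompts(["1girl", "female", "outdoors"])
# pos -> "score_9, score_8_up, score_7_up, BREAK, 1girl, female, outdoors"
# neg -> "score_4, score_5, score_6"
```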

    ⚠️ Avoid DPM++ 2M Karras (Not recommended for generation)

    Recommended Samplers

    • Euler A

    • DPM2 A (Best for detail)

    Read the following article for tips and my training preset

    ⚡️ Buzz for the Best Images ⚡️

    Description

    🚨 This is the release of an Inpainting version; the goal is to gather feedback for future releases.

    ℹ️ Recommended Settings:

    • Samplers:

      • DDIM

      • DPM++ 2M SDE Karras

      • DPM++ SDE Karras

      • DPM++ 3M SDE

    • Steps: 30 or more

    • Mask Blur: 6

    • Inpaint area: Whole picture

    • CFG: 7 or more

    • Denoise: 0.77–0.79

    • Masked content: Original

    I recommend a batch size of at least 2
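Put together, these inpaint settings correspond roughly to the following AUTOMATIC1111 img2img fields (a sketch; in the A1111 API, `inpainting_fill=1` means "original" masked content and `inpaint_full_res=False` means "whole picture"):

```python
# Hypothetical sketch of the recommended inpaint settings as an
# AUTOMATIC1111 img2img/inpaint payload fragment.
inpaint_payload = {
    "sampler_name": "DDIM",        # or one of the SDE samplers above
    "steps": 30,                   # 30 or more
    "cfg_scale": 7,                # 7 or more
    "denoising_strength": 0.78,    # 0.77-0.79
    "mask_blur": 6,
    "inpaint_full_res": False,     # inpaint area: whole picture
    "inpainting_fill": 1,          # masked content: original
    "batch_size": 2,               # at least 2 recommended
}
```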

    ALL FEEDBACK IS WELCOME, PLEASE SPECIFY THE VERSION OF THE INPAINT IN THE COMMENT (Inpaint_1.0 for example). THANKS :)

    FAQ

    Comments (367)


    Dev_NomadJul 17, 2024· 3 reactions
    CivitAI

    Thanks! A great model with great features! Thanks for the work!

    BussyEater556Jul 17, 2024· 2 reactions
    CivitAI

    BE556 Approved

    BigSad11Jul 17, 2024· 2 reactions
    CivitAI

    I've tried to like this model considering it's so highly rated but, every photo seems grainy. Film-grain. I understand that adds to the realism but it's overwhelming. It's like watching a low budget cinematic film. Is it just me? I notice most of the photos people upload have it too unless they use an embedding. Is that just the way this model is or am I missing a keyword? Any help to make it crisp/sharp realism over film grain realism would be helpful. Thanks!

    DD13Jul 17, 2024· 1 reaction

    Are you using the DPM++ 2M Karras sampler by any chance? Pony Realism is very picky and won't work with most samplers. On CivitAI you can use Euler A (cheaper, but adds some blur and less detail) or DPM2 A (more expensive, but better details). Locally you should use one of the SDE samplers recommended in the model description.

    ZyloO
    Author
    Jul 17, 2024

    Read the description.

    OmikonzJul 17, 2024· 1 reaction

    Also put 'blurry, blurry face' in negatives.

    BigSad11Jul 17, 2024

    @ZyloO I did ZyloO. I mean no disrespect. I just can't get the results I'm seeking. I'll try again. The only thing I didn't understand was [Use Danbooru tags]. No idea what Danbooru tags are..

    PolraudioJul 17, 2024

    @BigSad11 Probably means use tags that are common on the website Danbooru

    BigSad11Jul 17, 2024

    @Polraudio Thanks! Didn't know about that website.

    ZyloO
    Author
    Jul 17, 2024· 4 reactions

    @BigSad11 I didn't mean to sound rude, I was on my phone and wanted to respond quickly.

    If you are using the site generator the only samplers you can use are Euler A and DPM2 A.

    The grain effect is mainly due to using a sampler that is not compatible, whether you are in local or using the site generator, copy some prompt from the Showcase images and try to see if you get something, I don't know if I can help you more, gl

    BigSad11Jul 19, 2024

    Thanks for all the replies! I figured it out. It was the sampler and my prompting. I was still getting grainy on DPM++ samplers when I wrote this which is why I was confused. But, after altering my negative prompt, it went away. Errors and artifacts in negative prompt seem to do the trick.

    semiho551541Jul 17, 2024· 3 reactions
    CivitAI

    Woww perfect

    mwood311Jul 18, 2024· 3 reactions
    CivitAI

    Amazing model. I use it daily. Thank you for sharing this!

    Tammie590Jul 18, 2024· 3 reactions
    CivitAI

    Perfect model for realism^^

    cireng2002Jul 18, 2024· 3 reactions
    CivitAI

    Nice

    milpredoJul 18, 2024· 1 reaction
    CivitAI

    I like this model, but it lacks diversity: it can't properly generate Asian or Southeast Asian people, and it ends up generating white skin if I add 'white shirt', despite having ((brown skin, south east asian)) in my prompt.
    Edit: looks like the problem was just that my prompt is too long; it confuses the model.

    milpredoJul 18, 2024

    this model sometimes looks like a painting at smaller resolutions, and even at higher ones

    ZyloO
    Author
    Jul 18, 2024· 1 reaction

    @milpredo Did you read the description?
    Also, prompting "white shirt" does not generate white skin, you must have something wrong.
    https://civitai.com/posts/4553225

    milpredoJul 19, 2024

    @ZyloO alright ima test it out again, I have tons of prompts, maybe that's why it's acting weird with "white shirt" and somehow confusing it with white skin.

    memo45Jul 19, 2024· 2 reactions
    CivitAI

    I downloaded the Inpainting 2.1 model. I am using Easy Diffusion 3.09 as the program. There is no pony folder in the Models section. When I add one myself and insert the model into it, the model does not appear in the program. There is a Stable Diffusion folder, and when I put it in there it says something like insufficient RAM. I have 32 GB of RAM and an RTX 3070, which I think should be enough, but I think I put the model in the wrong place. Where should I put it?

    michifuzoteJul 19, 2024· 2 reactions
    CivitAI

    This is the best model I tried for XL, but what would be its equivalent version for 1.5?

    KokunutmanJul 19, 2024
    CivitAI

    Fantastic model... though a little difficult to get some dark skin/fur out of it. :)

    And what's so special about these "Danbooru tags" being mentioned? I looked it up, and it just seems to be a huge collection of tags that logically come to mind when prompting an image - like "bat ears" if you want "bat ears". :p

    ZyloO
    Author
    Jul 19, 2024

    Pony is trained on those tags, so you will get better results using them

    TheCALJul 21, 2024

    Danbooru tags, I believe, are just the tags that the r34 image hosting sites go by; we should be using those in our prompts.

    Grayson1010Jul 20, 2024
    CivitAI

    really solid and consistent

    kkkub313Jul 20, 2024
    CivitAI

    nice

    mrborneo1919Jul 20, 2024
    CivitAI

    Awesome

    jpgranizoJul 21, 2024
    CivitAI

    Does the Lightning 8S model work in ComfyUI? In the "about" section there is a note that it only works in A1111 / Forge.

    rfoxrdj4u6dp9eua0002128Jul 21, 2024· 1 reaction
    CivitAI

    can it be used for inpainting as well?

    wetsocksJul 21, 2024
    CivitAI

    I'm doing something wrong, I even copied prompts from the samples on this page, but my results end up blue and severely grainy. The preview tab while in progress shows them being fine up until they finish. What am I missing? It happens on every pony model.

    wetsocksJul 21, 2024· 1 reaction

    It was the wrong VAE file. Swapped out for sdxl and it works.

    ZyloO
    Author
    Jul 21, 2024· 2 reactions

    @wetsocks The VAE is baked in, so you can leave the option in automatic

    sonicfanlover459Jul 21, 2024
    CivitAI

    This is too good; it's a shame the filter throws a fit on most of my posts using this.

    karavarJul 22, 2024· 1 reaction
    CivitAI

    Very good

    PinkywDreamsJul 22, 2024
    CivitAI

    which version should i download for Comfyui sdxl

    DecoySandroJul 22, 2024· 1 reaction

    Wondering the same thing. Version 2.1 won't show up in my checkpoint loader :(

    orisioJul 22, 2024
    CivitAI

    What would be the best parameters for ultimate SD upscale using this checkpoint?

    strangemusicJul 22, 2024
    CivitAI

    Often when I use this model it comes up with so many artifacts, even if I only change one word or color.

    ZyloO
    Author
    Jul 22, 2024· 1 reaction

    Are you using the correct sampler?

    Read the description

    strangemusicJul 23, 2024· 1 reaction

    @ZyloO yes

    ZyloO
    Author
    Jul 23, 2024· 1 reaction

    @strangemusic So if you are using a LoRA it may be because of its strength.

    TheCALJul 25, 2024

    Single prompt length and prompt strength: not too long or too high.

    ForgeMaster550Jul 23, 2024
    CivitAI

    Had some issues but overall good

    razr112Jul 23, 2024
    CivitAI

    How to prevent bad quality/artifacts when using the inpainting model in Fooocus:

    In the "Debug Tools" tab set "Forced Overwrite of Refiner Switch Step" to a value of 2

    Do this if you're not even using a refiner (I don't use one). This is in addition to changing the other settings as described in the "About this model" section for the inpainting model.

    michifuzoteJul 25, 2024

    Inpaint works and matches with bodies, but it gives me blurry results in Fooocus, I don't know why :C Using Forced Overwrite of Refiner Switch Step with a value of 2 fixes the blurring issue, but then the painted parts don't match the body and it gives me bad results.

    razr112Jul 26, 2024· 1 reaction

    @michifuzote Yeah, that's definitely an issue. Try turning the "Inpaint Denoising Strength" to between 0.75 to 0.95. Leaving it at 1.0 sometimes works for me, but a lot of the time it causes misalignment. Also, try disabling the Inpaint Engine in the "Inpaint" tab (personally I leave it enabled, but it may help depending on the image).

    The other settings I use are: Guidance Scale: 7.5 // Image Sharpness: 2 // Sampler: dpmpp_2m_sde_gpu // Scheduler: karras // Sampling Steps: 32-40 // Loras: None // Refiners: None // Styles: None // Inpaint Respective Field: 0.618 (increasing this setting could also help misalignment)

    I get really good results with these settings but YMMV. If these settings don't work try playing around with the other settings. Eventually you'll probably find the ones that work for you.

    mrborneo1919Jul 24, 2024
    CivitAI

    Awsome

    barinovrs433Jul 24, 2024
    CivitAI

    That's cool. Tell me, is it important to use clip skip 2? I just didn't notice the difference between 1 and 2. And, as it seemed to me, the images turn out to be quite blurry, regardless of the resolution. How can you make them sharper?

    imanmasihinezhad950Jul 24, 2024
    CivitAI

    The checkpoint is perfect, but I have some problems with IPAdapter and ControlNet. Is this a problem with the checkpoint?

    qq2537950081135Jul 25, 2024· 4 reactions
    CivitAI

    👍👍👍

    nsfwpersonalaiJul 25, 2024
    CivitAI

    I have a problem with this model when using it in qDiffusion because I get this.

    "Error while Encoding.

    stack expects each tensor to be equal size, but got [1280] at entry 0 and [768] at entry 1 (clip.py:71)"

    Any solution?

    milpredoJul 26, 2024· 2 reactions
    CivitAI

    add this to your prompt: umb
    makes a great art style that I've seen somewhere I don't remember
    Example:
    Image posted by milpredo (civitai.com)

    ja0390335Aug 1, 2024

    this looks fantastic! any other fun cartoony tags you found work well?

    HellBoundnoJul 26, 2024· 3 reactions
    CivitAI

    It's a great model, but it still has potential :) It looks like all the same faces. I have tried a number of models, but mostly use my own. And (weight) in Pony model prompts, I feel, makes things more 3D. Always try not to have anything in the negative prompt; you will get better and more vivid pictures then. And things that describe face/skin often work against their purpose, as the models actually know how good the skin should look from the outset.

    sonicasino237Jul 26, 2024· 2 reactions
    CivitAI

    When using this with inpaint + controlnet openpose, it gives me
    "TypeError: 'NoneType' object is not iterable"

    unrealrealismJul 27, 2024· 1 reaction
    CivitAI

    Very good starting point with lots of potential

    ArcaneBlokeJul 27, 2024
    CivitAI

    How do you use this in Fooocus? Do we need to directly upload the checkpoint, or do we need to configure the VAE as well?

    AnandaJul 29, 2024· 1 reaction

    Easy. Just copy the checkpoint to Fooocus/models/checkpoints and you are good to go. Use default or ponyv6 and uncheck all styles. Change the model to PonyRealism. Uncheck LoRAs. With CFG 7 I get the best results. Don't use FooocusV2-Style; it is AI based and does not work well with Pony models.

    LoimuJul 28, 2024· 1 reaction
    CivitAI

    It's good

    VariViJul 28, 2024· 5 reactions
    CivitAI

    Having a hard time getting more unique faces/different facial features. Other than that, it’s great

    3992255Jul 28, 2024

    Same here. Pretty much the same face always, only slight variations in age. The only way I've been able to figure out to get different faces is using character loras. Doesn't seem to care about any kind of descriptions at all.

    AnandaJul 29, 2024· 2 reactions

    It is a Pony model. You have to describe what you want. I get 1000+ different faces without any lora. If you do not vary your prompt, the faces will be similar.

    5018423Aug 2, 2024

    @Ananda can you please share some advice on how to get different faces? I am very new and having the same problem as the other commenters, and have tried many different things that I thought should work, such as ethnicity, nationality, age, describing specific facial features..... but I am always getting small variations of the same face

    AnandaAug 4, 2024

    @johnsonshardware I use Fooocus local on my PC. This has many styles to choose from. Each one gives me different faces. Also ethnicity, nationality, young, old, teen, mature, aged, hair style, hair color, body, species, country, dress, clothing, punk, hippie, asian, european, african, anime, kawaii, fantasy, futuristic, tribal, native, modern, fairy tale, manga, elf, adorable, sweet, ugly, all kinds of danbooru tags, character tags, etc etc. You have to experiment. Pony models (if based on Pony V6) are very powerful. If you know the tags, you are unlimited. Prompting is different from SDXL or 1.5 models. Avoid complex negative prompts. Keep them as simple and short as you can. They can produce conflicts and ruin your variations.

    AphaitasJul 28, 2024
    CivitAI

    I'm confused. Is this an XL model? Size says: Yes?
    Is there any Pony SD 1.5 model available? Search is currently broken.

    AnandaJul 29, 2024· 1 reaction

    It is a Pony model. Pony models are based on XL. There is no Pony SD 1.5 model.

    ThegayfootjobloverJul 29, 2024· 1 reaction
    CivitAI

    Very nice

    HieheiheiJul 29, 2024
    CivitAI

    Can someone please help me with ears? Ears are ruining more photos than hands. I've tried neg prompt (ears, earrings, ear jewelry:1.35) (exposed ears:1.4). Still the damn ears (which look like shit, why no adetailer for ears?). Any tips would be most welcome.

    AnandaJul 30, 2024

    I never had a problem with ears. I leave them untouched and they are looking good. I mainly use 1024x1024 and no adetailer. If I get bad faces/eyes/ears due to far away characters, I use at the end of the prompt "close face" which fixes it. If you use "ear" or anything related in the neg. prompt, you create a conflict, and the picture will look worse in most cases. Long neg. prompts tend to ruin every picture, so try to avoid them and keep them as short as possible.

    sklimaaJul 31, 2024· 1 reaction
    CivitAI

    All my images with this model turn out darker/less saturated, almost like the images in 1.5 would when you had the wrong VAE. Anyone have a fix for this?

    Fijas2Aug 6, 2024

    I've noticed this issue as well, but only when using iterative upscaling from the ImpactPack in ComfyUI. This happens both with the image and the latent versions of the upscaler. I don't think it's an issue with this model since it doesn't happen when I chain individual KSamplers in the same configuration with the same number of steps.

    ExayleAug 1, 2024· 2 reactions
    CivitAI

    Question about Fooocus : Did anyone manage to use the "FaceSwap" image prompt feature with this?
    I get really great results when I don't put any image prompts, but if I do add some for FaceSwap, suddenly the images turn into a cartoony style instead of realistic, even if the model on my faceswap picture is realistic.
    It really changes the style of the whole picture, from realistic to cartoon.

    TheCALAug 1, 2024· 2 reactions
    CivitAI

    For Pony Realism, which is the best "on paper" sampler to use from the list you provided?

    ZyloO
    Author
    Aug 1, 2024· 1 reaction

    If you are using the site generator then yes, DPM2 a. If you are on local DPM++ SDE, DPM++ 2M SDE and DPM++ 3M SDE are the best

    TheCALAug 1, 2024

    @ZyloO gotcha in Karras as always?

    ZyloO
    Author
    Aug 1, 2024

    @TheCAL Both are good options, I mostly use Karras

    ptndnAug 9, 2024

    I had some trouble outside of euler a.
    Here is an example I saw from someone else with the same texturing all over.
    https://civitai.com/images/23178383 euler a
    https://civitai.com/images/23178274 DPM++ 2M Karras

    kmikkelsenAug 2, 2024
    CivitAI

    Amazing work!

    shadow0Aug 4, 2024· 1 reaction
    CivitAI

    Still the best model for me right now.

    dioxidinAug 4, 2024· 1 reaction
    CivitAI

    Hello! I made a comparison of Pony models. 1 test prompt so far. More ongoing.

    https://civitai.com/articles/6491

    Comments / suggestions are welcome.

    RomloAug 6, 2024
    CivitAI

    Hi, thanks a lot for the model, it is great. Is the API for this model protected? I can't seem to load it into Google Colab, so I just wanted to ask. Thanks again.

    GreedyDoyAug 10, 2024

    I've been having the same issue. You could try uploading the model to your Google Drive and linking to that in Colab.

    RomloAug 11, 2024

    @GreedyDoy I tried it, but it is still giving me an error. Can you point me in the right direction if you have done it before?

    LucifieAug 6, 2024
    CivitAI

    Playing around with the inpainting v1 model. It seems to work, but you need way more generations than with 1.5 inpainting. The transitions are sometimes off, like the ones you see when you inpaint with a non-inpaint model. How did you create this model? Is it a mix with the SDXL 0.1 inpaint model?

    ZyloO
    Author
    Aug 7, 2024

    Yes

    smallmenenjoyerAug 7, 2024
    CivitAI

    A wonderous and illustrious model for bringing my small men to life! Thank you very much! Hoho! Have a small men day!

    shadow0Aug 7, 2024
    CivitAI

    @ZyloO I know you are satisfied with the model, but for real, if you work on merging some more night/dark scenes and elements into it, it will remain at the top of the realistic Pony models for quite a while... no other model is close to yours, and night scenes and eye consistency are the only things your model lacks.

    TheCALAug 8, 2024· 1 reaction

    If we're lucky he'll do a remake for Pony 7. Here's hoping.

    ZyloO
    Author
    Aug 8, 2024

    You can get those results using LoRAs like this. So, making a new version just for this seems a bit unnecessary to me.

    NSFW Example

    shadow0Aug 8, 2024

    @ZyloO Very interesting. Maybe that's something that could be better communicated on the model's page? Because to me Vixon was just making a bunch of anime styles which don't interest me, so I would never have thought to even look for and then mix something from that page with a realism model.

    ZyloO
    Author
    Aug 8, 2024

    @shadow0 I thought about creating a guide with recommendations, examples and suggested resources, but most users don't even read the description of the model, so idk

    shadow0Aug 8, 2024· 1 reaction

    @ZyloO For sure most don't, but people who want to get in-depth and really use the model for more involved things (I am currently using it to do a rough 3D storyboard because Pony Realism actually has the ability to convey facial expressions and body poses) are probably the 1% that will come up with the most interesting gens and also the most likely to need a guide. Anyway... something for you to think about. But this is not criticisms or anything, so keep up the amazing work, it's appreciated.

    TheCALAug 8, 2024
    CivitAI

    What's the verdict on using booru quality tags? Any effectiveness? I worry that since they're geared towards art, they may move our images away from realism (at least texturally). Thank you.

    MarissaAug 8, 2024· 1 reaction
    CivitAI

    With the main version I get an error “CUDA out of memory”. This doesn’t happen with other checkpoints with exactly the same settings. Would you know a way around?

    ZoophilianAug 12, 2024· 1 reaction

    I used to get this randomly after a gaming session when trying to render a batch of images that was too large. For example, I can do roughly 20 images or so (across a few batches) without errors, but if I tell it to create 30 or so images, the CUDA error will pop up sometime into the process. Also, telling it to generate a different seed for every image in a batch can increase this greatly. My guess is you get this when the buffer for your VRAM is full.

    MarissaAug 17, 2024

    @Zoophilian Thank you very much. I’ll follow your advice and reduce number of images. Would it mean that I should restart PC before rendering?

    LucifieAug 9, 2024
    CivitAI

    When I do an inpaint comparison of "ponyRealism_v21VAE" and "ponyRealism_v21Inpaint10VAE" I get the same results. So the extra inpaint model does not make any difference or just does not work.

    ZyloO
    Author
    Aug 9, 2024

    The new inpaint version is merged with SDXL inpaint and Pony V6; it should be a little better and easier to work with.

    LucifieAug 9, 2024

    @ZyloO Where I can find it?

    ZyloO
    Author
    Aug 9, 2024

    @Lucifie Here. I did not create a new version, the file and the name was updated.
    Let me know your feedback for next versions.

    LucifieAug 10, 2024

    Now it makes slightly different images but still I don't see a reason why to use it. It's almost 99% the same as inpainting with the 'ponyRealism_v21VAE' model. So either 'sd_xl_base_1.0_inpainting_0.1.safetensors' is not working with pony or the merge failed.

    ZyloO
    Author
    Aug 10, 2024

    @Lucifie Yeah, that's probably why Pony won't have an official version of inpaint. I don't know if I can do anything else.

    txtswordAug 9, 2024· 1 reaction
    CivitAI

    This model rules

    EncartaAug 10, 2024
    CivitAI

    This is amazing, but... despite using danbooru tags and reading most of the pages about different tags on the danbooru site, I can't get it to work the way I want with certain things.

    If I put 'deep penetration' (which I was using prior to confirming on danbooru that's the official term) it barely makes a difference - even upping strength to 1.5, some positions seem to have deeper penetration anyway, but using the prompt whether with 'missionary, 'piledriver', 'suspended congress' etc. the cock is still about 50-75% outside the pussy.

    Also, a lot of times, whether I prompt for 'big cock', 'large cock', or 'huge cock', it doesn't seem to make a difference and seems more dependent on the position of the characters involved, e.g. it may have a huge cock for a pov blowjob, but then convert to an average cock during pov missionary.

    Also on many occasions, if I put 'closed mouth' the character closes their eyes instead. So I use 'shut mouth' which seems to work, but I'm wondering if there's been some issue in the training that's confused the eyes and mouth?

    I'm not expecting these things to be fixed, but I'm more wondering if anyone has some tips for prompts or tricks to get the desired results consistently. Thanks.

    ZyloO
    Author
    Aug 10, 2024· 3 reactions

    1) After some tests on Pony V6, it seems like "deep penetration" wasn't trained correctly or isn't recognized, so it might be an issue with the base model.

    2) Based on my tests, penis size control works, but some results can be very similar
    3) Check that you are not using the danbooru tag "mouth closed" and use "closed mouth" instead; some danbooru tags may be wrongly trained on the base model. Example

    Hope this helps

    EncartaAug 10, 2024

    Thanks for a quick and thorough reply. I'm relatively new to all this (been learning about AI generative images, LORAs, fooocus then A1111, etc. for about 9 months) and was only recently learning about textual inversions and embeddings. And apparently one of them is a small file that teaches a concept to an existing model (if I'm remembering correctly). Would a 'deep penetration' embedding or textual inversion help with this issue?

    What I meant about cock size is that in certain poses it will change: the pov blowjob example you gave works, but when changing to a penetrative position it's hit and miss, and it usually shrinks compared to what it was before, even if I prompt for 'huge cock' when previously 'big' was suitable during the bj. I just wonder whether it might scale based off the size of the recipient, or whether it has to do with the 'source' prompt. E.g. previously I had been using 'source_cartoon', as that's what another user whose prompts I based my images on used, but then I removed it more recently to try to get a more realistic appearance, and now I'm wondering whether it's trying to keep things more realistically proportioned (I'll have to experiment).

    I'm in the middle of reading through your Pony Realism Compendium, but I've had other things needing my attention, so sorry if you've mentioned all this in there.

    ZyloO
    Author
    Aug 10, 2024· 1 reaction

    @Encarta Personally, I haven't used embeddings in a long time (aside from some for negative prompts). Usually, I solve these kinds of issues with LoRAs, even if it’s with a low strength. So, I would suggest looking for an existing LoRA or training one. However, an embedding should also be helpful.

    The size of the penis can indeed depend on the perspective of the scene, but in general, I’ve found that it usually respects the prompt. Regarding the source_cartoon, it can affect body proportions. If you're aiming for realism, I recommend not using it.

    As before, you might need a penis size slider LoRA or something similar if you need precise control.

    I added that compendium part after seeing your comment, as it could be useful to others, so thanks for that :)

    EncartaAug 11, 2024

    @ZyloO oh crap! About the 'mouth closed' thing - I just checked the danbooru site and it doesn't have an entry for 'closed mouth', so the official tag is 'mouth closed'. I must have picked that tag up from somewhere else, but noticed it closes the eyes too. But in fairness, I didn't actually say I had used the danbooru tag in that instance in my original post. Sorry for the mix up!

    I didn't realise you'd only just released this guide. I've been playing around with this model for weeks now and came back to see what others had been using for prompts to iron out these issues I'd mentioned, and by chance was in the right place at the right time. Like I said, I'm new to this PonyXL stuff after learning all manner of other stuff over the past months, so I don't know if there's other info out there regarding 'source_cartoon' etc. in Pony base model material, but that would be useful in the guide, maybe?

    And also the 'score_9, score_8_up, score_7_up, score_6_up' stuff, as I see different users using different amounts in both prompt fields. I'm guessing the training images had been given quality ratings and the user is keeping higher ratings, and eliminating lower ranked ones in the negative prompt? (This info may be out there somewhere and/or common knowledge to long time users. I just haven't got round to finding it all out yet. So don't write a massive guide just on my part.)

    ZyloO
    Author
    Aug 11, 2024· 1 reaction

    @Encarta The Pony creator has a guide about the score tags

    EncartaAug 11, 2024· 1 reaction

    @ZyloO Thanks for the link to that guide, and your help answering my questions.

    H43yAug 11, 2024
    CivitAI

    Cool model, definitely! But maybe somebody has a hint for me on how I can get age prompts to work.

    Usually I can use "55 years old" in my prompts, but here it doesn't work, even with adapted styles.

    I tried:
    80 years old

    80yo

    80 yo

    80_years_old

    Not sure, but is there something I can use?
    The classic "word style" age prompts (like elderly, mature, ...) are not precise enough.

    CBF2000Aug 12, 2024· 3 reactions

    Hi, the age references generally do not work very well, but there are a few LoRAs out there that will do the trick. Just search for "Age Slider" and pick one that works with the Pony base model. You can then adjust the age by a factor that you apply to the LoRA. With that you can very easily convert any character into a 30-year, 40-year, or 80-year-old person. There are quite a few age sliders out there, some even specifically designed for old people.

    cyopAug 13, 2024· 1 reaction

    Try something like:
    mature, grayish hair, GILF, (55 year old), (age spots:0.3), (skin discoloration:0.2), wrinkles, body wrinkles, arm wrinkles

    CinderwolfAug 12, 2024
    CivitAI

    Is the VAE baked in to the model itself or is it a separate file? Just second-guessing myself because a large number of images I'm producing are non-sensical while a handful work.

    ZyloO
    Author
    Aug 12, 2024· 4 reactions

    Yes, the VAE is baked in

    tono4Aug 20, 2024

    Then, in A1111, must I select No VAE? I am getting very grainy pictures...

    CinderwolfAug 21, 2024

    @tono4 Wondering if it's resolution that's playing a part in that? The VAE option I think is blank or No for me too and tends to work. AI generation often breaks when the resolution is an odd one.

    ZyloO
    Author
    Aug 21, 2024

    @tono4 You should leave the VAE on automatic.

    Most people that get grainy images use the wrong sampler, so check the model description for that.

    LazmanSep 3, 2024

    @ZyloO On automatic1111? or a setting that is literally 'automatic'? I use ED, so I'm not sure if other settings exist in that. I tried to install a1111, but it refuses to admit that I have pytorch installed, even after several tries and making sure I was using the version it was asking for.

    tedbivAug 13, 2024
    CivitAI

    this model produces some really beautiful images. thanks for your efforts.

    fuinypainAug 14, 2024· 2 reactions
    CivitAI

What is the hires fix parameter, please?

    GlikodinRealAug 16, 2024
    CivitAI

I don't understand: if the prompt "(white long sweater)" is used, why do Pony models generate a red short sweater? Please, someone explain it to me https://civitai.com/posts/5492851

    ZyloO
    Author
    Aug 16, 2024· 1 reaction

That's because her LoRA is trained on that; try lowering the LoRA strength, and please don't use that sampler.

    GlikodinRealAug 16, 2024

@ZyloO Why? That sampler is recommended. I'm just trying to understand and avoid repeating mistakes.

    ZyloO
    Author
    Aug 16, 2024· 1 reaction

@glikodin1987965 DPM++ 2M Karras is not recommended; look at the model description for the recommended samplers. If you only use the site generator, you need to use Euler A or DPM2 a.

    GlikodinRealAug 16, 2024

    @ZyloO I use comfyui, very grateful for the advice

    DarkAgentAug 16, 2024· 4 reactions
    CivitAI

    Can we get a 3.1 update? ^_^ <3

    dj9wqurfAug 17, 2024· 1 reaction
    CivitAI

    https://flowgpt.com/p/ai-2295 A recently found site for writing prompts with gpt bots

    MestreXotaAug 18, 2024· 2 reactions
    CivitAI

    Good in everything except eyes.

    eduardomoellecke800Aug 19, 2024
    CivitAI

    how do i get the vae?

    RomloAug 24, 2024

just download the main21vae version; the VAE is included in it

    Kaiio14Aug 20, 2024
    CivitAI

    So the more anime I try to make the hairstyle look, the more anime the entire image becomes. And the prompt from the guide that makes things more realistic doesn't fully return the image to realism. It's also a known issue that the realistic prompt makes the faces look samey, so have there been any known solutions? How do you translate a specific anime character into a realistic image without them looking Japanese or like a plastic doll? I tried adding nationalities, but with such a long prompt to get the other things the character has, the face is hard to change.

    ZyloO
    Author
    Aug 20, 2024· 1 reaction

Probably the best way is to train a realistic LoRA of the character you want, so you don't need to prompt for it

    Kaiio14Aug 20, 2024

@ZyloO Sorry, I'm not at all experienced; wouldn't training a LoRA involve having already generated many realistic images of that character? I'm struggling to make one. I'd use cosplay, but the overwhelming majority of cosplayers are Asian. So far I've found that adding freckles tends to make the character look more European.

    tono4Aug 20, 2024
    CivitAI

Why am I getting such grainy images? I have set the VAE to "None" and I follow the rules... Can somebody help?

    ZyloO
    Author
    Aug 20, 2024

    Check the model description to see which samplers to use

    pixel8erAug 20, 2024

    Have you tried recreating an image using someone else's prompt and settings?
    The biggest impact seems to be in sampling method; it seems like the majority of the successful ones here use Euler a, even though the recommendations suggest DPM samplers.

    Nep_plootAug 21, 2024· 2 reactions

    @oddlittlepixel From what I understand, Euler a is recommended for on site generation, if you generate on your computer, then it's better to use DPM.

    tono4Aug 21, 2024

It was the VAE in the general A1111 settings. I also set it to None and it works perfectly!

    Thank you for your answers

    MagicalEroticaAug 24, 2024

    @tono4 Good ol' Vae grains.

    ElectrovertedAug 21, 2024· 1 reaction
    CivitAI

    Once you read the instructions and realize you need to put those score prompts in, this checkpoint is amazing!

    HazelPurpleAug 21, 2024
    CivitAI

this checkpoint is amazing for generating in a realism style.

    ParadiseGoddessAug 21, 2024
    CivitAI

Is it possible to use it with ControlNet?

    RomloAug 23, 2024· 1 reaction

    yes it is

    MiragenerateAug 23, 2024· 3 reactions

Due to its nature, Pony-based models need certain ControlNet models - heavier ones. Most don't work properly. I suggest controlnet-union-sdxl; it's essentially an all-in-one model. I tested it extensively and it works well enough.

    ParadiseGoddessAug 23, 2024

    @Miragenerate thanks, I'll try it

    engineerAug 27, 2024

    @ParadiseGoddess Has it worked?

    leonixxdAug 23, 2024
    CivitAI

    Hello everyone! I'm trying to install this checkpoint in Jupyter Lab and nothing works. It doesn't open it in stable diffusion. Has anyone had this problem?

    yuinyan490363Aug 25, 2024· 2 reactions
    CivitAI

Why is it that the higher the weight given to 'long legs', the shorter the socks become, for example knee-highs turning into ankle socks?

    daijobuAug 29, 2024

    I guess it's because the AI thinks of "legs" as bare legs, not legs covered by socks.

    5194609Aug 26, 2024
    CivitAI

    I have had a lot of trouble with this prompt: from outside, window. It simply doesn't understand it :( But I managed to make it work with a Lora specifically made for this scenario.

    TheCALSep 1, 2024

I have trouble with "from outside" balcony views; it's very hard

    MintBerryCrunchAug 26, 2024
    CivitAI

    hands down my favorite model

    601709Aug 28, 2024· 2 reactions
    CivitAI

tbh, the only model that holds up to the name "realism". With the right prompt, settings, and some latent tweaking, this model can output incredible imagery. Plus, it has kept a shit TON of the guts from the original PonyXL v6 model. So good job on not ruining the base, like many do. Srsly, good model.

    Larry_LePoopAug 28, 2024
    CivitAI

    I'm a little confused; I've read that Pony models can be based on SD 1.5 or XL. Since this doesn't have XL in the name, I downloaded it. Previous XL models I've tried haven't worked on my computer (not enough VRAM). This model works fine on my computer, but the only Loras it can run are XL loras. So is this model XL, or 1.5? And which Loras will/won't work with it?

    ZyloO
    Author
    Aug 28, 2024

    This is neither a 1.5 nor an SDXL model; it's Pony, which is its own base model, although it is derived from SDXL. SDXL's LoRAs work with Pony to some extent, but Pony is considered a base model

    Larry_LePoopAug 28, 2024· 1 reaction

    So any Pony Lora, and SOME XL Loras will work?

    Jingshenwuyan222Aug 28, 2024· 1 reaction

@Deerobouros Pony-based LoRAs, yes. XL-based, it depends: concepts should be OK, characters may not work as expected. Embeddings may not work at all.

    TheCALSep 1, 2024

@ZyloO So when extension parameters ask you to pick a size based on the model (512, 768, 1024), which size does this model go by?

    Jingshenwuyan222Aug 28, 2024
    CivitAI

I have tried more than 20 Pony checkpoints; this is the best model for NSFW content so far! It adheres to most of the Danbooru tags well. I am not sure why, but it has limited knowledge of environments such as buildings, cities, etc.; I had a hard time generating acceptable backgrounds.

    SAC020Aug 30, 2024

    agree with both observations, great model, backgrounds are a challenge

    OmlerAug 30, 2024
    CivitAI

I have a question. Why do many negative prompts use scores below 4? After all, the creator of the original model said that, because of training costs, data rated below 4 was not used.

    oddKoodooOct 16, 2024· 1 reaction

I think people do it out of inertia; nobody read in detail what the creator wrote, everyone just copies some prompt that started it all))) and then it went so far that even some model creators follow the idea

    MyszpiczAug 31, 2024
    CivitAI

This model works pretty well in A1111, but in Forge it renders distorted faces. Changing the prompt, sampler, scheduler, resolution, CFG, upscaler, and various combinations of the above does nothing. Any idea what the reason is?

    FullMetal1111Sep 1, 2024
    CivitAI

The composition is really good, but face diversity is quite low compared to SDXL or SD. I mix in a face detailer and an SDXL model, but the texture becomes inconsistent. Any suggestions to improve face variety?

    habibingSep 1, 2024
    CivitAI

This model gets absolutely messy when using anything other than SDE samplers. On the other hand, you can make a very strong LoRA from this model, and it's compatible with almost all Pony-series models.

    zdwork662Sep 2, 2024· 1 reaction
    CivitAI

Great model; the generated characters and costume styles are all very amazing. But does anyone have the same problem as me? Generated characters often have semen on their face. It would be nice if there was an SFW version.

    ZyloO
    Author
    Sep 2, 2024

To get SFW results, use rating_safe in the positive prompt or rating_explicit in the negative, or both

    zdwork662Sep 5, 2024· 1 reaction

    @ZyloO It worked. Thank you very much!

    zdwork662Sep 5, 2024

    @ZyloO May I also ask you, what prompt words can make the picture get rid of the CG picture texture and get closer to the feeling of real photography?

    ZyloO
    Author
    Sep 5, 2024

@zdwork662 Take a look at my article for some tips; it may help you achieve what you are looking for

    supertypSep 2, 2024· 1 reaction
    CivitAI

I have some issues with this one: if I am using DPM++ 2M (any scheduler), the backgrounds are the most detailed, but the whole image becomes oversaturated and blurry, with strange artifacts that look like the colors are washed together. This doesn't happen with Euler, but then the backgrounds are less detailed.

I tried different VAEs, including none, as well as different CFG and step values.

    Any ideas?

    Edit: I saw in the gallery quite a few have similar issues with that sampler and this model (Main version).

    ZyloO
    Author
    Sep 2, 2024

    Read the model description.
For on-site generation, you need to use DPM2 a for more detailed outputs, or Euler A

    supertypSep 2, 2024

    @ZyloO But the description says

    Recommended Parameters for Local Generation

    Samplers:

    Recommended:

    DPM++ SDE

    DPM++ 2S a Karras

    DPM++ SDE Karras

    DPM++ 2M SDE Karras

    DPM++ 2M SDE Exponential

    DPM2 a

    DPM++ 2S a

    DPM++ 3M *

    ZyloO
    Author
    Sep 2, 2024

    @supertyp Yes, thats for local generation.

    DPM++ 2M and DPM++ 2M SDE are not the same

    supertypSep 2, 2024

@ZyloO But it happens with all the DPM samplers; I tested all of them after I wrote this

    ZyloO
    Author
    Sep 2, 2024

    @supertyp Most people who experience distortion or artifacts are using the wrong sampler. If you’re still encountering these issues even with the recommended samplers, there may be something else wrong. Check if you have a VAE selected or if your steps/CFG are set too low

    supertypSep 4, 2024

@ZyloO Yeah, I checked the VAE, tried with none and with the other VAEs. I don't have this problem on realistic models such as Everclear or Pianomix, but those don't give better quality than Pony Realism with Euler a. The thing is that with DPM++, Pony Realism produces really detailed backgrounds, with plenty of leaves on bushes and trees and so on, where other models produce quite simple backgrounds. However, the whole image ends up with these artifacts. I saw some examples in the gallery here that have the same problem, if you look for images generated with DPM.

    Some SFW examples https://civitai.com/images/24179787 https://civitai.com/images/27683546 https://civitai.com/images/27676639 https://civitai.com/images/27539199
I find these images much better in detail, but it seems the model has an affinity for creating particles, oversaturated/high-contrast colors, and an oil-paint texture.

    ZyloO
    Author
    Sep 4, 2024

    @supertyp All those image examples are using the wrong sampler

    lnomsimSep 5, 2024

@ZyloO I have the same issues on local generation, with the default VAE and the samplers you have listed.

From time to time I manage to get a really good result (it's extremely rare), but most of the time the backgrounds are overdetailed, which results in a blurry mess, and the characters have weird artifacts on their skin.

I also use the recommended resolution (1280x768), the recommended steps (30 max), etc.

Edit: and of course, after writing this, the generation managed to create only good images without artifacts. I don't know what is causing this.

    supertypSep 5, 2024

@lnomsim It seems the model just doesn't work well with DPM++ 2M and similar samplers, which is a pity, because they generally produce better detail than the Eulers. It's not the only realism Pony model with this issue, tho. My workaround now is to run first with DPM++ 2M and then HiresFix with Euler a. It preserves some detail; I still need to experiment with the denoise.

    jjroussSep 3, 2024· 1 reaction
    CivitAI

    Neat

    2397576Sep 4, 2024· 2 reactions
    CivitAI

    Stable Diffusion crashes when trying to load this model.

    16 Gb Ram

    1070Ti 8gb

    dragonalumniSep 11, 2024

    You probably have to stick to 1.5 with that rig.

    XPeiKDRSep 15, 2024

    Increase virtual memory to 64GB or more

    burnera679889Sep 20, 2024

You'll need to enable low or med VRAM in the command line arguments if you're using Auto1111; it will run, but it will take about a minute or two per generation.
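For reference, in stock AUTOMATIC1111 those arguments go into the launcher script; `--medvram` and `--lowvram` are the actual switches, and the snippet below is just one plausible setup for an 8 GB card, not a recommendation from the model author:

```shell
# In webui-user.sh (Linux/macOS). On Windows, webui-user.bat uses:
#   set COMMANDLINE_ARGS=--medvram
# Try --medvram first; fall back to --lowvram if you still hit out-of-memory errors.
export COMMANDLINE_ARGS="--medvram"
```

The trade-off is speed: both flags offload parts of the model from VRAM, which is why generations slow down to a minute or two.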

    nexyboyeSep 4, 2024· 2 reactions
    CivitAI

Using clip skip 2 made the outputs significantly worse when calling the forward method of StableDiffusionXLPipeline.

    BelethianSep 4, 2024· 1 reaction
    CivitAI

I try to generate an image using only the model, at a resolution of 1080x1280, with the DPM2 sampling method, Karras schedule type, and 40 steps, and it takes approximately 2 minutes with excellent results. However, if I try to use a LoRA, the time goes up to 16 to 20 minutes. Can someone help me?

    I would like to be able to use a Lora from Ellen Joe from Zenless Zone Zero.

    This one to be more exact "https://civitai.com/models/386683/ellen-joe-zenless-zone-zero". Is it a compatibility issue? I tried to merge the model with Lora, but it didn't work.

    raphielleSep 9, 2024· 5 reactions

    brother in christ you dont need 40 steps

    oddKoodooSep 12, 2024· 1 reaction

DPM2 is very slow. It can give good results in fewer than 10 steps. The author recommends using SDE samplers; with "dpm++ 3m sde exponential" at 55 steps a picture renders in 30 seconds, and LoRAs don't influence that time.

    BelethianOct 15, 2024

@raphielle I started using ComfyUI and the time dropped a lot; even with 40 steps I get the same result at 1920x1920 resolution in around 5 to 6 minutes, with a workflow where the image is generated at 1280x1280 and enlarged with img2img to 1.5x.

I managed to solve the speed problem hahaha :D But thanks for the help :D

    BelethianOct 15, 2024

@oddKoodoo I started using ComfyUI and the time dropped a lot; even with 40 steps I get the same result at 1920x1920 resolution in around 5 to 6 minutes, with a workflow where the image is generated at 1280x1280 and enlarged with img2img to 1.5x.

I managed to solve the speed problem hahaha :D But thanks for the help :D

    BelethianOct 15, 2024

@TheCAL With Automatic1111 I was having speed problems; today I'm using ComfyUI with a workflow that generates the image at 1280x1280 with 40 steps, CFG 7, and DPM Karras, then enlarges the image 1.5x to 1920x1920 with the same configuration but with 0.5 denoise, and the whole process usually takes 2 minutes.

My problem now is placing more than one character per image. Right now I'm trying to place Anna and Elsa from Frozen with different clothes, but without success... at least not with the clothes I want; it mixes everything up... I haven't been able to get BREAK to work yet, although my prompt is quite complicated. If you can help me, I can share my Discord.

    oddKoodooOct 16, 2024

@Belethian There's an extension for Automatic1111 called "tiled diffusion", very popular. For ComfyUI there should also be a version of it. It helps with regional positioning of elements.

    oddKoodooOct 16, 2024

@TheCAL Experiment with everything. For Automatic1111 there's a "script" tool that lets you draw comparative grids of images generated on the same seed with different parameters, like sampler or scheduler. It's better to choose a different sampler for each model and subject.

    BelethianOct 16, 2024

    @oddKoodoo Thanks!!! Now I just need to learn how to use it hahaha I'll look for a tutorial ^^

    AverageDDanielOct 17, 2024

@Belethian Hey, what are your PC specs, and how did you fix your speed issues? My generation time swings a lot; sometimes I'm done in 20 seconds, sometimes it takes minutes to generate at 512x512 using Euler a, but the results are usually nice.
Ty in advance

    BelethianOct 17, 2024

@jaggy700 I currently have an i7 9700KF, 32GB RAM, and a 12GB RTX 3060. What I did to solve the speed problem was simply swap A1111 for ComfyUI. Right now I have a queue with 3 images (not simultaneous), but with ComfyUI I can set up a workflow with 3 different prompts: two of them with 1080x1280 images and one with 1280x1280, all 3 enlarged by 1.5x, so the final result is two images of 1616x1920 and one of 1920x1920.

But in the process, the image is created, then enlarged by 1.5x, and as soon as the result comes out it goes through an img2img pass with the same prompt but a denoise of 0.5 to correct some errors and add more detail. A refinement. The entire process usually takes approximately 8 minutes. Ah, all stages of the image use 40 steps, both the initial creation and the pass after enlargement.
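The 1.5x upscale arithmetic above only works cleanly if the target dimensions stay on the 8-pixel grid that SD-family latents require; a small sketch of that rounding, with `upscale_dims` being a hypothetical helper, not a ComfyUI node:

```python
def upscale_dims(width, height, factor=1.5, multiple=8):
    """Scale (width, height) by `factor`, snapping each down to a multiple of 8."""
    snap = lambda v: int(v * factor) // multiple * multiple
    return snap(width), snap(height)

print(upscale_dims(1280, 1280))  # → (1920, 1920)
print(upscale_dims(1080, 1280))  # → (1616, 1920)
```

This matches the 1616x1920 and 1920x1920 outputs described in the comment: 1080 x 1.5 = 1620, which snaps down to 1616 on the 8-pixel grid.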

    oddKoodooOct 17, 2024

@Belethian Well, that's a nice workflow, but I really don't believe that ComfyUI gives any benefit in speed. I think if you really compared A1111 and ComfyUI at the same settings they would be equal.

    BelethianOct 18, 2024

@oddKoodoo On A1111 I was using it without LoRAs; any LoRA I added would take the process from 2 minutes to 18. And on A1111 I couldn't use hires fix, for the same reason as the LoRAs: the process took much longer. I had to generate the image at a lower resolution, send it to img2img, and then enlarge it, but not by much.

The whole process, in the end, took about 30 minutes per image.

I don't know what changed, but the settings are the same: DPM2 with Karras, CFG 7, 40 steps, 1280x1280 or 1080x1280. The difference is that I can make 3 images with a single click ^^ each one with a different theme and prompt.

I don't understand programming; on this subject I'm a real noobie. I don't know why ComfyUI is faster for me while I can still use the PC without many problems: I watch videos, I can even stream my screen on Discord without issue...

    oddKoodooOct 18, 2024· 1 reaction

@Belethian Wow, the timing you've described for A1111 is really insane. For me it happens much faster, and I find A1111 just more comfortable and detailed. Maybe you were using some wrong settings somehow, or a very early and raw version of A1111... However, if you're satisfied with your current ComfyUI setup, all is good <3

    AverageDDanielOct 18, 2024

@Belethian That's the thing I don't get as well. While using A1111 I get very big swings in time. Sometimes it takes me 5 seconds to generate a 512x512 image, and the next time it takes me 5 minutes without changing anything.

Yesterday I played around a bit with token merging and my times are more consistent, around 20 seconds for a 1024x768 image.

But upscaling still takes very long.

Do you have any tips for speeding up upscaling?

I'm pretty noobie as well when it comes to programming.

    oddKoodooOct 18, 2024

@jaggy700 I think A1111 handles VRAM allocation strangely; it helps to keep fewer applications launched.

    AverageDDanielOct 18, 2024

@oddKoodoo It just doesn't make sense to me. My task manager shows nothing is at max capacity and it still takes very long sometimes.
Are you using A1111?
Which upscaler do you use?

    oddKoodooOct 18, 2024

@jaggy700 The upscaler model by itself isn't important; what matters is the method A1111 uses to upscale, and that's standard. There's a common problem with memory allocation: say an app needs another 250MB of memory to operate, and it reserves about 4GB for itself, just in case, you know, why not? At that moment it can crash with an "out of memory" error or just work MUCH slower. Relaunching the app helps; also, just don't run several memory-hungry apps simultaneously.

    oddKoodooOct 18, 2024

@jaggy700 And try alternative upscale methods: the "tiled diffusion" extension, ControlNet tile, and others like them.

    AverageDDanielOct 18, 2024

    @oddKoodoo will try ty for the tips :)

    MintBerryCrunchSep 5, 2024· 3 reactions
    CivitAI

    my favorite

    wormlordSep 8, 2024· 2 reactions
    CivitAI

    Good job, keep it up

    Sara_Sep 10, 2024· 5 reactions
    CivitAI

Good model overall. I would still recommend Euler A for local generation too; the other samplers seem to go a little too crazy on the backgrounds and add clutter, at least in my experience.

    HieheiheiSep 10, 2024· 1 reaction
    CivitAI

How do you prompt a proper thigh gap? I want a big distance between the thighs; it's so damn sexy, you know. There's a tendency for the thighs to be together all the time. (Thigh gap:1.3) doesn't really do it.

    recycleaway7807Sep 11, 2024· 2 reactions
    CivitAI

    I love this model but I'm hoping there's a good option available to speed it up without sacrificing quality. I've tried the lightning options but there was a pretty serious quality drop. Also tried LCM Lora but was underwhelmed. Any secrets to reducing step count?

    TooLittleVRAMSep 17, 2024

    What UI do you use to generate images? A1111 is super slow (>10 minutes?) on my 8GB 3070 laptop, while Forge or ComfyUI can generate an image in 20-90 seconds depending on the sampler you use and LoRA usage.

    fronyaxSep 17, 2024

Forget LCM.

Use the Hyper-SD LoRA: start with 8 steps for SDXL at 0.5 weight and CFG 1. It worked flawlessly on Pony.

https://huggingface.co/ByteDance/Hyper-SD/tree/main

Ditch A1111; that thing is filled with OOM errors, lol.

Use SD WebUI Forge or SwarmUI.

    kuaubsegrf8Sep 17, 2024· 1 reaction
    CivitAI

Any recommendations for avoiding a shallow depth of focus so I don't have blurry backgrounds? I've tried "sharp focus", "deep depth of focus", putting "shallow depth of focus" in negatives, etc.

    nvSep 17, 2024

    try putting "plain background" in negatives

    skyeveSep 19, 2024

I know that you said you don't want blurry backgrounds, but have you tried including "(bokeh:0.4)" in your prompt? You might be able to scale that up or down as needed depending on how much background blur you want. As I understand it, shallow depth is pretty much the opposite of bokeh. Unless you're talking about the concept where parallel lines converge, like looking down a long hallway.

    noosprog80Sep 18, 2024· 4 reactions
    CivitAI

Is there, or is it possible to make, an ONNX-quantized 8-bit version of this fantastic model?

    Libra_AiSep 19, 2024· 3 reactions
    CivitAI

Hiya! Absolutely love the details I get from this! I am just wondering why every prompt I put in (no matter the race or body type) always gives me the same face. Is this common? Or am I doing something wrong?

    djwashoutSep 25, 2024

The face issue is common. I use (beautiful Asian female:1.8), but I have to increase the weight as I add other details.

    Heck3Sep 20, 2024· 5 reactions
    CivitAI

I always have a problem where the eyes are completely blurred and asymmetrical. Any fix?

    wodz30Sep 24, 2024

    Check my comment. This is due to using the wrong sampler. I had the same issue. Now the faces/eyes are literally perfect.

    RobratSep 22, 2024· 2 reactions
    CivitAI

So I tried the inpaint version, "v2.1 inpainting1.1", and it's bugged as all hell for me. As far as I can tell it acts as if I were using the base non-inpaint model: it basically tries to "draw" a new picture inside the inpainted area based on whatever prompts you had, instead of changing the inpaint area as it's supposed to.

    It also takes about 3 times longer to complete an image compared to other checkpoints.

    So yeah, that's my experience.

    molotomorrowOct 1, 2024

    Same issue happening to me

    wodz30Sep 24, 2024· 13 reactions
    CivitAI

    READ. THE. INSTRUCTIONS!!!!!

Like most of you, I started using this checkpoint and got crazy results. This is because you need to dial in the settings for this specific checkpoint! Once you actually do that, the results are fantastic! That would clear up most of the issues I see here in the comments. My best settings were:

    cfg 6.5, dpm++2m SDE+Expo, base 20, refiner 5@1024x768

    Pairing this with "Detail Tweaker XL" at strength 3 provides great results! Aside from that it works best with pony loras.

    Dial in your config and all of your "noise" issues vanish

    HakureiRyougiSep 24, 2024

    Thank you. I was getting horrible results till I read the description and your comment.

    ColeroSep 27, 2024· 1 reaction

By refiner, do you mean upscaler? Where would I find refiner 5? Thank you

    wodz30Sep 27, 2024

@Colero Apologies, that is for local renders. This is the most-used refiner for SDXL/Pony - https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0 - when the generation process gets a certain amount of the way through, it switches to a different checkpoint to refine details and slightly alter the output. This can be set up even if you are using a cloud Stable Diffusion Forge or ComfyUI instance :)

    QHvI7vwtWDSep 27, 2024

    @wodz30 Hello, if you use ComfyUI, could you please share your workflow for reference?

    wodz30Sep 28, 2024

    @QHvI7vwtWD sure thing! I have upload the json here for anyone to use - https://file.io/VgbeIc2qAyUn

    wodz30Sep 28, 2024

    @QHvI7vwtWD Here is my img2img workflow as well, I keep these separate - https://file.io/xA2874jH9Byx

    QHvI7vwtWDSep 28, 2024

    @wodz30 Thanks, it looks like the KSampler node you're using doesn't load for me, and it doesn't show up as missing in the Manager. Apparently most people don't use the refiner anymore? I'm kinda curious but not sure how to set it up

    wodz30Sep 28, 2024

@QHvI7vwtWD I believe it might be ComfyUI_Fooocus_KSampler. The refiner seems to make a bigger difference for Pony rendering compared to latent upscaling.

    Ks4sSep 29, 2024

    @wodz30 could you reup pls

    NooobleOct 1, 2024

I was going mad about it; I tried to replicate the settings from the description and still wasn't getting the results I was looking for. Even passing the images through my SD 1.5 detailing and upscaling made such a difference in making them pop! Will give your flow a shot later. I see the flows are deleted by now 😢

    wodz30Oct 2, 2024· 2 reactions

    text2img workflow @Kadse https://easyupload.io/db0h9i link is good for 30 days :)

    djwashoutSep 25, 2024· 1 reaction
    CivitAI

    Any good realistic embedding for this model to eliminate (extra navel)

    TheCALSep 25, 2024

That happens when your denoise is too high. But an extra belly button is actually really easy to fix in post-processing. GIMP/PS.

    alternative_UniverseSep 26, 2024· 5 reactions
    CivitAI

    Just imagine the huge impact of a version 2.2, that would be crazy

    ZyloO
    Author
    Oct 1, 2024· 1 reaction

    👀

    @ZyloO a new version of pony realism would hit hard🤪

    Cheemz99Oct 2, 2024

    Yo, the new model is definitely a step up overall, no doubt. But honestly, some of the little things that got changed kinda threw me off. There were a few details in the old version that I was vibing with more, and now it just feels a bit off in spots. Don’t get me wrong, it’s solid, but I miss those touches from the last version.

    SavgsCurvOct 2, 2024

    @Cheemz99 which new model?

    Cheemz99Oct 2, 2024

    @SavgsCurv I’m not talking about ponyrealism, I've messed with other models before.

    windinchesterOct 2, 2024
    CivitAI

    good

    Checkpoint
    Pony

    Details

    Downloads
    13,141
    Platform
    CivitAI
    Platform Status
    Available
    Created
    6/30/2024
    Updated
    5/15/2026
    Deleted
    -

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.