Pro Version of Realism Pony V4-V5-V6 now Available on My Patreon
Onsite generations are permanently available on these models:
Realism_By_Stable_Yogi V3: https://civarchive.com/models/166609?modelVersionId=992946
To keep this model available for image generation on Civitai, please place your bids here and I'll make it available for onsite generation:
https://civarchive.com/auctions/featured-checkpoints
Join me on Patreon for exclusive perks and early access to unique resources.
To discuss custom LoRAs or models, feel free to connect on Discord.
Like this model to keep me motivated and inspired to create more!
Drop a comment and let me know what you'd love to see next.
Review this model to help me improve and make even better creations.
Hit that notification bell to stay updated with my latest models and updates!
Important Usage Tips
Add Stable_Yogis_PDXL_Positives at the beginning of your prompt section.
Add Stable_Yogis_PDXL_Negatives-neg at the beginning of your negative prompt section.
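As a sketch of how these tips might be wired into a generation script (the helper name is my own invention; the embedding trigger names come from this page):

```python
def build_prompts(prompt: str, negative: str = "") -> tuple[str, str]:
    """Prepend the recommended embedding triggers to both prompt sections."""
    positive = f"Stable_Yogis_PDXL_Positives, {prompt}" if prompt else "Stable_Yogis_PDXL_Positives"
    neg = f"Stable_Yogis_PDXL_Negatives-neg, {negative}" if negative else "Stable_Yogis_PDXL_Negatives-neg"
    return positive, neg

pos, neg = build_prompts("photo of a woman, golden hour lighting", "cartoon, 3d render")
print(pos)  # Stable_Yogis_PDXL_Positives, photo of a woman, golden hour lighting
```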
Description
REALISM_PONY_V6_VAE_BY_STABLE_YOGI
(FP16) Default Model can be used on all graphic cards.
Built on V3-V5.
V6 pushes realism, skin micro-texture, and dynamic lighting while staying fast and consistent.
Quickstart (Defaults)
Sampler: DPM++ SDE Karras (default)
Steps: 27 (min 20)
CFG: 4 (works great at 3-7)
Resolution: All SDXL
ADetailer: Recommended
High-Res Fix: Optional (for large outputs)
Denoise: 0.30
Hi-res steps: ≥ 5
Upscale: ≥ 1.5× with 4x-UltraSharp
VAE: Auto (included)
Compatible Samplers: DPM++ 2M SDE, Euler a, DPM2 a (use your preferred sampler from previous versions).
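For reference, the defaults above can be collected into a settings sketch with simple range checks (key names are my own, not tied to any particular UI):

```python
REALISM_PONY_V6_DEFAULTS = {
    "sampler": "DPM++ SDE Karras",
    "steps": 27,           # minimum recommended: 20
    "cfg": 4.0,            # sweet spot: 3-7
    "hires_denoise": 0.30,
    "hires_steps": 5,      # >= 5
    "upscale": 1.5,        # >= 1.5x with 4x-UltraSharp
}

def check_settings(s: dict) -> list[str]:
    """Return warnings when settings fall outside the card's recommendations."""
    warnings = []
    if s["steps"] < 20:
        warnings.append("steps below the recommended minimum of 20")
    if not 3 <= s["cfg"] <= 7:
        warnings.append("CFG outside the 3-7 range")
    if s["hires_steps"] < 5:
        warnings.append("hi-res steps below 5")
    if s["upscale"] < 1.5:
        warnings.append("upscale factor below the recommended 1.5x")
    return warnings

print(check_settings(REALISM_PONY_V6_DEFAULTS))  # []
```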
Negative Prompts
(Download this negative embedding for best results) Stable_Yogis_PDXL_Negatives
Positive Prompts
(Download this positive embedding for best results) Stable_Yogis_PDXL_Positives
What is FP16?
Standard build, runs on almost all modern GPUs.
Balanced for compatibility, stability, and detail.
Recommended for most users.
FAQ
Comments (43)
New toy!
Figured after Babes v6, Realism v6 may/should be coming. Also nice job with the extra info.
I can NOT get this model to do anal. Is it just not trained on it? I occasionally get a pic that looks like it is attempting it, but it's never great. Other than that, the model is awesome.
Seems to work fine for me.
Without seeing any examples of what you're trying to do, I can only guess, but maybe you're using LoRAs and/or prompts that conflict with your goal of producing an anal sex scene. You may also be using the wrong prompt; for instance, you could be describing it as "anal cowgirl sex" when you should have used "anal cowboy sex" instead.
it's still Pony Pony Pony.. fuck Pony
Then share a better checkpoint with us if this one is no good.
I might agree to a degree. I love it, but it still has a few "pony quirks" that ultimately leave me wanting something else. What that is, IDK. Base XL is too much work, lol. NoobAI MIGHT be onto something, but eh. Even then there are still too many holes in what all the "at home" stuff is capable of. This one makes sense at least.
@jumpforjoy I doubt it's about "not being good" as much as "Pony has its perks but not much else"
@D3monCepsĀ what do you mean with "still too many holes in what all the "at home" stuff" which home model do you refer to?
@pink0909 Many! Some basic actions and poses and such are lost concepts to most models, from SD1.x to top Pony to many others here. Most seem to be either "portrait studio 1 million", "AWA generator 10 quadrillion bajillion", or some odd mix of the two. We shouldn't be forced into ControlNets for a MAYBE just to attempt pretty normal and mundane human poses and actions, yet this "far" along, that's exactly how it still is. For example arm wrestling, tug-of-war, and don't you dare try driving. The mega corps and all their censoring don't even flinch when you ask for some action. This stuff... lol.
@D3monCeps Megacorps are lame, that's why local generation is king. I was surprised that on Hugging Face a huge community is uncensoring LLMs (the ones megacorps build with annoying supernanny "guidelines"); the best ones are called abliterated. Real good. Only downside: you need a GPU with at least 10-12GB VRAM (8GB and 4GB can work, but with significant drawbacks in model size and speed).
@pink0909 The nanny stuff is awful. Sadly, they can still do stuff that the local stuff simply can't (or requires a blood and silicon sacrifice to every deity in existence to give you something that MIGHT be inpaint-able). You'd think with all these models here, one of these geniuses would have figured out how to replicate what the big leagues have done, minus the censor-fu. I'd like to think I just haven't run into the "right" models for my whims, but we'll see. I'll have training capabilities eventually to attempt plugging the holes (probably won't, because if it ain't NSFW for the sake of it... just lol).
@D3monCeps Well, it depends how much you ask for. I am a seasoned guy who started with 386 PCs and Win 3.11 with Paint xD. Then I basically bought every PC generation (built them myself, since prebuilt PCs are overpriced), hopping between AMD and Intel depending on who had the price/performance advantage that year. Back in the day, selecting stuff for "inpainting" was a real hassle; you basically had to become a "Photoshop" master just to make semi-real fakes (or work with Photoshop in general). These days we have Segment Anything, automatic lighting correction, depth estimation, pose estimation, logo generation, video, and more, all with a click of a button (each one a huge step forward). I remember a >25-year-old meme image (called demotivationals back then) joking that a program could automatically make nudes with the magic wand (select all) tool. I am really happy, since I get high tech (Stable Diffusion) for old photo restoration (analog + digital), which is close to magic: it goes beyond restoration and can redraw a crappy QVGA image into high-res quality (sure, you need a few clicks more than one xD, but heck, it's possible!). So maybe changing your POV might make you happier :) .
I'm using this with ComfyUI. Most checkpoints have preferred steps, CFG, sampler, scheduler, and even a preferred resolution. Does this one have something like those?
Please read the "About this model" section.
@Stable_Yogi Thanks. Never knew about that section. I'm used to the info being under the preview images.
Very good realistic model, thanks for sharing and making it free to use.
very good
wow! Amazing, good job
Wow!
very good
nice!!
Nice checkpoint! I've finally found an upgrade for Goddess of Realism, which was getting too grainy and contrasty.
Question: Any way to stop the depth of field blurring? I've tried blurry, depth of field in my negative with only moderate success.
Someone already pasted the answer to that in this comment section, so I am just going to quote:
"You need quite a bit of negative:
blurry, blur, motion blur, depth of field, shallow depth of field, bokeh, glow, soft background, out of focus, soft focus,"
@pleaserphotography236 That's literally my comment.
@Electroverted That's why it's quoted word for word in quotation marks and specified as a quote from someone else, not mine :)
Realism
I solved the depth of field blur on my own (+ ChatGPT). You need quite a bit of negative:
blurry, blur, motion blur, depth of field, shallow depth of field, bokeh, glow, soft background, out of focus, soft focus,
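To fold that list into an existing negative prompt without duplicating tokens, a small helper could look like this (a sketch; the token list is quoted from the comment above, the function name is my own):

```python
ANTI_DOF_NEGATIVES = (
    "blurry, blur, motion blur, depth of field, shallow depth of field, "
    "bokeh, glow, soft background, out of focus, soft focus"
)

def with_sharp_background(negative_prompt: str) -> str:
    """Append the anti-depth-of-field tokens, skipping any already present."""
    existing = {t.strip() for t in negative_prompt.split(",") if t.strip()}
    extras = [t.strip() for t in ANTI_DOF_NEGATIVES.split(",")
              if t.strip() not in existing]
    tokens = [negative_prompt.strip().rstrip(",")] if negative_prompt.strip() else []
    return ", ".join(tokens + extras)
```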
Now that's a real gentleman; your experiments and hard work are much appreciated.
As someone making the models, LoRAs, and such that people use, what effect do these new changes on this site have on you and on what you're doing?
A Message to Everyone About the Recent CivitAI Changes
I understand that many of you are frustrated with the recent NSFW restrictions and Buzz system changes. It's not easy to see sudden shifts in a platform we all helped grow.
As someone who creates the models, LoRAs, and tools that power this community, I want to say: I'll keep doing what I do. Creativity doesn't stop here.
Yes, things are changing, but NSFW art and expression will always find a way. If one platform limits it, others will open doors. That's how creativity evolves.
My suggestion:
Keep creating.
Adapt and find your balance.
Don't let a policy change kill your motivation.
It's not just about Buzz or rules; it's about being part of something revolutionary that's redefining art and freedom. Stay inspired.
@Stable_Yogi
Thanks for responding, but I wasn't asking about me, I was asking about you.
For me, I do basically all my AI related stuff on my own computer, so as the username I have suggests, I'm here to get models, LoRA, and such for me to use. These changes directly don't have much impact on me as a result, although if they were to really screw with what you do, and why, then people like you may stop making the models, LoRA, and such we use.
That is what I was asking. It's good to hear you at least currently intend to keep at it, but what is the impact on you? Are you doing this for fun, so whatever happens, happens, or is this a means to make money, and would the changes really affect that, by your estimate?
@AImodels4me2use I do this because I enjoy it.
I'm free to create what I want, and for now, I'm going full steam ahead.
The future's always uncertain, but for at least the next year, I'm not slowing down.
Where can we find the models and LoRAs that they censor on here now?
Like
Using V6 alt - Despite the gallery being all 1girl, this checkpoint is actually incredible at making grass and nature in general, even coral reefs. I didn't even know SDXL could do landscapes that good (especially Pony).
My fav models: V3 and V5. If you want accurate ethnic Asian faces, use a detailer with the V5 XL FP16 model. The SDXL model makes the best Asian faces.
Hello, I am new to AI and I don't understand which version I need. What are the differences between V3_VAE, FP16, BF16, V5_XL and so on?
V3, means version three, V5, V4, and so on mean the same. Yes, this means an old version, V3 is still very popular, I like V6 more, but try them yourself if you're doing this on your own computer, and use what you like. Some versions of a model, and some models will do some things, better than others, so you may end up using a collection of them, for different types of things.
VAE indicates it has a built in VAE, which is fairly common with various models. VAE stands for variational auto encoder, feel free to look up info on them, but basically you would want to be using one when generating images.
Stable Yogi started putting info on what the other parts are more clearly in the "about" section in the column to the right, under where you would find the download link, as well as other info, so you should look that over. Although:
FP16: Basic version.
BF16: For use with higher end graphics cards.
DMD2: The model was merged with a hyper, lightning, or whatever you want to call it LoRA, this lets you generate images with less steps, and therefore faster, but you lose some quality as a result.
@AImodels4me2use
I was totally confused with all the V3_VAE / FP16 / BF16 / DMD2 stuff and now I finally get it, lol. Especially the FP16/BF16 thing. You saved me, dude, seriously, thanks a ton!!!
Adding a maybe-useful addition:
V3: Version 3; a higher version number is newer, currently V6.
VAE: The image encoder and decoder. Needed for image read (i2i or i2v), where the image gets encoded to latent space, and for image write (t2i/i2i or t2v/i2v), where the latent is decoded back to an image or video.
"VAE baked" means the VAE is embedded within the model; if the baked comment is missing, you need an additional VAE (sdxl_vae). Without a VAE you will get black images, and with the wrong external VAE applied, something similar. Mind your settings.
fp16, bf16, fp32, fp8 describe floating-point precision in bits. fp8 < fp16 < fp32 regarding versatility, quality, and precision. The higher the bit count, the more compute power (GPU/VRAM) and inference time is needed. The bf16 format is a different breed, mostly used for training; it is the least "tamed" model version, with similar quality to fp32.
Rule of thumb: fp32 => 24GB VRAM, fp16 => 16GB VRAM, fp8 => 8GB VRAM (highly depending on model and skill).
DMD and Lightning are heavily compressed model versions with specialized schedulers: fewer inference steps, but less quality and variety in "single model only" use. Also used for inpainting or multistep upscaling, for increasing(!) the quality of a given image.
Q6, Q8: if you encounter those models, this means "quantized in bits". Those are "reduced" models, resulting in less load on VRAM and GPU, and they mostly come in GGUF rather than safetensors format. The smaller the quantization (Q2 < Q4 < Q6 < Q8), the lower the accuracy and the larger the error versus the original fp32 model. A Q8 model has roughly <2% error ("pretty good"), Q6 roughly <10%, and so on.
DMD, fp8, and Q models are mostly needed for smaller consumer rigs (laptop/low VRAM), PoC prototyping of new concepts, upscaling, and inpainting, when there is no need for high precision (fp16/fp32). fp8 models are coming on strong, but only on new-gen hardware, end of 2025/beginning of 2026 (due to CUDA design decisions; ngl though, f*** NGreedia, f*** OpenAI).
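To make the precision trade-off concrete, here's a rough weight-size calculator (a sketch; the SDXL parameter count is an approximation, and real VRAM use also covers activations and overhead):

```python
BITS = {"fp32": 32, "bf16": 16, "fp16": 16, "fp8": 8, "q8": 8, "q6": 6, "q4": 4}

def weight_size_gb(n_params: float, precision: str) -> float:
    """Approximate size of the weights alone, in gigabytes."""
    return n_params * BITS[precision] / 8 / 1e9

sdxl_params = 3.5e9  # rough total for SDXL UNet + text encoders
for p in ("fp32", "fp16", "fp8"):
    print(f"{p}: {weight_size_gb(sdxl_params, p):.1f} GB")
# fp32: 14.0 GB, fp16: 7.0 GB, fp8: 3.5 GB
```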
superb
This PONY V3 VAE is absolute dope!
Thanks Yogi !
❤️