- Download 1.2i12 as a digital product or with the March 2026 Membership models collection.
- Download 1.2p11 as a digital product or with the February 2026 Membership models collection.
- Download 1.2p10 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p09 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p08 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p07 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p06 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p05 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p04 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p03 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p02 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p01 as a digital product or with the May 2025 Membership models collection.
- Download 1.2p00 as a digital product or with the May 2025 Membership models collection.
- More cool models.
Visually, Babes 1.2i12 delivers radiant, high-fidelity portraits with luminous skin tones, softly sculpted features, and a gentle yet captivating presence. Colors bloom with rich, balanced saturation - jewel-like hues in hair and attire dance alongside natural, flattering lighting that creates depth without harshness. Linework is crisp yet graceful, backgrounds gain subtle storytelling detail, and overall compositions feel more cinematic and cohesive. Faces carry a refined, almost ethereal charm: expressive eyes, harmonious proportions, and an inviting warmth that draws the viewer in.
This version captures the essence of timeless elegance reimagined through modern digital artistry - glamorous, vibrant, and effortlessly stylish, yet always tasteful and sophisticated. Whether crafting fashion-forward concepts, fantasy-inspired scenes, or serene character studies, Babes 1.2i12 offers a versatile, premium aesthetic that feels both luxurious and alive.
ℹ️ Babes 1.2p11 Pony - This version introduces several refinements. A significantly expanded color range brings richer saturation, cleaner white balance, and more accurate, natural hues across a broader spectrum of shades. Shapes and linework appear sharper and more defined, enhancing clarity without sacrificing softness. Backgrounds receive greater attention, with increased depth and fine detail that elevate overall composition. The semi-realistic style is now more consistent throughout, and facial features lean subtly cuter - balanced, expressive, and more harmonized with the overall aesthetic.
How it works
Choose Product or Subscribe:
Get this model via a Ko-fi product, or
Join the appropriate membership tier:
If the model was released in the current month: Current Month Content
If the model was released in a previous month: All Months Content
You will be prompted to link your Hugging Face and Ko-fi emails.
Hugging Face access is checked by email, which might be different from your Ko-fi email. You will receive a link from Ko-fi to add the email and continue.
Request Access on Hugging Face:
After you acquire the product or subscription, you'll receive a link to a gated Hugging Face repository.
Click "Request Access".
Your access will be approved automatically within a minute.
Download & Use:
You can then download and use the model under the license provided in the repository (LICENSE.md).
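For reference, once access is approved, the file can also be fetched programmatically. Here is a minimal sketch using the huggingface_hub Python library; the repository ID and filename below are placeholders, so use the values from the link you received:

```python
# Hypothetical sketch: downloading from the gated repo once access is approved.
# repo_id and filename are placeholders; use the values from your access link.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="some-user/babes-gated-repo",  # placeholder gated repository
    filename="model.safetensors",          # placeholder checkpoint filename
    token="hf_...",                        # token of the approved HF account
)
print("Saved to:", local_path)
```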
Why I Created My Own Access Control
I have been a creator on Civitai since December 2022. My goal has always been to help make model training, design, and development a sustainable, professional activity, not only for myself, but for other creators too. Supporting people who want to create models professionally is good for users, for creators, for the entire community, including Civitai itself.
Civitai has shared some of this vision and has tried to create ways to support creators. Unfortunately, their current Creators Program has serious problems:
Civitai was deplatformed by payment processors and operates at a loss.
The current compensation model pays creators mainly for model usage in the generator.
Early Access limits paid access to only 15 days, after which models must be free.
Buzz extraction value has dropped to ~45% of its original worth, making withdrawals basically a 55% fee.
Bugs and creator feedback are ignored for months, and several team members who used to listen have left.
I believe this system is unfair, harmful, and exploitative. Creators should be able to:
Sell models without arbitrary time limits.
Pay a fair platform fee of 15% or less.
Civitai briefly tested a much better system called Clubs in early 2024, which worked well for creators, but it was canceled after backlash from a loud minority that opposed fair creator compensation. I have long hoped Civitai would fix these issues, as I’ve suggested in my articles:
But nothing has changed. Bugs remain unresolved, feedback is ignored, and I’ve lost optimism about Civitai’s willingness to improve.
That’s why I’ve decided to experiment with my own alternative access control system, a fairer way to share models with users who value my work.
ℹ️ Babes 1.2p10 Pony - This version introduces a few interesting changes, continuing my effort to refine and evolve Babes with every version. In this version, I focused on adding a few hand gestures, such as: hand on chest, hand v sign, hand open palm, hand thumb up, hand pointing up, and both hands on waist. These gestures are significantly better now, but they are definitely far from perfect.
I also tried to improve the teeth, aiming for more defined, clear details and more consistent results, and I think that effort was successful.
This version includes a few adjustments to the semi-realistic base style and default faces. My goal was to make the model feel more rounded and distinct, while staying aligned with the established look of earlier versions. I feel that the fidelity remains very high, and I’m pleased with the direction the model is heading.
As always, each release is an opportunity for me to add creative flavor to the model. I’ve continued to refine the model and hope to be able to keep making frequent updates, while Civitai is going through some worrying changes.
I really love what the model can do now, and I hope that people will enjoy this latest addition.
ℹ️ Renamed versions: 1.2-1.29 Pony versions changed to 1.2p00-1.2p09 or 1.2p0-1.2p9.
ℹ️ Babes 1.2p09 Pony – New training and a few new components, aiming to polish and improve the semi-realistic style further: adjustments to faces and skin, a little more detail, and enhanced sharpness and contrast.
ℹ️ Babes 1.2p08 Pony – Incorporated a few new components and new training, sharpened backgrounds, added more details, slicker 3D effects, expanded tonal range, and lowered close-up bias. This version can create truly incredible images—it's my pleasure and privilege to be able to create and publish it.
ℹ️ Babes 1.2p07 Pony – This version comes with enhanced semi-realistic base style, adjustment to default faces, and improvement to saturation and lighting.
ℹ️ Babes 1.2p06 Pony – This version brings significant improvements to details and sharpness, along with major enhancements to compositions and backgrounds. For a more alternative look, I recommend using "tattoos" in the prompt. I have to admit, though, that hands are more challenging in this version.
⚠️ This model has been in Civitai's generator since the generator's first day. However, after the recent attempt to support all checkpoints in the generator did not work as expected, Civitai decided to stop supporting this model in the generator. I'm disappointed by this decision—this is my first and most beloved model.
Ultimately, during the period when all models were briefly available in the generator, none of the Babes versions met the required threshold. Maybe this happened because the latest version was in Early Access, splitting users between two versions—I don't know.
I checked the data for this model for exact numbers. I also found at least four models currently in the generator that have fewer images generated with them than versions of this model (v1.2p04, v1.2p05), even though those Babes versions were measured over a shorter period of time. So unfortunately, the decision by Civitai is not based on merit, contrary to how it was presented.
Civitai promised that the new Creators Program would be fairer, but after reverting the changes to support all checkpoints, we now see that two creators hold around 20 checkpoints in the generator—meaning just two people have 10% of all available slots—including a single model with six versions. Other models have three or even four versions in the generator. This situation is unfair to other creators and not helpful to users.
ℹ️ Babes 1.2p05 Pony - This version looks amazing. It creates gorgeous faces and bodies, and overall, I like it a lot. Some of the changes include: less shiny skin, rebalanced contrast with renormalized colors, better proportions, and more realistic lighting.
ℹ️ Babes 1.2p04 Pony - Additional polishing of the base style, look, and feel: more volumetric and less flat shading, slightly more metallic textures, the ability to produce darker images, and a wider variety of colors - particularly for clothes and hair.
🍊 I want to thank all the people who are using and enjoying this model, all my other models, and supporting my work. As a token of appreciation to all the lovely people, I decided to discount Babes 1.2p03 Early Access bigly and boldly. Hope y’all are having a great 2025. Let’s Make AI Great Again!
ℹ️ Babes 1.2p03 Pony - Adjustment to base style, default faces, more vivid colors and more colorful images. I love the results, somehow I keep surprising myself all the time. 😆
ℹ️ Babes 1.2p02 Pony - Incorporating new components, including training from Galena 1.3 and Babes Kissable Lips 3.7, and additional new finetuning. The base style is a continuation of Babes 1.2p00 and 1.2p01 with a slight variation. The results look amazing in my humble opinion. 😊
ℹ️ Babes 1.2p01 Pony - Changes to the base style of Babes 1.2p00, intended to offer additional complementary options.
ℹ️ Babes 1.2p0 Pony - Finetuning based on Babes Kissable Lips 3.4.
ℹ️ Babes 3.1 includes new training, a new recipe, rebalanced styles, and enhancements to the base style.
ℹ️ Babes 3.0 was trained with 43,000 images, including many new styles and subjects.
ℹ️ Styles: Babes 1.1 - "basety style", Babes 2 - "abbe bi style", Sassy Girls - "sassy style", Midjourney - "midjourney style".
ℹ️ Cartoon styles:
othalama style, ronidu style, seviechan style, samdoesart style, thepit style, owler style, cherrmous style, arosen style, uodenim style, stanleylau style, anime style
ℹ️ Recommended tags in the negative prompt: nose stud, piercing
ℹ️ Misc trigger words: wild nature, suicidegirl, interior design, digital art
📌 Are your results not 100% identical to a specific picture?
Make sure to use hires-fix with, for example, SwinIR_4x / 4x-UltraSharp / 4x-AnimeSharp / RealESRGAN_x4plus_anime_6B (Upscaler Download); these are what I usually use for hires-fix.
SD 1.5 - Use VAE: vae-ft-mse-840000-ema-pruned for better colors. Download it into the "stable-diffusion-webui/models/VAE" folder and select it in the settings.
I use xformers - a small performance improvement that might change the results. It is not a must-have and can be hard to install. It can be enabled with the command-line argument "--xformers" when launching WebUI.
WebUI is updated constantly with changes that influence image generation. Technological progress is often prioritized over backward compatibility.
Hardware differences may influence results. I've heard that a bunch of people tested the same prompt with the same settings, and the results weren't identical.
I have seen on my own system that running as part of a batch may change the results a little.
I suspect there are hidden variables inside modules we can't change that produce slightly different results due to internal state changes.
Any change in image dimensions, steps, sampler, prompt, and many other things can cause small or huge differences in results.
📌 Do you really want to get the exact result from the image? There are a few things you can do, and you may even get better results; see the sketch after this list.
Make single-word changes to the prompt/negative prompt, test, and push it slowly in your desired direction.
If the image has too much or too little of something, try using emphasis. For example, too glossy? Use "(glossy:0.8)" or less, or remove it from the prompt, or add it to the negative. Want more? Use values 1.1-1.4, then add more descriptors in the same direction.
Use variations - use the same seed, and to the right of the seed check "Extra". Set "Variation strength" to a low value like 0.05, generate a few images, and watch how big the changes are. Increase it if you want more changes, and reduce it if you want fewer. That way you can generate a huge number of images that are very similar to the original, but some of them will be even better.
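Outside WebUI, the same idea can be approximated in code. Here is a minimal sketch, assuming a diffusers pipeline, that blends the base seed's noise with a second seed's noise at a low strength; the slerp blend is my assumption about how to emulate the effect, not WebUI's exact implementation:

```python
# Sketch: emulating "Variation seed" by blending two initial noise latents.
# The checkpoint is a placeholder; any SD 1.5 model should work.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint
).to("cuda")

def noise(seed, shape):
    g = torch.Generator(device="cuda").manual_seed(seed)
    return torch.randn(shape, generator=g, device="cuda")

def slerp(a, b, t):
    # Spherical interpolation between two noise tensors of the same shape.
    af, bf = a.flatten(), b.flatten()
    omega = torch.acos(torch.clamp((af / af.norm()) @ (bf / bf.norm()), -1.0, 1.0))
    return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)

shape = (1, pipe.unet.config.in_channels, 64, 64)  # latents for a 512x512 image
base = noise(4213853404, shape)                    # the seed you want to keep

for i in range(4):
    latents = slerp(base, noise(1000 + i, shape), t=0.05)  # low variation strength
    img = pipe("a portrait", latents=latents,
               num_inference_steps=20, guidance_scale=7.0).images[0]
    img.save(f"variation_{i}.png")
```

Each output stays close to the base image; raising t makes the changes bigger, mirroring "Variation strength" in the UI.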
📌 Recommendations to improve your results (a code sketch applying them follows this list):
SD 1.5 - Use a VAE for better colors and details. You can use the VAE that comes with the model, or download vae-ft-mse-840000-ema-pruned (ckpt or safetensors file) from https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main into the "stable-diffusion-webui/models/VAE" folder. In the settings, find "SD VAE", refresh it, and select "vae-ft-mse-840000-ema-pruned" (or the version included with the model). Click the "Apply settings" button at the top. The VAE that comes with the model is vae-ft-mse-840000-ema-pruned, so you don't need both; the one you downloaded will work very well with most other models too.
Use hires-fix with SwinIR_4x / 4x-UltraSharp / 4x-AnimeSharp / RealESRGAN_x4plus_anime_6B (Upscaler Download): first pass around 512x512, second above 960x960, keeping the ratio between the two passes the same if possible.
Use negatives, but not too many. Add them when you see something you don't like.
Use CFG 7.5 or lower; with heavy prompts that are long and use many emphases, you can go as low as 3.5. Generally, try to minimize the use of emphasis: you can just put the more important things at the beginning of the prompt. If everything is important, don't use emphasis at all.
Make changes cautiously; changes made at the beginning of the prompt have more influence, so every concept can shift your results drastically.
Read and use the manual (https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features).
Learn from others, copy prompts from images that look good, and play with them.
DPM++ 2M Karras is the sampler of choice for many people, including me. 40 steps are plenty; I usually use 20.
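As promised above, here is what those settings look like in code. A minimal sketch, assuming a diffusers-based setup (WebUI users set the same things in the UI); the checkpoint path is a placeholder:

```python
# Sketch: VAE, sampler, CFG, and step count from the recommendations above.
from diffusers import (AutoencoderKL, StableDiffusionPipeline,
                       DPMSolverMultistepScheduler)

pipe = StableDiffusionPipeline.from_single_file(
    "babes.safetensors"  # placeholder: your downloaded checkpoint
).to("cuda")

# The recommended VAE (diffusers-format mirror of vae-ft-mse-840000-ema-pruned).
pipe.vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda")

# DPM++ 2M Karras, the sampler mentioned above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "portrait of a woman, detailed face",
    negative_prompt="nose stud, piercing",  # recommended negatives
    num_inference_steps=20,                 # 20 is usually enough; 40 is plenty
    guidance_scale=7.5,                     # CFG 7.5 or lower; ~3.5 for heavy prompts
    width=512, height=512,                  # first pass ~512, then hires-fix/upscale
).images[0]
image.save("result.png")
```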
Discord server for help, sharing, show-offs, experiments, and challenges
Description
Using older VAE.
Inspired by SamDoesSexy Blend.
Influenced by: SDHero-Bimbo-Bondage, Pit Bimbo, Analog Diffusion, Dreamlike Diffusion, Redshift Diffusion.
Core influence: MidJourney v4, Studio Ghibli, CopeSeetheMald v2, F222, SXD 0.8.
Notice: If you see skin artifacts and noise, please add "freckles" to the negative prompt. If you want freckles, write in your positive prompt: "(freckles:0.7)", values under 0.8 seem to produce normal freckles in my tests.
I'm using VAE: vae-ft-mse-840000-ema-pruned.
Upscaling used for hires-fix: SwinIR_4x with "Upscale latent space image when doing hires. fix".
No hypernetworks.
xformers - a small performance improvement that might change the results, not a must-have.
Comments
Looks great! I'm getting pretty good results too but not exactly like yours. Are you using a VAE or hypernetwork you didn't mention?
VAE: vae-ft-mse-840000-ema-pruned.vae.
No hypernetwork.
Also, I'm using xformers, so it might affect the results.
Possibly upscaling difference, I use: SwinIR_4x with "Upscale latent space image when doing hires. fix".
But if you want to adjust the results for your taste, just tweak the prompt and use img2img with more variations.
Great! VAE was the thing. Already had it. I'm surprised PNG info didn't autoload it. I use xformers too. What a godsend. I'll keep in mind that upscaler. I normally stick to photo realistic but playing with your model that would be best. Thanks.
It might be best to upload pre-upscaled. It might keep them png. But people can change .jpeg to .png when saving and, crazily, that works to pull PNG info.
The upscaling is done automatically by hires-fix, and it doesn't save the smaller version, and if you don't use hires-fix the image can be deformed. Strange that they don't add it to png info, probably just forgot about it.
Oooohh yeah. I didn't know there was upscaling involved in hires fix. I just thought it was two stage render. Very interesting. Thanks.
Where can I get this vae: vae-ft-mse-840000-ema-pruned.vae?
Any chance you can attach the VAE that you used?
It's: vae-ft-mse-840000-ema-pruned.vae.
Do you have it?
Anyway, if you don't have it, it's here: https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main.
I’ll grab it, thanks.
https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cf074d90-8834-4e10-82da-db08d4a34500/width=896
Did you edit this image a bit? Because my result is slightly different. It could just be video card differences; I used the same VAE as you and spent a while testing any variation that could have caused this.
No, I did not edit it. There are a few things that can be different.
1. Upscaling used for hires-fix, I use SwinIR_4x with "Upscale latent space image when doing hires. fix".
2. I use xformers - a small performance improvement that might change the results, not a must-have.
3. Also, WebUI just got updated with some changes related to color correction; if you updated to the latest version, the results may be altered.
4. Another thing that someone pointed out is that hardware differences may cause small changes; a bunch of people tested the same prompt with the same settings, and the results weren't identical.
5. Also, I have seen on my own system that running as part of a batch or not may change the results a little.
6. And I suspect there are hidden variables inside modules we can't change that produce slightly different results sometimes.
Did you do everything in txt2img, or did you use img2img? How many steps do you use with "Upscale latent space image when doing hires. fix"?
100% txt2img.
Show the result that you are getting, maybe I can figure out what's wrong.
https://imgur.com/gHT2J0z - the result that should be the same as the original
PS: everything on my end is on the latest version.
Ok, it looks very similar with a very subtle change. As I said in the previous message, there might be changes between systems, and even on the same system over time, that would generate a slightly different result.
Look at this image: https://files.catbox.moe/yzzkvb.png
It was generated with the same settings, but I turned on the "Extra" option near the seed, and Variation strength was set to 0.01, so the change will be super small with each generation. Generate as many images as you want this way until you find the result that you like. Each image has its Variation seed, so you can always reproduce it.
I just don't understand why the arms have changed so much from the original; I'm trying to figure it out. I understand the image you sent me now, because the shape hasn't changed, like the size of the bust and arms.
Small changes happen because the system is not perfectly deterministic.
I hope that by using Variation Seeds you'll find something that you like even better.
see: https://i.imgur.com/ePIR0Tu.png ??
With a different seed, I still got the "same" result as the image you sent now, but I can never get the same as the original. Another thing: it looks like yours generates more particles (because of "Upscale latent space image when doing hires. fix") and details (???).
I think these details are from the video card, because when I generate them on Google Colab instead of my PC, these details are different.
To be honest, I don't see much of a difference between any of them. I'm not sure why you think that any of them is significantly better than the other. But I wish you the best with your search.
Is it because everything is identical, even the particles around the image and the teeth too? But why are only the arms and chest different? I want to find out, because it could help me overhaul a lot of my images. https://i.imgur.com/gHT2J0z.png
Maybe you should ask on Reddit, or in forums. There are many talented people in the community that probably can help you.
I don't use embeddings; I'm not sure what you are talking about.
Fuck, that's awesome! But if it is not a portrait drawing, the eyes and face are distorted.
The model is not super realistic; it has its own base style, which you can influence by adjusting trigger words and weights. Also, there is huge variation between prompts and seeds. And if you don't use hires-fix and only use low resolutions, it is much more distorted.
I would encourage you to experiment with it and share if you find something interesting.
Hi,
I love the model, and I tried to merge the models myself, but it didn't work out well. If you remember the merge weights and the order, can you please tell us?
I've tried to improve SamDoesBimbo, so it's the core component. If I'm not mistaken, the formula should be:
(SamDoesBimbo +w0.3 (((Analog + Dreamlike - 1.5) + Redshift - 1.5) +w0.2 (ThePitBimbo +w0.4 sdHeroBimbo)))
+ for Add Difference
- for the third model in Add Difference - always SD v 1.5
+w0.X for Weighed Sum, with Multiplier 0.X
The exact formula for SamDoesBimbo is pretty much lost, from what I understood from the creator. I've tried to reproduce it myself but it's pretty much impossible without some advanced analysis and reverse engineering.
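To make the notation concrete, here is a rough Python sketch of the two merge operations the formula uses. Paths are placeholders, and key mismatches between checkpoints are glossed over; this is an illustration of the arithmetic, not a full merge tool:

```python
# Sketch: WebUI's two merge modes as plain state-dict arithmetic.
from safetensors.torch import load_file, save_file

def weighted_sum(a, b, m):
    # "+w0.X": result = (1 - m) * A + m * B, with multiplier m = 0.X
    return {k: (1 - m) * a[k] + m * b[k] for k in a.keys() & b.keys()}

def add_difference(a, b, c):
    # "+" with "-" model C: result = A + (B - C); C is always SD 1.5 here
    return {k: a[k] + (b[k] - c[k]) for k in a.keys() & b.keys() & c.keys()}

# Placeholder paths for the checkpoints named in the formula.
sd15 = load_file("sd-v1-5.safetensors")
analog = load_file("analog-diffusion.safetensors")
dreamlike = load_file("dreamlike-diffusion.safetensors")
redshift = load_file("redshift-diffusion.safetensors")

# Innermost part of the formula: ((Analog + Dreamlike - 1.5) + Redshift - 1.5)
inner = add_difference(add_difference(analog, dreamlike, sd15), redshift, sd15)
save_file(inner, "inner.safetensors")
```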
thank you so much!
@alexds9 Thank you for posting the formula. I would like to play around with this too but I can't seem to find the SamDoesBimbo model anywhere. Can you point me in the direction of where I can find it? Thanks.
@aferwy
This one: https://civitai.com/models/1180/samdoessexy-blend
If you want to chat about mixing, you can find me on Discord.
@alexds9 Thanks a lot. My free time to play around with this is pretty sporadic right now, but I'll drop you a line if I have some questions.
https://imgur.com/a/iYAoWQb I'm getting weird, super-saturated, overexposed results, only with this model, using the same VAE.
Hi,
If you are using the latest WebUI, currently there is a problem with hires-fix that breaks everything...
Over-saturation and overexposure can be caused by high CFG.
Also, it looks like your image was produced with much fewer steps. This is how my version of the image looks when I use 512x512 without hires-fix: files.catbox.moe/juzfg0.png.
You shared your image with imgur, and it removes PNG info, so I can't really know what exactly you are doing. You can use catbox.moe and share the image link here; then I'll be able to see the PNG info and maybe understand the problem. But generally, you should set all the settings to the values from the PNG info as a whole, using "Copy Generation Data", not just part of it.
I've already mentioned in another comment what may cause differences between examples and images you are getting, so maybe it can be useful:
1. Upscaling used for hires-fix; I use SwinIR_4x with "Upscale latent space image when doing hires. fix".
2. I use xformers - a small performance improvement that might change the results, not a must-have.
3. Also, WebUI just got updated with some changes related to color correction; if you updated to the latest version, the results may be altered.
4. Another thing that someone pointed out is that hardware differences may cause small changes; a bunch of people tested the same prompt with the same settings, and the results weren't identical.
5. Also, I have seen on my own system that running as part of a batch or not may change the results a little.
6. And I suspect there are hidden variables inside modules we can't change that produce slightly different results sometimes.
Thanks so much for the reply! I did a generation again with 40 steps, but the oversaturation is still there. I checked on a fresh version of AUTOMATIC1111, same results. Upscaling would probably help, but I get good, clean results with other models, so I'm wondering what could be going on here.
https://files.catbox.moe/ytrajz.png
It is especially visible in images where faces are small.
https://files.catbox.moe/kbzhpz.png
Here is a random image from another model without any oversaturation or overexposure.
https://files.catbox.moe/ubpta6.png
Hi spewor,
Sorry for the late response; the site did not push a notification about your comment.
The contrast and lighting of the scene can be influenced by many factors:
1. I've heard a few times that high CFG can cause colors to burn. Also, there are no absolute good values: if the prompt gets complicated with a lot of emphasis, the CFG should be adjusted down. It's not an exact science, just a rule of thumb. Keep it in mind and experiment with it. Maybe you can remove unnecessary emphasis. You can try lowering CFG, even to the area of 3, and see if it helps.
2. The lighting/mood of the scene - certain words in the prompt may be correlated with certain lighting and contrast conditions. Obviously, it's practically impossible to predict, so we can try to compensate for it in the prompt: in the positive, try to remove things that may cause it, like "vivid colors", and add "soft light, washed colors,". I'm not sure if it really helps, but little tweaks can improve the situation.
Here is what I got from implementing my own suggestions. I think it got a little better; check the PNG info: https://anonfiles.com/LcMcA8Q4ya
I'm getting the same super-saturated images. I think it might be something external, like the VAE, but I haven't been able to find what.
@perezbalen179
Hi! Have you looked at my recommendations to improve the results here and in the description of the model, specifically regarding CFG and VAE?
You can visit Babes Art Discord server help channel.
@alexds9 It was that I hadn't renamed babes_babes11.vae.pt. Doing this seems to have fixed it for me.
nice. merges well with moistmix and similar.
Fantastic work. What is the best way to keep the mouth closed so it's not showing teeth?
I think that if you want a closed mouth, it should be in the positive: "close mouth, closed_mouth", and in the negative: "open_mouth, open mouth, teeth". You can play with similar ideas and try using Danbooru tags.
can you post the pickle version?
I added a ckpt version, you can choose it in the download button.
May I ask why? Is pickle better?
@among_us
Both ckpt and safetensors contain the model's data, but a ckpt file may contain scripts that can be malicious; safetensors can't contain such scripts, so it is a safer format to use. But some Stable Diffusion clients only support the ckpt format, so some people have to use it.
If you get a ckpt file from a source you don't trust and you use AUTOMATIC1111/stable-diffusion-webui, you should convert it to the safetensors format. Make sure not to load the model; go to Checkpoint Merger and select the ckpt you want to convert as both the A and B model. Set the multiplier to 0, select Weighted sum, and set Checkpoint format to safetensors. Now you can merge. Delete the ckpt file and use the safetensors file.
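If you'd rather not go through the Checkpoint Merger, the same conversion can be done with a short Python script. A hedged sketch (filenames are placeholders), using torch and the safetensors package:

```python
# Sketch: converting a ckpt to safetensors without loading it into WebUI.
# weights_only=True asks torch.load to refuse pickled code, which is the
# whole point; some exotic checkpoints may fail to load under this flag.
import torch
from safetensors.torch import save_file

ckpt = torch.load("model.ckpt", map_location="cpu", weights_only=True)
state_dict = ckpt.get("state_dict", ckpt)  # SD checkpoints often nest here

# safetensors stores tensors only, so drop any non-tensor entries.
tensors = {k: v.contiguous() for k, v in state_dict.items()
           if isinstance(v, torch.Tensor)}
save_file(tensors, "model.safetensors")
```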
I love that you change your cover image almost daily!
What's the license here? Do I have to follow Dreamlike's awful license?
My license appears on this page. I haven't placed any limitations on use, so do as you wish. If you notice, my model is only inspired and influenced by other models.
I can't comment on the licenses of other models, whether they are good or bad, or whether anyone can enforce them at all. You could ignore them, because you can mix whatever you want and nobody would ever know, and it will probably become a transformative work anyway. So I would not comment on this subject. You should do whatever you think is the right thing for you to do.
Good day!
@alexds9 Hello, I think it's because they see no license icon information like on other models, that's why. And thanks for the explanation of your license!!
@KingSize_RedPillow
Icons represent restrictions; I didn't add any restrictions on any use.
I don't believe any of those licenses are enforceable anyway or should be enforced.
If you want to make money from a model, it is very unwise to make it available to download. The moment you release the model, nobody can control how it will be used, even if you add some meaningless text and call it "license".
@alexds9 Oh OK, I didn't think about that with the icons; you're right about those things. It's like a patchwork of styles when I look at the models shared: every model builds on other models, or adds new things to upgrade them, or does other things and other styles.
Sorry, I removed my last two image generations; I'm going to fix the problem with my prompt. Thanks @alexds9 for your directions toward the solution ;).
I understand now that every model has a different prompt structure in the general case, and parameters too.
Question: what's the difference between "midjourney style" and no "midjourney style"?
I'm not sure.
@alexds9 I tried, but I didn't see a lot of difference. Maybe it's the fantasy implementation?
Maybe, tell me if you find out...
@alexds9 if i find.. ^^
@KingSize_RedPillow
Yes, it is part of the base model. But I don't know how it influences the results when you use it in the prompt.
@alexds9 Hmm, I didn't know either.
I ran a little test for multiple styles. https://i.imgur.com/XLlJU3c.jpg - each style will affect almost everything from clothing to art style.
@Burns Thank you ;) for your share and your time :)
I got a beautiful image, but the eyes were messed up. How can I fix them?
I tried to use inpainting, but it doesn't seem to work. Any advice?
I transferred the image into "inpaint" right after I generated it, and I set the masked content setting to "fill".
I haven't used inpainting since long before the last big change, so I can't help you with that. Usually, most of the problems with eyes are solved if you don't use face restoration and you do use hires-fix, with a target resolution set to 960x960 or more. But it's hard to tell what the actual problem is without seeing it. If you want, use catbox.moe to share your image. With the PNG info from the image, I'll probably be able to understand the problem better.
This site can fix eyes
@alexds9
thanks for your offer to help!
And I don't really know how to use hires-fix. I'll look it up in the guide and see if it results in something. As for the image, I used catbox; you can see it below, as well as the parameters.
https://files.catbox.moe/7ffwk4.png
Prompt: extremely detailed, beautiful, high detail, intricate, sharp focus, woman, detailed pupils, pale skin, huge cleavage, happy, sexy, glossy, dreamlike art, style, blue eyes, blue hair, flowing hair, hard nipples, unbuttoned shirt, analog style, ducklips, full lips, pressing boobs
Negative prompt: deformed eyes, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), ((morbid)), ((mutilated)), extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), missing limbs, missing arms, ((torso out of frame))
Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4213853404, Size: 768x1024, Model hash: a8273a26, Denoising strength: 0.75, Mask blur: 4
@Shero Damn, that's not straightforward (for someone who isn't Python-savvy). And I'm not sure if it's able to solve double pupils... I might give it a go as a last resort. Thanks for sharing it!
@Tazeps
Have you seen my recommendations to improve the results, in the description of the model?
From the image, it looks like maybe you are missing the VAE, is that possible?
Also, you are using a resolution above 768 without hires-fix, and it can produce duplicates and deformed images sometimes.
I'm not sure what exactly is causing this problem with the pupils. When I tried your settings, I got a slightly different image. Maybe try removing the color of the eyes and describing them less; the prompt is kind of overdoing them. This is what I got. Perhaps it is something related to reflections and the quality description of the image, I'm not sure.
@alexds9 I did download the VAE, but it didn't show up in the list when I wanted to select it, so I had to do without it.
But actually, I just redid the pic with the same parameters, and sometimes it came out just fine :D I didn't think re-rendering could fix such artifacts.
I'll make note of your comments though! thanks!!
Then I take it the recommended res is 960x960
@Tazeps
VAE is very important; it improves the results significantly, including the eyes. I would advise you to check that you downloaded the VAE into the correct folder and selected it in the settings.
defs the VAE
@Shero Thank you so much for posting this, I'd never heard of it and struggled so much trying to fix eyes in Photoshop. This tool is AMAZING, I've run about 300 images through it already and it has fixed about 95% of the funky eyes and even made other slight enhancements. Now my AI Art passion has been reignited!
@VoiceLikeCandy Glad to help
It says it was updated Jan 27 but doesn't say what the update is. Do I need it?
I only updated the preview image.
This checkpoint lacks range, but what it does, it does fantastically.