Read "About this version" to see what changes were made to the model. I may make changes you don't like, and you might want to stay on an older version.
The only authorized generation service outside Civitai is yodayo.com
Maintaining a Stable Diffusion model is very resource-intensive. Please consider supporting me via Ko-fi.
AingDiffusion will ALWAYS BE FREE.
EXP models will be updated here to reduce confusion: https://civarchive.com/models/52780.
===
AingDiffusion (read: Ah-eeng Diffusion) is a merge of a bunch of anime models. This model is capable of generating high-quality anime images.
The word "aing" comes from informal Sundanese; it means "I" or "my". The name reflects that this model produces images that match my taste.
Guide to generating good images with this model
(NOT REQUIRED SINCE v7.7. FOR AINGDIFFUSION v7.7 AND UP, SET THE VAE TO NONE)
Use the VAE I included with the model. To set up the VAE, you can refer to this guide. Use a negative textual inversion (e.g. EasyNegative); it will help you a lot.
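If your web UI accepts a VAE path at launch (for example, AUTOMATIC1111's `--vae-path` flag), you can point it at the bundled VAE directly. A sketch only; the filename below is a placeholder for whatever VAE file ships with the model version you downloaded:

```shell
# webui-user.sh (AUTOMATIC1111 Stable Diffusion web UI)
# Placeholder path: substitute the actual VAE file you downloaded.
export COMMANDLINE_ARGS="--vae-path models/VAE/your-downloaded-vae.safetensors"
```

Alternatively, you can select the VAE from Settings > Stable Diffusion in the web UI without restarting.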
Recommended samplers are "Euler a" and "DPM++ 2M Karras" for AingDiffusion v7.1 and up.
Hi-res fix is a must if you want to generate high-quality and high-resolution images. For the upscaler, I highly recommend 4x-UltraMix Balanced or 4x-AnimeSharp.
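Hi-res fix renders at the base resolution first, then upscales the result and refines it with a second denoising pass. One practical detail is that SD 1.x target resolutions should be multiples of 8 (a latent-space constraint). A small helper of my own for computing the snapped target size (an illustration, not part of any web UI or extension):

```python
def hires_target(width: int, height: int, scale: float = 2.0, multiple: int = 8):
    """Compute the hi-res fix target resolution, snapped to a multiple of 8
    so it stays valid for SD 1.x latent space."""
    def snap(v: float) -> int:
        return int(round(v * scale / multiple)) * multiple
    return snap(width), snap(height)

print(hires_target(512, 768))  # a 2x hi-res pass on a 512x768 base -> (1024, 1536)
```

Most web UIs do this snapping for you; the helper just makes the arithmetic behind the "upscale by" slider explicit.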
Set Clip skip to 2 (optional; use it if you want more creative output that doesn't follow the prompt 100%), ENSD (eta noise seed delta) to 31337 (to reproduce images exactly), and eta (noise multiplier) for ancestral samplers to 0.667.
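The eta multiplier controls how much fresh noise an ancestral sampler injects at each step. A toy scalar sketch of one Euler a step, following the sigma_up/sigma_down split used by k-diffusion (a simplified illustration, not the web UI's actual tensor implementation):

```python
import math
import random

def euler_ancestral_step(x, sigma, sigma_next, denoised, eta=0.667, rng=random):
    """One scalar Euler-ancestral step. eta scales how much fresh noise
    is injected; eta=0 makes the step fully deterministic."""
    # Split the noise schedule into a deterministic part (sigma_down)
    # and an eta-scaled stochastic part (sigma_up), as k-diffusion does.
    sigma_up = min(sigma_next,
                   eta * math.sqrt(sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2))
    sigma_down = math.sqrt(sigma_next**2 - sigma_up**2)
    d = (x - denoised) / sigma            # derivative estimate toward the denoised sample
    x = x + d * (sigma_down - sigma)      # deterministic Euler step
    return x + rng.gauss(0, 1) * sigma_up # inject eta-scaled fresh noise
```

With eta = 0 the step is fully deterministic; larger values trade seed reproducibility for variety, and 0.667 is the value recommended above.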
FAQ
Q: What's up with the frequent updates?
A: AingDiffusion is a model I use daily, not something I merge for popularity or download counts. I make constant efforts to improve the model whenever possible and want to share those improvements as quickly as I can.
Q: I can't generate good images with your model.
A: The first thing to remember is that every little change matters in the world of Stable Diffusion. Try adjusting your prompt, using different sampling methods, adding or reducing sampling steps, or adjusting the CFG scale.

Keep experimenting and have fun with the models!
Description
Changelog:
AingDiffusion v8.17 a.k.a Indonesia Independence Day Special Update.
Minor enhancements to backgrounds and a high-key lighting fix.
Dropped the LoCon I used to merge into AingDiffusion. Sometimes it works, sometimes it doesn't, and it only breaks seeds. If you still want to use it, here's the link to EnvyBetterHands.
===
Personal note:
I moved my whole workflow to Nobara Linux, which provides only a small improvement over my Windows installation (from ~4.5 it/s to ~5.0 it/s with Euler a at 512x768). Installing a Fedora-based OS is a pain in the ass. Even my Ethernet port isn't working.
Comments
How do I use adetailer extensions for fixing hands? Like what tags should I use in the negative prompt and positive prompt in adetailer tab.
I don't use ADetailer, sorry.
I personally just leave the prompts blank, but adjust the Inpaint denoising strength in the ADetailer inpaint tab. I guess there might be a better way to use it but I haven't figured it out yet.
@NatanS8 What should I adjust in the inpainting tab? Every time I enable the hand model in ADetailer, the hand turns into a face for some reason. And sometimes it does absolutely nothing. It only works with faces for me.
@forgerloid If it turns hands into faces, that's more of a detection issue than an inpainting one, I think. Make sure you're using the right ADetailer model.
@NatanS8 I was able to fix it. Apparently you're supposed to enable the hand model as the first detection model and the face model as the second. That prevents inpainting parts of the picture that were already detected.
Bro, which sampler works well with this?
It depends, but I use DPM++ SDE Karras or Euler a/Euler.
Deceptively good at what it can actually do for you. I love this model.
do you have any recommended parameters when using the Dynamic Thresholding (DT) extension with this model?