Read "About this version" to see what changes were made to the model. Some changes may not be to your liking, in which case you may want to stay on an older version.
The only authorized generation service outside Civitai is yodayo.com
Maintaining a Stable Diffusion model is very resource-intensive. Please consider supporting me via Ko-fi.
AingDiffusion will ALWAYS BE FREE.
EXP models will be updated here to reduce confusion: https://civarchive.com/models/52780.
===
AingDiffusion (read: Ah-eeng Diffusion) is a merge of a bunch of anime models. This model is capable of generating high-quality anime images.
The word "aing" came from informal Sundanese; it means "I" or "My". The name represents that this model produces images relevant to my taste.
Guide to generating good images with this model
(NOT REQUIRED SINCE v7.7. FOR AINGDIFFUSION v7.7 AND UP, SET THE VAE TO NONE)
Use the VAE I included with the model. To set up the VAE, you can refer to this guide.
Use a negative textual inversion (e.g. EasyNegative); it will help you a lot.
Recommended samplers are "Euler a" and "DPM++ 2M Karras" for AingDiffusion v7.1 and up.
Hi-res fix is a must if you want to generate high-quality and high-resolution images. For the upscaler, I highly recommend 4x-UltraMix Balanced or 4x-AnimeSharp.
Set Clip skip to 2 [optional; use it if you want more creative output that doesn't follow the prompt 100%], ENSD (eta noise seed delta) to 31337 [to replicate images], and eta (noise multiplier) for ancestral samplers to 0.667.
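For anyone generating through the AUTOMATIC1111 webui's txt2img API instead of the UI, the recommended settings above can be sketched as a request payload. This is a hedged example: the prompt is a placeholder, and the exact option keys (e.g. `CLIP_stop_at_last_layers`, `eta_noise_seed_delta`, `eta_ancestral`) reflect the webui's settings names at the time of writing and may differ between webui versions.

```python
# Sketch: the recommended AingDiffusion settings as an AUTOMATIC1111
# webui /sdapi/v1/txt2img payload. Prompt text is a placeholder only.
payload = {
    "prompt": "masterpiece, best quality, 1girl",  # placeholder prompt
    "negative_prompt": "EasyNegative",             # negative textual inversion
    "sampler_name": "DPM++ 2M Karras",             # or "Euler a"
    "steps": 28,
    "cfg_scale": 7,
    "enable_hr": True,                             # hi-res fix is a must
    "hr_upscaler": "4x-AnimeSharp",                # or 4x-UltraMix Balanced
    "hr_scale": 2,
    "denoising_strength": 0.5,
    "override_settings": {
        "CLIP_stop_at_last_layers": 2,             # Clip skip 2 (optional)
        "eta_noise_seed_delta": 31337,             # ENSD, to replicate images
        "eta_ancestral": 0.667,                    # eta for ancestral samplers
    },
}
```

You would POST this dict as JSON to a running webui instance with the API enabled (`--api` flag).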
FAQ
Q: What's up with the frequent updates?
A: AingDiffusion is a model I use daily, not something I merge to gain popularity or for the sake of download count. I make constant efforts to improve the model whenever possible and want to share the improvements as quickly as possible.
Q: I can't generate good images with your model.
A: The first thing to remember is that every little change matters in the world of Stable Diffusion. Try adjusting your prompt, using different sampling methods, adding or reducing sampling steps, or adjusting the CFG scale.
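The trial-and-error advice above can be made systematic with a small parameter sweep. This is a minimal sketch, assuming you are scripting against something like the webui API: it only builds the payload variants, so each CFG/steps combination can be generated and compared side by side. The helper name `sweep` and the base payload fields are illustrative, not part of the model's documentation.

```python
from itertools import product

def sweep(base_payload, cfg_scales=(6, 7, 8), steps_options=(20, 28)):
    """Yield a payload variant for each (cfg_scale, steps) combination."""
    for cfg, steps in product(cfg_scales, steps_options):
        # Copy the base payload and override the two knobs being swept.
        yield dict(base_payload, cfg_scale=cfg, steps=steps)

# 3 CFG values x 2 step counts -> 6 variants to generate and compare.
variants = list(sweep({"prompt": "1girl, solo", "sampler_name": "Euler a"}))
```

Generating one image per variant with a fixed seed makes it easy to see which combination suits your prompt.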

Keep experimenting and have fun with the models! 😄
Description
Several fixes, including hands.
Successfully merged a LyCORIS into the model to fix hand-related problems, so the model is less likely to generate bad hands. Not a guarantee, but you'll feel the difference.
Also readjusted some merged LoRA weights because v7.1 felt a little overtrained.
Thank you for using AingDiffusion.
[July 13th, 09:03 AM GMT+7]
Comments (9)
Hello, my images suddenly turned gray after upgrading. What could be the reason? There was no problem before upgrading. (Is it the prompt? Or a LoRA?)
Per your recommended settings, I use ENSD 31337, eta 0.667, and the VAE is ClearVAE v2.3.
nsfw,masterpiece,best quality,ultra detailed,soft lighting,cyberpunk,Vaporwave,cowboy shot,1girl,solo,exhibitionism,mecha musume,cat ears,black hair,long hair,blunt bangs,hair over eyes,straight hair,payot,red eyes,expressionless,blush,navel,nude,robot joints,mechanical parts,curvy,wide hips,thick thighs,gigantic breasts,pussy,vagina,standing,arms behind back,outdoor,skyline,night,glow,<lora:more_details:0.2>,<lora:AMechaSSS[color_theme,mecha musume, mechanical parts,robot joints,headgear]:1>
EasyNegativeV2,bad-hands-5,(worst quality,low quality:1.4),(lip,nose,tooth,rouge,lipstick,eyeshadow:1),(dusty sunbeams:1),(bad anatomy),(inaccurate limb:1.2),bad composition,inaccurate eyes,(depth of field,bokeh,blurry:1.4),bad hands,extra fingers,fewer fingers,extra digit,fewer digits,(greyscale,monochrome:1),text,title,logo,signature,bar censor,censored,tail,ears,Wood,stone,
Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 796655788, Size: 512x768, Model hash: 4a07c5c931, Model: models_aingdiffusion_v75, Denoising strength: 0.4, Clip skip: 2, Token merging ratio: 0.5, Token merging ratio hr: 0.5, ADetailer model: face_yolov8n.pt, ADetailer prompt: "masterpiece,best quality,ultra detailed,1girl,solo,black hair,long hair,blunt bangs,hair over eyes,straight hair,payot,red eyes,blush,shy,moaning,", ADetailer negative prompt: "EasyNegativeV2,(worst quality,low quality:1.4),(lip,nose,tooth,rouge,lipstick,eyeshadow:1),(dusty sunbeams:1),(bad anatomy),(inaccurate limb:1.2),bad composition,inaccurate eyes,(depth of field,bokeh,blurry:1.4),(greyscale,monochrome:1),text,title,logo,signature,", ADetailer confidence: 0.3, ADetailer dilate/erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 23.7.5, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, Lora hashes: "more_details: 3b8aa1d351ef, AMechaSSS[color_thememecha musume mechanical partsrobot jointsheadgear]: 8120c60161ec", Eta: 0.667
I'm out of town now and cannot check, but I suspect it's the lineart LoRA I merged the model with acting up in some seeds.
Try adding "lineart" to your negative prompt, or just change the seed.
Have you also tried removing the "more_details" LoRA?
Or maybe you can send the sample output somewhere?
Edit: I have remote access to my PC and just checked. I don't see any problem with the generation. It may also be because I don't use ADetailer.
@kayfahaarukku Although I don't know why... it's okay now. Thank you very much!
I'm sorry to the 2.0k people who liked AingDiffusion. I accidentally unpublished the model page (._.)
Thank you for taking the time and publishing your work here! I love the model, it's become my #1 go-to.
Big oof moment.
LOL I wondered why I got so many notifications

