Alternatively, Clip Skip 1 or 2 can be used.
While this model may look fine to some, others may find it unpleasant. The solution is to add (realistic:0.1~1.4) to the prompt, or (realistic:0.1~1) to the negative prompt.
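The (realistic:0.1~1.4) notation above is AUTOMATIC1111-style attention weighting: the number after the colon scales how strongly the token influences generation. As a rough illustration of how such a token is read, here is a minimal parser — my own sketch, not the web UI's actual code, and the function name is hypothetical:

```python
import re

def parse_weighted_token(token: str) -> tuple[str, float]:
    """Parse an AUTOMATIC1111-style weighted prompt token like
    "(realistic:1.2)" into (text, weight). Plain tokens get weight 1.0."""
    m = re.fullmatch(r"\((.+):([\d.]+)\)", token.strip())
    if m:
        return m.group(1), float(m.group(2))
    return token.strip(), 1.0
```

For example, `parse_weighted_token("(realistic:0.8)")` yields `("realistic", 0.8)`, i.e. the token is de-emphasized below the default weight of 1.0.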
Default prompt: best quality, masterpiece
Default negative prompt: (low quality, worst quality:1.4)
Recommended samplers: Euler a, DPM++ SDE Karras. Steps: 20, CFG scale: 6. (Modified: you can use a higher scale and more steps. I'm not good at drawing pictures.)
Apply a VAE: kl-f8-anime2 or vae-ft-mse-840000-ema-pruned.
Clip skip 2
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt
https://huggingface.co/AIARTCHAN/aichan_blend/tree/main/vae
Apply a VAE; you will get better color results.
Hires. fix: denoising strength 0.5, upscale by 2. Upscaler: Latent or R-ESRGAN 4x+ Anime6B.
If you don't upscale with hires. fix, you may not get the results you expect.
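Hires. fix renders at the base resolution and then re-denoises at an upscaled size. A small sketch of the target-size arithmetic (my own helper, not web UI code), keeping dimensions on the multiple of 8 that SD latents require:

```python
def hires_fix_size(width: int, height: int, upscale_by: float = 2.0) -> tuple[int, int]:
    """Target resolution for the hires. fix second pass.

    Stable Diffusion latents are 1/8 scale, so both sides are
    rounded to the nearest multiple of 8.
    """
    def round8(v: float) -> int:
        return int(round(v / 8) * 8)
    return round8(width * upscale_by), round8(height * upscale_by)

# e.g. a 512x768 render upscaled by 2 is re-denoised at 1024x1536
```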
This model seems to have a better batting average with hands than other models, but that is just my personal opinion; please test it yourself. I don't recommend weighting the prompt too heavily. Even without a hand prompt, hands come out okay about 3 times out of 10.
The closer the person is, the more detailed they are. Prompts like upper body and cowboy shot are also recommended.
I also confirmed it runs on Colab. It works very well.
other models
https://civarchive.com/models/6437/anidosmix
https://civarchive.com/models/8437/ddosmix
https://civarchive.com/models/6925/realdosmix
Comments (56)
Bro, amazing. Could you train these kinds of models?
I don't know what that means, but do whatever you want
If you're asking whether I could make this kind of thing again: I'm working on another model.
I used kl-f8anime2, but vae-ft-mse-840000-ema-pruned seems to be fine too.
Thank you very much
A technical question: is hires. fix already included in the latest version of the AUTOMATIC1111 SD web UI? And the upscalers mentioned in the description of this really good model/mix?
I can't explain it exactly either. There are others, including me. Try the latest update or a web UI extension.
OK, thank you anyway for your reply.
Could you tell me how to add a VAE?
I'm using Colab, and I'm very new to this.
Go to Google Drive and put the VAE in SD/Stable Diffusion/models/VAE.
Hello, this model looks awesome. Sorry for the question: does the model need a VAE?
Of course. For semi-realism, vae-ft-mse-840000-ema-pruned is good; for anime, kl-f8-anime2 is good.
Many thanks 🙏
Hello, this model looks good. Sorry for asking, but my pictures come out so blurry. How can I make them as clear as yours?
Apply a VAE: kl-f8-anime2 or vae-ft-mse-840000-ema-pruned.
@DiaryOfSta where to?
@Anying Hugging Face
All of my attempts to use this model result in washed-out images, dull colors, and artifacts. This happens when I try to replicate posted images or generate my own. Any thoughts as to the cause?
Apply a VAE: kl-f8-anime2 or vae-ft-mse-840000-ema-pruned.
Is this meant to be used together with a VAE? Where can I download these VAEs, kl-f8-anime2 and vae-ft-mse-840000-ema-pruned?
I recommend the latter; it works great and can be used with almost any model.
Why does this model draw characters younger and younger the more images you generate? Also, in low-angle shots the characters all open their mouths wide. It's so strange.
I can't use it at all; there's no CKPT file.
@Dada233333 Just download the model and drop it into models\Stable-diffusion.
I don't have a PC; this is a slow Apple machine, sigh...
It's probably related to the images it was fed during training.
Very cute style, but why do blotchy purple streaks often appear on the characters' skin? This problem leaves fairly large flaws in many images.
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt
https://huggingface.co/AIARTCHAN/aichan_blend/tree/main/vae
Apply a VAE; you will get better color results.
@DiaryOfSta Thank you! I'll give it a try.
same here
Why do many people use SD 1.5 instead of 2.0?
This is because model training is all set up for 1.5.
Because 2.0 is censored garbage, so it's harder to train.
1.5 > 2.x any day.
1.5 blocked NSFW, so it's garbage.
It is widely agreed that 2.0 is not a very good version of Stable Diffusion. 1.5 tends to be much more stable and generates better results.
The generated eyes look weird: one eye always has makeup and the other doesn't. Does anyone know why? It's urgent, waiting online!
It seems the picture breaks because the person is far away. Use prompts that specify a close-up of the person, such as upper body or detailed face. If you want a full body, you need to upscale to higher quality so the figure comes out well. If it's a color problem, try applying a VAE.
@DiaryOfSta Thank you very much, the problem has been solved. This model is amazing!
How did you solve it, bro?
This is my favorite commercial art style model! It's an Asian aesthetic, thank you so much for your work!!! The img2img model works great!!!
Hi... I saw your Cartoonish model... Can you re-upload it?
ok
This is a great model with amazing potential. The faces I get, however, are not good at all. The features are warped and eyes are weirdly outlined with red lines. I’m using the suggested settings and VAE. Is it because I’m running the model on my iPad with the DrawThings app?
Again. Thanks for the wonderful work and I appreciate any help. I’m fairly new to AI and unsophisticated with this stuff.
It's probably caused by the character being far away, so either upscale to a higher resolution, or use something like upper body to bring the character closer.
@DiaryOfSta Thanks! I will try that and see if it works.
Try ddetailer (or dddetailer, if you're using torch 2.0) with mmdet_anime-face_yolov3.pth.
@loathe What is this, an extension?
Got some pretty good results right off the bat. Been tinkering, and it does a lot better with LoRAs that don't work well with realistic models, or combinations of LoRAs that didn't mesh together well. Even got a lot of great images without any upscaling, though upscaling definitely makes everything better. I'll do some more messing around later with other LoRAs and share some results.
Hello DOS, your model is really amazing, but I have one question, was there an unpruned version maybe available at some point? I found some images which were generated with DosMix but with a different hash: 3cf9e337ad , or was this an older build? I'd love to know if there are some other versions out there and a full unpruned would be great to have if possible at all.
Hi developer, is [Semi-Anime_dosmix_unbaked] the same as dosmix?
Will this model be updated again?
Details
Files
dosmix_.safetensors
Mirrors
dosmix_.safetensors
6250_dosmix_.safetensors
dosmix.safetensors
dsmx.safetensors
dosmix_-fp16-no-ema.safetensors



