MeinaMix's objective is to produce good art with minimal prompting.
I have a Discord where you can share images, discuss prompts, and ask for help: https://discord.gg/meinaverse
I also have Ko-fi and Patreon pages where you can support me or buy me a coffee <3; it will be very much appreciated:
https://ko-fi.com/meina and https://www.patreon.com/MeinaMix
MeinaMix is officially hosted for online generation in:
- SeaArt
- Mage.space (with an animate feature)
MeinaMix and the other Meina models will ALWAYS be FREE.
Cover image LoRA made by: FallenIncursio | Civitai
Recommendations of use:
--------------------------------------------------------------------------------
Enable Quantization in K samplers.
Hires.fix is needed for prompts where the character is far away in order to get decent images; it drastically improves the quality of faces and eyes!
---------------------------------------------
Recommended parameters:
Sampler: DPM++ SDE Karras: 20 to 30 steps.
Sampler: DPM++ 2M Karras: 20 to 60 steps.
Sampler: Euler a: 40 to 60 steps.
CFG Scale: 4 to 9.
Resolutions: 512x768, 512x1024 for Portrait!
Resolutions: 768x512, 1024x512, 1536x512 for Landscape!
Hires.fix: R-ESRGAN 4x+ Anime6B, with 10 steps at 0.3 to 0.6 denoising.
Clip Skip: 2.
Negatives: ' (worst quality, low quality), (zombie, interlocked fingers) '
Negatives if you can't use Hires.fix:
'(worst quality:1.6, low quality:1.6), (zombie, sketch, interlocked fingers, comic)'
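For scripting, the recommended parameters above can be transcribed into a small settings table, e.g. for driving the webui API or a diffusers pipeline. The dictionary layout and the `clamp_steps` helper are purely illustrative, not an official config format:

```python
# Recommended MeinaMix generation settings, transcribed from the notes above.
# This layout is an illustrative sketch, not an official configuration format.
RECOMMENDED = {
    "samplers": {
        "DPM++ SDE Karras": (20, 30),   # (min_steps, max_steps)
        "DPM++ 2M Karras": (20, 60),
        "Euler a": (40, 60),
    },
    "cfg_scale": (4, 9),
    "portrait_resolutions": [(512, 768), (512, 1024)],
    "landscape_resolutions": [(768, 512), (1024, 512), (1536, 512)],
    "clip_skip": 2,
    "hires_fix": {"upscaler": "R-ESRGAN 4x+ Anime6B", "steps": 10,
                  "denoising": (0.3, 0.6)},
    "negative": "(worst quality, low quality), (zombie, interlocked fingers)",
}

def clamp_steps(sampler: str, steps: int) -> int:
    """Clamp a step count into the recommended range for a given sampler."""
    lo, hi = RECOMMENDED["samplers"][sampler]
    return max(lo, min(hi, steps))
```

For example, `clamp_steps("Euler a", 20)` returns 40, since Euler a is recommended at 40 steps minimum.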
--------------------------------------------------------------------------------
For the merged-model list (MeinaMix V1~11, MeinaPastel V3~6, MeinaHentai V2~5, Night Sky YOZORA Style Model, PastelMix, Facebomb, MeinaAlter V3), I do not have the exact recipe, because I did multiple mixes using block-weighted merges with multiple settings and kept the best version of each merge.
Description
Inpainting version of V11!
DO NOT USE IT FOR TXT2IMG :3
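For loading the inpainting checkpoint outside the webui, here is a minimal sketch using diffusers; the filename `Meina_V11_inpainting.safetensors` is a placeholder for wherever you saved the download, and the libraries are imported lazily so the sketch reads without them installed:

```python
def inpaint_with_meina(image, mask, prompt,
                       model_path="Meina_V11_inpainting.safetensors"):
    """Sketch of running an inpainting checkpoint with diffusers.

    `model_path` is a hypothetical filename -- point it at the
    downloaded inpainting checkpoint. Requires `torch` and
    `diffusers`; imported lazily so the sketch can be read without
    them installed.
    """
    import torch
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_single_file(
        model_path, torch_dtype=torch.float16
    ).to("cuda")
    # White areas of the mask are repainted; black areas are kept.
    result = pipe(prompt=prompt, image=image, mask_image=mask,
                  num_inference_steps=25, guidance_scale=7.0)
    return result.images[0]
```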
FAQ
Comments (82)
Thanks for V11 and the inpainting model is awesome. Finally! :D
Thank you for using my models!! (ノ◕ヮ◕)ノ*:・゚✧
Thanks for the inpainting version!
yw! ヾ(≧▽≦*)o
Thanks for work! This model my favorite now <3
Why do my V11 pictures come out grey?
Tutorial for MeinaMix, easy and fast!!
How to avoid image having really saturated colours?
Hmmm, you can try using another VAE in your Stable Diffusion setup, like this one: https://civitai.com/models/76118/vae-ft-mse-840000-ema-pruned
Hello, the faces in my images always look very strange, as if realistic facial features were pasted on in a very stiff way. (I posted example images in the Discord help channel.) I would be very grateful if you could point out what the problem is!
Try the ADetailer extension; it's great for fixing faces. Stop using the built-in face restoration.
Is there any way to keep the women from looking like they're 7 years old?
That's probably the prompt or the LoRA you're using :p
Try putting terms like "loli, child" in the negative prompt and "adult" in the positive prompt. But yeah, you should also check whether it's your LoRA creating images of children.
@Meina @CivitO_O I tried without any LoRAs, with "old" in the positive prompt and "child, loli" in the negative, but it still happens :/
What kind of prompts are you using? I've found that, for certain models, certain trigger words tend to be correlated with the age. For instance, triggers like "small breasts" or "huge breasts" can sometimes have an effect on the apparent age of the subject.
@mangaka92 ah that would make sense, i use small breasts and skinny usually
@sneksnek24 Yup, that would probably do it.
@sneksnek24 Having "small breasts" will often end up generating REALLY young girls; avoid using it if you don't like that, as the default "cup size" is already pretty small for my taste... lol
what vae should we use for this model or does it include its own baked in vae?
It does include a VAE; it's kl-f8-anime2.
@Meina Sorry, still a bit of a novice. So, does that mean I should not use a VAE with the Meina models since the VAE is already part of the model?
@psmallwood217549 I tested with and without an external VAE. The results are the same, which means an external VAE is not required with this model.
Do you have any plans of making a SDXL model at some point?
Yes! With the help of my community I got the money to buy a new GPU so I can train SDXL models; I'll start as soon as the new GPU arrives!! ヾ(≧▽≦*)o
This is one of my favourite anime models; are you considering doing an SDXL model? I would contribute in any way if you do! Keep it up!
Thank you!! ヾ(≧▽≦*)o
I'm going to be working on an SDXL model; I've been saving for a new GPU for months so I can improve my models and train on the newer versions of SD. (✿◡‿◡)
If you don't have a good GPU, you can run MeinaMix 11 with Google Colab (for free and without token limits). There's a useful guide on YouTube.
Sorry for the probably dumb question, but where should I drop this? There's the models folder, right, but in which subfolder should I put this model? (I heard putting the model in the correct folder makes it work correctly.)
No question is dumb! If you still need help with it, the correct folder is: SD webui main folder -> models -> Stable-diffusion (for checkpoints).
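In shell terms, the placement looks like the sketch below; `Meina_V11.safetensors` is a placeholder filename for whatever file you downloaded, and the `touch` line just stands in for the download:

```shell
# Where a checkpoint goes inside the webui folder (sketch).
# "Meina_V11.safetensors" is a placeholder for the downloaded file.
mkdir -p stable-diffusion-webui/models/Stable-diffusion
touch Meina_V11.safetensors        # stand-in for the real download
mv Meina_V11.safetensors stable-diffusion-webui/models/Stable-diffusion/
ls stable-diffusion-webui/models/Stable-diffusion/
```

After moving the file, click the refresh button next to the checkpoint dropdown in the webui so it picks up the new model.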
@Meina Thank youuuuuuu, I finally figured it out, love your model, great all around!!!
What is the best way to upscale in img2img? I don't want to use hires fix because I want to make ~50 renders, then choose one to upscale with img2img. Is there an option like hires fix but in img2img?
You can give the Ultimate SD Upscale extension a chance; it's an img2img script and it's slightly better than hires fix.
Yes, there is! Hires.fix is essentially just img2img, with two differences: 1) you can choose your upscaler, and 2) the steps are fixed rather than multiplied by the denoising strength as in img2img. Both can be replicated in img2img: in Settings > img2img you will find "Upscaler for img2img" and "With img2img, do exactly the amount of steps the slider specifies". Enabling the fixed steps and choosing the same upscaler as hires fix will make img2img behave exactly like hires fix. If you want to get the exact same image from img2img that hires fix produces, just use the same seed in img2img that you used in txt2img, and use Euler a, as that's what hires fix does.
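The step-count difference described above can be made concrete with a tiny calculation: by default img2img runs roughly steps x denoising strength actual sampling steps, while the "do exactly the amount of steps" setting (and hires fix) runs the full slider value. This is an illustrative approximation of the behaviour, not the webui's actual source code:

```python
def img2img_actual_steps(slider_steps, denoising, exact_steps=False):
    """Approximate number of sampling steps img2img really runs.

    By default the webui scales the slider value by the denoising
    strength; with the "do exactly the amount of steps" setting it
    uses the slider value directly. Illustrative approximation only,
    not the webui's exact source code.
    """
    if exact_steps:
        return slider_steps
    return max(1, round(slider_steps * denoising))
```

So 30 slider steps at 0.4 denoising run about 12 actual steps by default, but all 30 with the fixed-steps setting enabled.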
What VAE are you using? Thanks in advance!
Is this just for females? I'm trying to generate males but no luck.
Sadly, a lot of models are trained exclusively on attractive asian females, meaning they struggle to generate anything else.
I agree with the comment above me, but I can manage to make a femboy just fine; you probably need to add 1girl and other girly features to the negative prompt if you want the MAN part.
I'm wondering how best to write prompts: as a description or request, for example "draw me this so that it looks like this and so on", or as a list of what I want to see, for example "a girl, alone, holding a glass, a living room". Any ideas?
I think there's an extension for that somewhere, but I forgot the name, sorry.
Generally, you want to specify the style of image, pose, focal subject, details about the focal subject, setting, and background details (including background characters), in roughly that order. Then put parentheses around the nouns and adjectives that pertain to the focal subject.
e.g. "shoujo anime splash art viewed from above of a (seated woman), with (sparkling eyes, lipstick, huge boobs), wearing a (white sundress), (holding a teacup), (sitting) at a table in a rose garden, hedge mazes and butlers in the background"
You can get ideas for subjects and details you like by reading the prompts from other people's images that you like, until you get the hang of it.
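The ordering advice above can be sketched as a small helper that assembles a prompt in that order and parenthesizes the focal-subject terms (parentheses are read by the webui as a mild attention boost). The function and its parameter names are made up for illustration:

```python
def build_prompt(style, subject, subject_details, setting, background):
    """Assemble a prompt in the recommended order:
    style -> focal subject -> subject details -> setting -> background.
    Terms describing the focal subject are wrapped in parentheses,
    which the webui treats as a mild attention boost.
    """
    emphasized = [f"({term})" for term in [subject, *subject_details]]
    parts = [style, *emphasized, setting, background]
    return ", ".join(p for p in parts if p)

print(build_prompt(
    "shoujo anime splash art",
    "seated woman",
    ["sparkling eyes", "white sundress", "holding a teacup"],
    "at a table in a rose garden",
    "hedge mazes in the background",
))
```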
If you are just getting started, the best thing to do is to browse the pictures generated by others here, and if you like one, check out the prompt they used by clicking on the circled "i" in the lower right corner.
Which version is the best? It's hard to decide!
You can use the latest one.
@tkachevdanila6899 TY
This is the best checkpoint so far for me.
Does this model have a baked-in VAE?
Hi, love your checkpoint, but I'm kind of confused about the difference between MeinaMix, Hentai, and Alter.
Can MeinaMix do everything your other checkpoints can, like a merge, or not?
Great anime model!
If the VAE is baked in, should I choose "None" or "Automatic"? Thanks
If you know what a baked VAE means, it means it's already included; this was already answered way back: use Automatic.
"None" parameter will use baked-in VAE if there it is.
"Automatic" parameter will use VAE with the same name as model's name.
Even if VAE is baked in model, you can use any VAE you want.
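The selection rules just described can be summarized as a small sketch of the webui's VAE resolution order; this is a simplification for illustration, not the webui's actual code:

```python
def resolve_vae(setting, model_name, available_vaes):
    """Simplified sketch of how the webui picks a VAE.

    setting: "None", "Automatic", or an explicit VAE filename.
    Returns the external VAE file to load, or None to fall back to
    the VAE baked into the checkpoint. Illustration only, not the
    webui's actual resolution logic.
    """
    if setting == "None":
        return None                      # use the baked-in VAE
    if setting == "Automatic":
        # Look for a VAE file sharing the model's name, e.g. model.vae.pt
        for ext in (".vae.pt", ".vae.safetensors"):
            candidate = model_name + ext
            if candidate in available_vaes:
                return candidate
        return None                      # nothing matched: baked-in VAE
    return setting                       # explicit choice always wins
```

For a model with a baked-in VAE and no matching external file, "None" and "Automatic" therefore behave the same, which matches the reports above.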
@Noxy_ Thanks
@TroubleDarkness Thanks
Thanks for your fantastic work! I use a lot this model for different kind of characters and its flexibility is outstanding.
For several hours now, LoRAs have been ignored when used on Civitai.
me too
too >:(
Aaaand yep, the LoRA is being ignored by the model
This model has always ignored LoRAs
Seems to be working correctly now.
I experimented with a lot of things.
Apparently some LoRAs still do not work well.
How do you generate perfect fingers like in your pic?
Seconding this; at least give tips or things to try rather than leaving us hanging.
@Noxy_ I'm late to this, but have you guys tried using Adetailer to fix hands?
@Maxx_ It doesn't fix most of the results and sometimes it doesn't work at ALL, so it's a random dice roll now
Although, using that doesn't work for me anymore
@Noxy_ It is always a roll of the dice, but typically, (bad_hand) (deformed_hand) (ugly_hand) can help, sometimes just one, sometimes all three, sometimes with one or more of these getting 1.2-1.5 weight, it varies by checkpoint and seed.
Hello, how can I contact you to discuss a model collaboration?
I'm addicted to this checkpoint. I love all the results it gives me. Also didnt experience any issues with any LoRAs yet, keep up the amazing work!
Hello, sorry for the inconvenience; I am new to this and would like to know the difference between Meina V11 and V11-CKPT.
Is it possible to get the same quality on mobile civitai?
Yes, it doesn't matter; even better if you use XL models. Check my profile.
I used it to produce MISAKA; it was quite stable and the faces didn't break! Thank you.
Uwu
Uwu
Definitely lightweight! Good job.

