Huggingface repository goes here.
Interested in supporting me? Buy me a coffee.
A stylized anime model. This model was made to imitate pastel-like art while exploring the potential of merging LoRAs into a single model to create a cohesive mix.
Guide
For parameters, I recommend the following settings.
Sampler: DPM++ 2M Karras
Steps: 20
CFG Scale: 7
Hires. Fix: On
Upscaler: Latent (MUST!)
Hires Steps: 20
Denoising Strength: 0.6
I prefer 0.6 since it's the sweet spot for this model. If you can find a better setting, then good for you lol.
The Latent upscaler is the best setting for me since it retains or enhances the pastel style. Other upscalers like Lanczos or Anime6B tend to smooth it out, removing the pastel-like brushwork.
Please use the VAE that I uploaded in this repository. It is from the Waifu Diffusion team. Credits to haru for letting me rename and upload it.
Tip (Optional)
Putting mksks style at the beginning of the prompt can further influence the pastel-like style and improve the output. It's optional, though, so it's up to you.
Recipe for the mix can be found inside the HF repository.
Description
Extracted from pastel-mix using a method similar to add-difference.
FAQ
Comments (106)
How do I get rid of people in the image? The negative prompt doesn't work.
A hyperlink to the Hugging Face repository should be included in the description.
How do people get such vivid colors? When I try this model, I get beautiful images, but they are fairly dull, with mostly matte colors and greyscale. What am I missing? I already use prompts like colorful, vivid, etc.
It's a VAE issue. Use the WD VAE like described in the description. (The updated SD 1.5 VAE, and other anime fine-tuned VAEs may also give good results.)
It's because you're not using VAE. You can download it from the dropdown on the right side of the download button.
additionally try borrowing someone's image prompts
i can't seem to get the VAE to show up on my models list. i've downloaded it and put it in the correct folder, but it still doesn't show up. any help on this would be greatly appreciated!!
Do you have the VAE in the same folder as the model? It should be in the VAE folder, not in the stable-diffusion folder. You can select what VAE to use in settings/stable-diffusion (assuming you're using Automatic1111)
@theVoidWatches i figured it out!! just forgot to update lol. thank you
Hi, I am very interested in making merge block weights models such as abyssorangemix, pastelmix, and counterfeit v2.5, etc, is there a discord group to discuss stuff like that? thanks a lot!
A little late, but you can find people that will help at the Unstable Diffusion discord: https://discord.gg/unstablediffusion
Is it just me, or every time I confirm the Discord invitation I get a beep-boop screen and then instantly get banned? I've tried with so many accounts, all with instant-ban results.
Pro tip: avoid discord.
Amazing, it looks great.
It's really smooth and easy to use; too bad my practice images have failed to upload after many tries!! (*^_^*) Thank you so much, wishing you good fortune.
Where do I put the files? Is it models/VAE?
Can I make genshin characters?
Yes you can, you just need a lora for the specific character.
Can you train a model of superheroes in this art style?
I already installed the VAE into the right folder, and configured all the stuff as it is said up there... but my images are still blurry and not vivid at all.
You can try setting the VAE to always be the pastel one via the settings if you are using Automatic1111, since I've had issues with the automatic pickup
rename the VAE to the same name as the checkpoint (except for the extensions). Automatic detection usually uses this simple rule.
Wait, so we put it all in the stable-diffusion folder, right?
Or should the VAE go in the VAE folder? (sorry, I'm new to this) @ZeroEight
@s01106 You put the VAE in the same folder where you keep the checkpoints. If you want it to be picked up automatically, you need to rename it to the name of the checkpoint you are using. Otherwise, go to the settings tab and search for VAE; there is a dropdown where you specify the VAE to be used (the default is automatic).
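The rename rule described in the replies above can be sketched in a few lines. This is a rough sketch of the Automatic1111 convention (checkpoint basename plus a .vae.pt or .vae.safetensors suffix), not the webui's exact code, which may differ between versions:

```python
from pathlib import Path

def expected_vae_names(checkpoint: str) -> list[str]:
    """Return the VAE filenames that Automatic1111-style auto-detection
    would look for next to the given checkpoint file (a sketch of the
    naming convention, not the webui's actual lookup code)."""
    stem = Path(checkpoint).stem  # filename without its extension
    return [f"{stem}.vae.pt", f"{stem}.vae.safetensors"]

# e.g. for pastelmix.safetensors, auto-detection would expect
# pastelmix.vae.pt (or pastelmix.vae.safetensors) in the same folder
print(expected_vae_names("pastelmix.safetensors"))
```

If auto-detection still fails, selecting the VAE explicitly in Settings (as described above) always works.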
I found a solution to your problem. In Settings → Stable Diffusion, set the VAE not to Automatic but to the one you want. Rename the VAE and checkpoint beforehand so that they match.
Then load the checkpoint.
Bellissimo!
How do I configure the VAE weights for the model?
So, I got kinda lost. What should I do if I want it to draw touhou characters? Is the VAE not necessary anymore? Or is it that I need some LoRA for that? Please help.
If you want to draw a character from the Touhou series, you need to find a LoRA for that character first in order to draw them accurately.
I use Automatic1111 WEB SD 1.5
How do I install the VAE? I tried placing it in the stable-diffusion folder alongside the checkpoint, renaming it to match the checkpoint, placing it in the VAE folder, and checking Settings → Stable Diffusion. It doesn't show up, and my pictures still have faded colors.
Have you found any solution to this? I have the same problem
@Mercus I didn't; I downloaded another pastel model from Hugging Face that includes a VAE in itself, called Pastel Better Vae.
Not very poggers of you to be using leaked intellectual property (NovelAI) for commercial profit.
base: NovelAI-animefull-latest.ckpt [e6e8e1fc]
dpepmkmp.safetensors [718f25b4] - 96.13%
Tea.ckpt [bcf09d36] - 95.06%
pastelmix.safetensors [4048130a] - 93.59%
sd-v1-4-full-ema.ckpt [06c50424] - 84.17%
base: pastelmix.safetensors [4048130a]
NovelAI-animefull-latest.ckpt [e6e8e1fc] - 93.59%
sd-v1-4-full-ema.ckpt [06c50424] - 86.24%
https://huggingface.co/JosephusCheung/ASimilarityCalculatior
https://huggingface.co/andite/pastel-mix#recipe
kek
"Fantasy.ai is the official and exclusive hosted AI generation platform that holds a commercial use license for Pastel-Mix, you can use their service at https://fantasy.ai
Please report any unauthorized commercial use.
Huggingface repository goes here.
Interested in supporting me? Buy me a coffee."
With regards to coffee, Fantasy.ai now is the official and exclusive platform that should buy Pastel Mix coffee.
Is there a way how to use it in Google Colab?
Right-click the download button, copy the link, and paste it into the custom URL section; join here for more :)
https://discord.gg/touhouai
Brother, I have a question: where do I put the file?
How can I find it in stable diffusion?
webui/models/VAE
Can you create these models please?
https://www.pixiv.net/zh/users/1566167
And
https://www.pixiv.net/en/users/23034129
I hope you make it. I love your work so much
Which base model should I use with the LoRA version?
did you agree with this?
fantasy. ai shit
crappy model, author involved in shady business deals
well hate fantasy.ai as much as you want but wtf the model is not crappy at all
I don't understand how people are getting vivid colors. I'm using the diffusers library and I have no idea how to set up the VAE.
You download the VAE and put it inside models/VAE, similar to the stable-diffusion models. Then go to Settings, click "show all pages", Ctrl+F for "SD VAE", and select the VAE (make sure to click refresh next to it if you don't see it). Then MAKE SURE to go all the way back up and click "Apply settings", and you're good to go; the VAE should now be working and give better colors.
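For anyone on the diffusers library rather than the webui (as asked above), loading a separate VAE looks roughly like this. This is a hedged sketch: the file paths are placeholders for wherever you saved the model and the .vae.pt file, and it assumes a recent diffusers version that provides `AutoencoderKL.from_single_file` and scheduler overrides:

```python
def build_pipeline(model_path: str, vae_path: str):
    # imports kept inside the function so the sketch can be read without
    # diffusers/torch installed
    import torch
    from diffusers import (AutoencoderKL, DPMSolverMultistepScheduler,
                           StableDiffusionPipeline)

    # load the external VAE file (e.g. pastel-waifu-diffusion.vae.pt)
    vae = AutoencoderKL.from_single_file(vae_path, torch_dtype=torch.float16)

    # load the checkpoint and swap in the VAE
    pipe = StableDiffusionPipeline.from_single_file(
        model_path, vae=vae, torch_dtype=torch.float16)

    # DPM++ 2M Karras, matching the recommended settings above
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config, use_karras_sigmas=True)
    return pipe.to("cuda")

# usage (paths are placeholders for your local files):
# pipe = build_pipeline("pastelmix.safetensors", "pastel-waifu-diffusion.vae.pt")
# image = pipe("1girl, ...", num_inference_steps=20, guidance_scale=7).images[0]
```

Without the `vae=` override, the pipeline falls back to the VAE bundled in the checkpoint (or the SD default), which is what produces the washed-out colors people describe here.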
So I'm very new to using SD and I don't understand how people are getting these crazy gens; even when using the same settings, my gens don't look great.
The colours aren't vivid at all and the details are terrible; everything kinda looks like it has a weird grainy thing going on (I don't even know how to describe it).
Is it me? Does it just take more time to get a good gen? Is my machine not powerful enough?
Are there maybe any guides on how to get better gens? I have the VAE and I am still not getting good colours. Any help?
There are two stages involved in getting the sort of results you're seeing (and techniques in between). First, try simple prompts describing what you want to see in the scene using single words separated by commas (this is typical of most anime models; most don't want to be told a story in full sentences).
Next is resolution. Try 768px in either length or width (or both). Higher resolutions are better (sometimes; they can also fail, since Stable Diffusion 1.5 models were mostly trained on 512x512 images), but try to keep dimensions in multiples of 64 if you can. If you don't want to natively generate at a higher resolution, use the Hires. fix multiplier while starting from a base resolution of 512x512. Go for a multiplier of 1.5x or 2x (the more VRAM you have, the higher you can push this). You also need to hunt down a download of the upscaler that will be used (many use R-ESRGAN 4x+ Anime6B). There are many others, though.
Next, make sure you're using DPM++ 2M or DPM++ 2M Karras as the sampler; these seem to be good default choices for 2D work, but try others to see the stylistic change. Be aware that not all samplers generate at the same speed or do good work at low step counts. If you're getting lots of artifacting, try more steps. The two I mentioned begin to show great results after 20 steps and really solid results by 40.
Try with and without a VAE loaded. Some models have a VAE baked in, since most users get washed-out images when using a model that doesn't. The catch with models that do have one baked in is that, while they provide solid colors and line work, loading another VAE of your choice will usually ruin your output. So try each model with and without a VAE (models that don't say whether one is baked in will usually produce desaturated colors if they lack one, which means you should load yours).
There's a next step I won't go into, since all this is enough for you to take to Google/YouTube and learn more: inpainting. That's the final stage of seriously good-looking generations.
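The resolution advice above (base 512x512, a hires multiplier, dimensions in multiples of 64) can be sketched as a small helper. The rounding rule here is my own illustration of the "multiple of 64" guideline, not webui code:

```python
def hires_size(width: int, height: int, scale: float, step: int = 64):
    """Scale a base resolution and snap each side to the nearest
    multiple of `step` (illustrating the multiple-of-64 guideline)."""
    def snap(v: float) -> int:
        return max(step, int(round(v / step)) * step)
    return snap(width * scale), snap(height * scale)

print(hires_size(512, 512, 1.5))  # (768, 768)
print(hires_size(512, 768, 2.0))  # (1024, 1536)
```

The higher the multiplier, the more VRAM the hires pass needs, which is why 1.5x-2x from a 512x512 base is the usual range.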
I like it, but the hands... hmm, worse than other models.
fantasy.ai? that crappy site? seriously, Andite?
fantasyai????? seriously???
Really funny as a system named fantasy sys~
I'm thinking of making some income by doing additional detail work on pictures drawn with this model, drawing broadcast screens for YouTube or Twitch streamers. Can I use it that way?
By additional detail work, I mean painting on top of it myself.
Yes, you can; people have been doing this with all models since SD was released, and no one will know what model you're using.
So I’m new to this ‘AI art’ stuff and I don’t really know what I’m doing. I downloaded the file but how do I actually use it?
Hi! Here's a guide for you!
First of all, you need to have stable diffusion downloaded! I can't really help you from here with details, I am a noob too so I know it's difficult to follow but there are plenty of guides. Here are some things you should do:
1. Download Stable diffusion (here's a guide: https://rentry.org/voldy)
2. Once you've confirmed it runs, locate the folders "embeddings" and "models"; inside "models" you will find "Stable Diffusion" and "Lora".
Now, on civitai everything is marked. If something is marked as "Textual Inversion", it goes into the embeddings folder. If it is marked as a trained checkpoint or checkpoint merge, it goes into the model folder, and if it is marked as a LoRA, it goes into the Lora folder. Basically, models are what shape the images, while embeddings help with generation by setting parameters. Some really important ones, like easynegative and deepnegative, help generate very good images; I use both with nearly everything, especially anime. Lastly, LoRAs can help you render a specific thing, like a particular character, by using the triggers provided on the civitai page.
3. Now, with this new info, place the files in their respective folders. (I also recommend downloading Automatic1111 ui, as it is very helpful)
4. Search for prompts in the models you like, find some good ones, and generate!
Please do reply to me for more info as I know it is difficult to do, being a newbie myself. Hope I could help!
Oh yeah also forgot to mention VAEs! In the models folder you'll also find a VAE folder. Most models when you download them, have a VAE they come with. You need to download that and add it into the VAE folder. You can enable them through settings in the Automatic1111 ui. They're mostly used for finetuning but they're really important for most models. If a model doesn't contain a VAE its usually baked in, meaning its already implemented in the model without you having to download or run it!
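The folder rules in this guide can be summarized as a small lookup. The mapping follows the reply above for a standard Automatic1111 install; the folder names are the usual defaults and may differ in your setup:

```python
# maps a Civitai file type to its folder in a typical Automatic1111 install
WEBUI_FOLDERS = {
    "Checkpoint": "models/Stable-diffusion",
    "LORA": "models/Lora",
    "Textual Inversion": "embeddings",
    "VAE": "models/VAE",
}

def destination(file_type: str) -> str:
    """Return the webui subfolder for a Civitai file type."""
    try:
        return WEBUI_FOLDERS[file_type]
    except KeyError:
        raise ValueError(f"unknown file type: {file_type!r}")

print(destination("VAE"))  # models/VAE
```

Drop each downloaded file into the matching folder under your webui root and hit the refresh button next to the relevant dropdown (or restart the UI) for it to show up.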
It's hard to get good faces, even with face restoration, when the face isn't very close.
The following VAEs are the same file (same hash):
pastel-waifu-diffusion.vae.pt
kl-f8-anime2.ckpt
The full version seems to already have a VAE baked in (I simply compared the file sizes on Hugging Face).
I can't find pastel-waifu-diffusion.vae. Can you help me with this please?
@Noire90 Just replied to someone else, here you go samle/sd-webui-models/pastel-waifu-diffusion
@Dreamer333 Thank you very much🙏
@andite: Your Hugging Face user space vanished today... what happened? Will it come back?
bro's changing identity, might find him again if he makes something like pastelmixv4. lol
@EBIX No problem... too bad that some of the diffusions will be hard to find... hugging is nice but once it vanishes... we will wait and see :)
@malikxseto Well, SDXL will be here in 2 days as well; we made many friends along the way here.
Here's an archive of his models
https://huggingface.co/LMFResearchSociety/andite-finetunes-backup
Maybe everyone called him the "fake Anything creator" and then he shut down all his work. But I think Anything v4 is the best version, btw.
@infrezz721 He deleted his HF for another reason which I will keep private to a certain point.
Main reasons for deleting is: Burnout, Thought no one used his models anymore, Wanted to get rid of his AI stuff.
@infrezz721 (anything v4 was a troll model, it was just a merge dont tell anyone)
@AshtakaOOf Good luck and best wishes to Andite.. Burnout sucks... I can relate... it leads you to a point that you want to give up everything.....
@EBIX It was a merge between one of his own model called Cocoa, available in the repo I sent above and AbyssOrangeMix2.
Trolling with Anything is one thing. At some point, he betrayed many AI model creators, like me, for money. That's another story I don't want to go into detail on. As a consequence, his models got removed from many image-generation sites and he got outright banned from many Discords.
@Ikena Are you talking about fantasy ai because if so he cut bridges with them a while ago
@AshtakaOOf Don’t want to go into details. Good for him
I can't find pastel-waifu-diffusion.vae. Can you help me with this please?
@AshtakaOOf Thank you AshtakaOOf. Are you sure this is the same?
@Noire90 Yes andite never trained a new VAE just for the model, and there's a reason why it's called pastel-waifu-diffusion.
@Ikena Filipino troll moment: trolled all AI creators, merged LoRAs into a model, became the base model for literally all the anime models on 1.5, made a contract, became famous, got banned, disappeared... truly a legend. Yet I know precisely where he is.
@EBIX Must be fun to be an internet stalker.
@AshtakaOOf Not exactly stalking, but he is in the official Stable Diffusion server, just rarely appearing once in a while.
yeah why did he private that?
@DevRev Basically he put himself into drama in an AI discord server and has chosen to move on from AI.
@AshtakaOOf Damn, what kind of drama? I thought he was a pretty chill guy. I lost one of his mix models and couldn't find it anywhere.
@DevRev You can get access to a repo with most of his mixes and finetunes above and more context.
@AshtakaOOf I did check all of it still didn't found it
@DevRev What model are you missing? One of the desert mixes?
I'm not seeing the pastel waifu diffusion VAE. Is it MoistMix.vae.pt? The link to Hugging Face seems broken, and the link that works doesn't have a VAE.
Nearly a month later, here you go: samle/sd-webui-models/pastel-waifu-diffusion
@Dreamer333 thank you so much!
When using this with deforum, colors start off strong, then very quickly become washed out. Any idea why? Should I be using a special VAE for this?
Would really appreciate some help.
Very impressive checkpoint! Do you mind letting this be used with CivitAi's onsite generation? If not, that's okay.
which is the version for training loras and which is the version for just generating?
The link to the repository is no longer working
I have one persistent problem. Whenever I try to generate something, the result is always pale in color. It's strange, because the preview is bright and colorful, but at around 90% of generation it becomes pale. I've tried different things, such as changing samplers, steps, clip skip, size, and prompt, but nothing helped. I really like this model, but all my outputs are boring because of these colors. Can someone help me with this?
you're missing the VAE this model is tuned for, which is why the colors are muted. You used to be able to get the VAE here, but it doesn't seem to be on the page anymore and the other links are broken. Trawling other comments, I found this; https://huggingface.co/samle/sd-webui-models/blob/23997884993f591d36bed7c645892d3d0828017f/pastel-waifu-diffusion.vae.pt I haven't tested it, but it's likely the VAE you need. Put it in the same folder as you put your model, and unless you intend to use the VAE for another model as well, just give it the exact same name, and SD should pick up that they're associated. If you still get muted colors after that, there should be something you can find in the webui settings to force SD to use the specified VAE
@oove Thanks, it worked. I had this VAE before, but my SD just didn't use it, so your advice is just on point.
Hey, is it possible to send the 'dessert' models or upload them here? Thank you in advance!
The VAE is gone.
The generated images always mix the characters and backgrounds together, making the characters unclear. Has anyone else encountered this problem? Are there any solutions?
Despite being somewhat glitchy and unresponsive, with a very strong feminine focus and a lack of diversity, this old model still definitely has its charm thanks to its interesting original style.
Thanks for the work.
this model is a classic and still holds up really well
Details
Files
pastelMixStylizedAnime_pastelMixLoraVersion.safetensors
Mirrors
5414_pastelMixStylizedAnime_pastelMixLoraVersion.safetensors
pastelmix-lora.safetensors
pastelMixStylizedAnime_pastelMixLoraVersion.safetensors
51.safetensors
