RDBT [Anima]
Note: Experimental, but works.
Latest:
Anima is a wonderful model, but I don't like the Anima license.
I'm fine with dual licensing: a non-commercial license plus a commercial license. We all know that training a model costs lots of $.
And I'm a copyleft, non-commercial license believer.
What I don't like is that they keep the right to apply their commercial license to your "non-commercial Derivatives". Which means you don't have the right to make your "non-commercial Derivatives" actually non-commercial (copyleft). They keep the right to "sell" your Derivatives.
Now I'm waiting for Chroma2, which should be Apache 2.0 and is based on klein 4b. Much better than cosmos pt2.
https://huggingface.co/lodestones/Chroma2-Kaleidoscope
This model probably will not be updated. Have to save up the juice.
Finetuned + CFG distilled circlestone-labs/Anima.
The dataset contains natural language captions from Gemini, but still contains danbooru tags. Every image in the dataset was handpicked by me. It contains common enhancements such as clothes, hands, and backgrounds.
You must specify styles in your prompt. The dataset is not small and is very diverse, so it won't give you a stable default style. If you don't specify a style, the model will just give you a random/mixed one.
This is intentional. I use this model as a starting point to stack more style and character LoRAs on, while minimizing bias.
About CFG distilled model:
Use CFG scale = 1. Prefer the euler a sampler.
2x faster, because you don't need to run a separate forward pass for the negative prompt.
Bias will be amplified. Which means:
Default styles that do not need trigger words (that's bias) may become stronger. E.g. styles from style LoRAs.
Styles that need a trigger word (not bias) will be weaker. E.g. the base model's built-in styles.
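To see where the 2x comes from: standard classifier-free guidance blends a conditional and an unconditional prediction, so the model runs twice per sampling step. A distilled model is trained to emit the blended result directly, so it runs once with CFG scale = 1. A minimal numpy sketch of the standard blend (toy numbers, not real model outputs):

```python
import numpy as np

def cfg_combine(eps_cond, eps_uncond, scale):
    # Standard classifier-free guidance: blend the conditional and
    # unconditional predictions. Needs TWO model forward passes per step.
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy noise predictions for one denoising step.
eps_cond = np.array([1.0, 2.0])
eps_uncond = np.array([0.5, 1.0])

guided = cfg_combine(eps_cond, eps_uncond, scale=4.0)

# A CFG-distilled model predicts `guided` directly from the positive prompt,
# so each step costs ONE forward pass instead of two, and you sample with
# CFG scale = 1 (no extra blending, no negative prompt pass).
print(guided)
```

Note that at scale = 1 the formula reduces to the conditional prediction alone, which is why the distilled model should be run at exactly CFG scale = 1.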
Why LoRA?
I only have ~20k images. A LoRA is enough.
I can save VRAM when training, and you can save ~98% of storage and data usage when downloading.
Use strength 1. This LoRA is not overfitted.
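The ~98% download savings follows from the low-rank decomposition itself: a LoRA stores two thin factors per weight matrix instead of the full matrix. Rough arithmetic below; the hidden size and rank are illustrative assumptions, not the actual Anima/RDBT configuration:

```python
# Back-of-the-envelope math behind the "save ~98% storage" claim.
# `hidden` and `rank` are assumed values for illustration only.

hidden = 3072   # assumed transformer hidden size
rank = 32       # assumed LoRA rank

full_linear = hidden * hidden            # one full-finetuned square weight matrix
lora_linear = rank * (hidden + hidden)   # low-rank factors A (r x h) and B (h x r)

ratio = lora_linear / full_linear        # = 2 * rank / hidden
print(f"LoRA stores {ratio:.1%} of the full matrix")
```

With these assumed numbers the LoRA holds about 2% of the parameters of a full finetune of the same layers, i.e. roughly a 98% smaller download.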
Versions
f = finetuned
d = CFG distilled
Based on anima preview:
(2/12/2026) v0.6d: CFG distilled only, no finetuning. Cover images use Animeyume v0.1.
(2/3/2026) v0.2fd: finetuning + CFG distillation. Speedrun attempt, mainly for testing the training script. Limited training dataset: only covered "1 person" images plus a little bit of "furry". But it works, and way better than I expected.