v0.29: I changed a lot of things; v0.29 is very different from v0.27. In short:
Maximum details. This version can squeeze every last pixel out of the VAE: no more blurry or simplified output. The file sizes of output images are about 2x those of v0.27, even bigger than some images from qwen-image (meaning more info/details).
Increased diversity. Different seeds can now generate very different images.
See Update Log section for version info.
RDBT [Anima]
Finetuned, then distilled. It delivers faster speed and higher aesthetics with only 12 NFEs, 5x faster than the base model (60 NFEs).
Sharing merges using this model is not allowed.
Usage:
Settings:
Sampler: "euler_a" "euler" "er_sde". (from smooth to high variance)
Steps: 8, 12 or 24. Scheduler: simple. Important: training timestamps are fixed. Other inference timestamps might not work.
CFG scale: 1~2. Cover images are all without CFG (CFG 1). You can enable CFG (CFG >1) if you need higher prompt adherence (e.g. style is too weak).
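For reference, here is a minimal sketch of these settings in Python. It assumes a diffusers-compatible export of the checkpoint; the repo id is a placeholder, and since the names above are ComfyUI terms ("euler_a", scheduler "simple"), the diffusers scheduler mapping is only an approximation.

```python
# Hypothetical example: "your-namespace/rdbt-anima" is a placeholder, not a published repo.
import torch
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "your-namespace/rdbt-anima",     # placeholder path
    torch_dtype=torch.bfloat16,
).to("cuda")

# Approximate stand-in for ComfyUI's "euler_a" sampler (the smoothest of the three).
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="digital anime illustration, a girl in a raincoat walking down a neon-lit street at night",
    num_inference_steps=12,   # 8, 12, or 24
    guidance_scale=1.0,       # CFG 1 = no CFG; raise toward 2 for stronger prompt adherence
).images[0]
image.save("rdbt_sample.png")
```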
Prompt
Prefer natural language prompts. Prompt structure: style, subject, action, background (an example is sketched after the trigger words below).
Important: This model does not provide a default style. You should always prompt a style, or use a style LoRA. I don't like baking a strong style into the model; I prefer having choices. If you don't give the model a style condition, it will give you an "averaged AI style" from dmd2. This is a "feature", not a bug.
There are two "rough" trigger words:
"digital anime illustration": 2d anime.
"digital art", 2d art but not anime, mostly digital art. (not many samples)
Quality tags:
You can omit all quality tags: 1) the quality of the training data is higher than "masterpiece"; 2) quality tags have been reinforced during distillation. Thus they have no noticeable effect.
The same goes for negative tags. If you use CFG, there is no need to dump "score_1, blurry, worst quality, jpeg artifacts, extra arms,... x100 words" into your negative prompt. Those things have been distilled out.
Released models
RDBT LoRAs: Released as LoRAs, for better distribution efficiency.
Update Logs
(4/23/2026) v0.27: Improved stability, details.
(4/18/2026) v0.25: It's based on anima p3.
For previous testing versions, see RDBT LoRAs.
Description
Turbo (4-step dmd2)
Fast as f*** boiiiii
4-step dmd2, distilled on top of the RDBT finetuned model. See the link above.
Settings:
Sampler: "euler_a" or "euler".
CFG scale: 1.
Steps: 4, 8, or 16. Scheduler: simple. Important: training timesteps are fixed; other inference timesteps might not work.
FYI: N-step dmd2 means the model can output a noise-free image after N steps. It's not a mandatory fixed setting; it's the lower limit. Lower N = stronger distillation.
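Under the same assumptions as the settings sketch further up (a diffusers-compatible export, placeholder repo id), the Turbo settings would look like this:

```python
# Hypothetical example: "your-namespace/rdbt-anima-turbo" is a placeholder repo id.
import torch
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "your-namespace/rdbt-anima-turbo",   # placeholder path
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="digital anime illustration, a boy reading under a tree on a summer afternoon",
    num_inference_steps=4,   # the distillation lower limit; 8 or 16 also work
    guidance_scale=1.0,      # Turbo is meant to run without CFG
).images[0]
```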
This is a proof-of-concept version, to see what a 4-step dmd2 anime model looks like.
This is my first time doing 4-step dmd2, and also the first 4-step dmd2 anima model. I don't know what I'm doing or what to expect.
Huge stability improvement; it can even render long text in 4 steps.
If you want to compare, I've trained:
8-step dmd2 https://civitai.red/models/2364703?modelVersionId=2832699
16-step dmd2 https://civitai.red/models/2364703?modelVersionId=2860424