Kirazuri (Anima)
Version 2 (Latest)
A full finetune of the Anima preview3 base, trained predominantly on high-resolution 1536x1536 AR buckets.
Expands the dataset with more recent data and includes the full dataset used for my previous model, Kirazuri Lazuli (Noobai V-Pred).
Total training dataset of 35,537 manually curated, non-synthetic images with quality and aesthetic ratings; the dataset cutoff is now 2026/04/15.
Training Details
Main training with diffusion-pipe commit: d5b78a2c49a07db8f7d9a4c795e4cfe7ba1c3dfe
Final stage for high-res used fix in commit: b0aa4f1e03169f3280c8518d37570a448420f8be
Samples seen (unbatched steps): ~680,000
Training time: ~220 hrs
Learning Rate: 4e-6 (General Training) and 2e-6 (Aesthetic)
LLM Adaptor Learning Rate: 8e-7 (General Training) and 2e-7 (Aesthetic)
Per-resolution Effective Batch size: 128 (512p), 96 (1024p), and 48 (1536p)
Precision: Mixed BF16
Optimizer: AdamW8bit with Kahan Summation
Weight Decay: 0.01
Timestep Sampling Strategy: Logit-Normal (General Training)
Tag Dropout: 30% with protected first 8 tags
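For context, here is a minimal Python sketch of what tag dropout with a protected prefix does; the function and its details are illustrative assumptions, not diffusion-pipe's actual implementation:

```python
import random

def drop_tags(tags, dropout=0.30, protected=8, rng=random):
    # Illustrative only: keep the first `protected` tags unconditionally,
    # then drop each remaining tag independently with probability `dropout`.
    head = tags[:protected]
    tail = [t for t in tags[protected:] if rng.random() >= dropout]
    return head + tail

# Quality/character/series/artist tags stay fixed at the front; general
# tags past position 8 are randomly thinned for each training caption.
tags = ["masterpiece", "best quality", "absurdres", "safe",
        "hatsune miku", "vocaloid", "1girl", "solo",
        "long hair", "twintails", "smile", "outdoors"]
print(", ".join(drop_tags(tags)))
```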
Additional Features used:
Dataset structured by resolution and manual ratings for staged training
multiscale_loss_weight=0.5 and flux_shift=true for high-resolution training
Mixed natural language captions with diffusion-pipe's captions.json format (see the sketch below):
"image_1.jpg": [ "{tags}", "{first_n_tags}.\n{nl_caption}", "{dropout_tags1}.\n{nl_caption}", "{nl_caption}\n{dropout_tags2}" ]
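A minimal Python sketch of assembling one captions.json entry in this format; the tags and caption below are placeholder values, not actual dataset content:

```python
import json

# Placeholder strings standing in for the template fields above.
tags = "1girl, solo, long hair, smile, outdoors"
first_n_tags = "1girl, solo, long hair"
nl_caption = "A girl with long hair smiling outdoors."
dropout_tags1 = "1girl, long hair, outdoors"   # one random tag subset
dropout_tags2 = "solo, smile, outdoors"        # another random subset

captions = {
    "image_1.jpg": [
        tags,                              # tags only
        f"{first_n_tags}.\n{nl_caption}",  # leading tags + NL caption
        f"{dropout_tags1}.\n{nl_caption}", # dropped tags + NL caption
        f"{nl_caption}\n{dropout_tags2}",  # NL caption + trailing tags
    ],
}

with open("captions.json", "w") as f:
    json.dump(captions, f, indent=2)
```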
Installing and running
Workflow:

Refer to the Anima preview base instructions. The model is natively supported in ComfyUI. The image above contains an embedded workflow; you can open it in ComfyUI or drag-and-drop it to load the workflow.
Note: Most preview images on the model card additionally use the custom comfyui-prompt-control node for prompt scheduling syntax to mix concepts, e.g. [word1|word2].
This custom node is entirely optional, but it is required to exactly recreate the preview outputs in ComfyUI.
The model files go in their respective folders inside your model directory:
anima-kirazuri-v2.safetensors (this model) goes in ComfyUI/models/diffusion_models
qwen_3_06b_base.safetensors goes in ComfyUI/models/text_encoders
qwen_image_vae.safetensors goes in ComfyUI/models/vae (this is the Qwen-Image VAE, you might already have it)
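To double-check the placement, a small sketch like this can verify that the three files are where ComfyUI expects them; the ComfyUI root path is an assumption to adjust for your install:

```python
from pathlib import Path

COMFYUI = Path("~/ComfyUI").expanduser()  # assumed install location

# File-to-subfolder mapping taken from the instructions above.
expected = {
    "anima-kirazuri-v2.safetensors": "models/diffusion_models",
    "qwen_3_06b_base.safetensors": "models/text_encoders",
    "qwen_image_vae.safetensors": "models/vae",
}

for name, subdir in expected.items():
    path = COMFYUI / subdir / name
    print(("OK     " if path.is_file() else "MISSING"), path)
```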
Generation Settings
Trained at mixed resolutions for the majority of training, finishing with a dedicated high-resolution stage.
Previews are generated mostly at 1536x1024 or 1024x1536.
1024-class resolutions also work well, e.g. 1024x1024, 896x1152, 1152x896, etc.
30-50 steps, CFG 4-5.
The same samplers recommended for the base model work; I like to use the following (collected in the sketch after the list):
er_sde: the recommended default for 30-50 steps.
sa_solver_pece: can converge with good detail in 15-20 steps.
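Collecting the settings above into one reference sketch; the sampler names follow ComfyUI's conventions, and the specific values are arbitrary picks within the recommended ranges:

```python
# Reference presets only, not an API call; adjust within the ranges above.
PRESETS = {
    "default": {
        "width": 1024, "height": 1536,  # or 1536x1024, 1024x1024, ...
        "steps": 40,                    # 30-50 recommended
        "cfg": 4.5,                     # CFG 4-5
        "sampler": "er_sde",            # recommended default
    },
    "fast": {
        "width": 1024, "height": 1536,
        "steps": 18,                    # sa_solver_pece converges in 15-20
        "cfg": 4.0,
        "sampler": "sa_solver_pece",
    },
}
```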
Prompting
Like the base model, this model is trained on Danbooru-style tags, natural language captions, and combinations of tags and captions.
Tag order
[quality/meta/safety tags] [character] [series] [artist] [1girl/1boy/1other etc] [general tags]
Mostly the same order as the base model; the only difference is that the [1girl/1boy/1other etc] group's position is towards the end in this model's dataset.
[quality/meta/safety tags] [character] [series] [artist] tag groups are also not shuffled, so their order may have some influence on generations.
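A minimal sketch of assembling a prompt in this tag order; the group keys and example tags are placeholders for illustration:

```python
# Build a prompt in the documented group order; only the order itself
# comes from the model card, the tags here are examples.
groups = {
    "quality/meta/safety": ["masterpiece", "best quality", "absurdres", "safe"],
    "character": ["hatsune miku"],
    "series": ["vocaloid"],
    "artist": [],                 # optional
    "subject": ["1girl"],         # placed towards the end for this model
    "general": ["long hair", "twintails", "smile", "outdoors"],
}

order = ["quality/meta/safety", "character", "series", "artist",
         "subject", "general"]
prompt = ", ".join(tag for g in order for tag in groups[g])
print(prompt)
```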
Quality and Aesthetic tags
Human score based: masterpiece, best quality, very aesthetic, aesthetic
The very aesthetic and aesthetic tags are where this model diverges from the base, with the intent that they can be used to guide the model toward a different aesthetic, a kind of house-style bias.
Meta tags
absurdres, official art, etc
Styles
painterly, chiaroscuro, ligne claire, flat color, no lineart, blending, etc
traditional media, oil painting \(medium\), watercolor \(medium\), etc
Known Limitations & Issues:
Concept Bleeding
Some bleeding of character/outfit details and concepts is noticeable when using short prompts.
Longer tag strings and natural language prompts describing appearance should help somewhat with this.
The intent for future training is to find the right balance: converging faster on new data while preserving more of the existing knowledge.
Recognitions
Thanks to CircleStone Labs for the Anima Preview base model.
Thanks to tdrussell of CircleStone Labs for the diffusion-pipe trainer.
Thanks to bluvoll for support using their fork of diffusion-pipe.
Thanks to narugo1992 and the deepghs team for open-sourcing various training sets, image processing tools, and models.
License
This model is released under the same license as the base model.
See the base model for details of the CircleStone Labs Non-Commercial License.
Built on NVIDIA Cosmos
Comments
Impressive, Anima is getting a lot of support, that is awesome, and this dataset update is impressive. But would you ever consider making a NAI or Illustrious Epsilon model with this updated dataset? Unfortunately, XL models still have a lot of LoRA support atm, until Anima catches up that is.
I'm not sure. I have spent much time refining natural language captioning approaches and even longer captioning the dataset (a similar amount of time to the model training itself, using a large model with CoT enabled).
So it seems all the more a waste to train on SDXL now, which is limited in its natural language understanding.
@motimalu Understandable. The closest you can get for natural language training is Illustrious 1.1. Surprisingly, it can do natural language well, of course not at the level of Anima, but it works well. I tried one of your prompts in a model using Illustrious 1 as base (IllumiYume in this case) and it had a really similar result.
Either way, thanks for the response, your work on Anima is really impressive! Excited to see the next version, or your version once the final version of Anima comes out.
Consider using chenkin v0.5. It's the most up-to-date and most thoroughly trained SDXL anime model currently available, with a full booru dataset going into February or March '26.
I just wanted to test it because of Hiyuki... and I always get her from the side with the same look... looks like you used the same images over and over again, that's a bit sad.
Hello, thanks for the feedback
Would you mind sharing the prompt used that has the issue you described?
Anima really is the future
Pretty nice! Perhaps because your finetuning dataset wasn't large enough, some artists are somewhat weaker compared to preview 3. But I think it's enough for now.
Thanks, yes, it is a tiny dataset compared to the fully trained base model.
I hoped that including datasets curated last year would have a regularizing effect and prevent some forgetting, while also helping guide the aesthetic training with more rated data, so being only somewhat weaker is hopefully a plus compared to the first version.
I really wanted to use Anima for recent characters that were not trained natively, but it seems like I can now try this model, thanks :)
The characters might not be particularly strong. For instance, for Diana (Pragmata) the model only learned the hair color, while the clothes and eyes were all randomly generated.
Thanks, hope you like it!
@suede2031691 Diana (Pragmata) doesn't exist in the dataset; any character would need their name's tag frequency to be at least 10~20 to be represented.
I've added ss_tag_frequency to the model metadata; it should let you preview the counts for character tags to get an idea of what might work.
A1111-based frontends that support Anima, like Forge Neo, should list it in the checkpoint details tab (info icon) and have various plugins that can read those counts to help guide your prompts; not sure about ComfyUI.
Hi, this is really the best thing I've ever seen, better than WAI-anima. But I noticed some LoRA styles are different; I mean, they are changing and losing the style.
What updated? I see no change.
Dataset cutoff is 4/15, so wonderful.