Introduction
Anima Cat Tower is a fine-tuned version of Anima, trained with the goal of enhancing anime style.
Please refer to the official Anima Huggingface repo for details about Anima.
How to Get Started with the Model
You need to use a webui that supports Anima:
ComfyUI
Forge Neo
Positive prompt
masterpiece, best quality, highres, absurdres
Negative Prompt
worst quality, low quality, blurry, jpeg artifacts, lowres
License
Description
Anima-Preview2 based
I’ve changed the scheduler used for training. This may result in more variation in compositions.
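As a quick sketch of how the recommended positive and negative prompts above might be wired into a script (the helper name and subject tags below are illustrative, not part of this model's docs):

```python
# Recommended tags from this model card.
QUALITY = "masterpiece, best quality, highres, absurdres"
NEGATIVE = "worst quality, low quality, blurry, jpeg artifacts, lowres"

def build_prompts(subject: str) -> tuple[str, str]:
    """Return (positive, negative) prompt strings for the sampler,
    prepending the recommended quality tags to the subject tags."""
    return f"{QUALITY}, {subject}", NEGATIVE

pos, neg = build_prompts("1girl, cat ears, night sky")
# pos -> "masterpiece, best quality, highres, absurdres, 1girl, cat ears, night sky"
```

The same two strings go into the positive and negative prompt fields of ComfyUI or Forge Neo.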
FAQ
Comments (9)
From v0.3 onwards, which is based on Anima2, all my previous Anima1-based LoRAs look kinda off; even generating with the base Anima2 model and an Anima1-based LoRA has the same problem. Looks like the two versions aren't really interchangeable?
@soralz The LoRAs I created were somewhat compatible, but I've also heard reports that compatibility has been lost. I think it would be safer to recreate them using preview2.
The official documentation of the base model states that LoRAs and finetunes created for the preview models aren't compatible with each other, and will also not work with the main model once the non-preview version releases.
It seems to handle lower step counts surprisingly well. I'm using 0.4 with only 10 steps, without any kind of distillation, and I'm getting very solid results.
Is this compatible with any Distilled/ low step LORAs?
Do you have a basic ComfyUI workflow I can refer to? It works fine with Anima ver2, but it throws an error when I use your version of the model.
I had a total brain fart—I put the NoobAI version in by mistake. The problem is solved now!
Unbelievable — I can fully reuse all my training materials from NOOB AI. The required training steps and time are cut down to just 2/3, prompt contamination is almost non-existent, and I can still use the KOHYA_SS GUI for LoRA training. Prompts from the old model work perfectly too. There's virtually zero learning curve.
That said, since it's still a preview model, the variety in lighting, poses, scenes, and such seems a bit limited for now. But the future looks very promising.
But I have one question: Is this model a native v-pred model? Do I need any additional v-pred training parameters?
@NTR_BLACK Anima uses a DiT (Diffusion Transformer) architecture and is trained with Rectified Flow, not v-pred. You don't need to add v-pred training parameters, but there are parameters for Rectified Flow (e.g. the "Timestep sampling" options, same as in FLUX training).
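For readers unfamiliar with the difference: a generic sketch of the Rectified Flow training target and a FLUX-style "timestep sampling" option is below. This is not Anima's actual training code, just the standard formulation (linear data-to-noise path, velocity target), assuming numpy:

```python
import numpy as np

def rf_training_example(x0, t, rng):
    """One Rectified Flow training pair: noisy sample and velocity target.

    x_t moves linearly from data (t=0) to pure noise (t=1), and the model
    is trained to predict the constant velocity eps - x0 along that path,
    rather than a v-pred/epsilon target as in DDPM-style diffusion."""
    eps = rng.standard_normal(x0.shape)
    x_t = (1.0 - t) * x0 + t * eps   # linear interpolation between data and noise
    target = eps - x0                # velocity the model should predict
    return x_t, target

def sample_t(mode, rng):
    """FLUX-style 'timestep sampling' choices change how t is drawn."""
    if mode == "uniform":
        return rng.uniform(0.0, 1.0)
    if mode == "sigmoid":
        # Logit-normal sampling: concentrates t in the middle of the path.
        return 1.0 / (1.0 + np.exp(-rng.standard_normal()))
    raise ValueError(f"unknown mode: {mode}")

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))          # stand-in for a latent image
t = sample_t("sigmoid", rng)
x_t, target = rf_training_example(x0, t, rng)
```

Note the identity x_t = x0 + t * target, which is why no v-pred-specific flags are needed: the objective is already a velocity, just along a straight path.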