This LoRA was trained on 800k images of hyper-sized anime characters. It focuses mainly on breasts/ass/belly/thighs/fat. The dataset is a subset of the larger hyperfusion dataset, filtered down to body shape/size related images only.
Recommendations:
DoRA/LoRA strength: 1.0 (DoRAs work in most WebUIs by now)
Resolution: ~1024px
Samplers: any sampler Anima supports (should be most)
In v10+ you can push the LoRA weight higher than in v9, so try that if your concepts are not coming through as well as you'd like.
I've uploaded the 1.4 million custom tags used in hyperfusion here for integrating into your own datasets (eventually I'll update it one last time with my latest changes).
v11 Anima_preview Release 2026/05/16:
This LoRA was trained on the original preview version of Anima (not the newly released base_1.0); however, it seems to work with base_1.0 with some minor concept loss, so you can use whichever you want.
This model is technically not done training (5 epochs so far), but I want to release what I have as I switch over to training on Anima_base_1.0.
I've added some new concepts and improved a number of tags, and about 80% of the data is now captioned by Qwen3, so I train on both tags and captions. (See the optional data for the tag guide.)
It's going to take a few months of continued training on base_1.0, so I'll probably go silent again while it trains. I expect it won't get really good until epoch 10+, so I'm maybe halfway there (unless I decide to start from scratch on base_1.0; we'll see).
I train with the "@artist" format, so prepend @ to artist tags in v11, just like Anima does.
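If you build prompts programmatically, the @-prefix convention amounts to a trivial string transform. A minimal sketch (the tag names below are made-up placeholders, not from the actual tag list):

```python
def format_tags(tags, artist_tags):
    """Prepend '@' to any tag that is an artist tag, per the @artist convention."""
    return ", ".join(f"@{t}" if t in artist_tags else t for t in tags)

# Hypothetical example tags:
prompt = format_tags(
    ["1girl", "huge breasts", "some_artist"],
    artist_tags={"some_artist"},
)
print(prompt)  # → 1girl, huge breasts, @some_artist
```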
v10 Noob_vpred Release 2025/07/29:
Did you guys think I disappeared? Nope, just hopelessly training a model with a frozen text encoder for 7 months.
This new DoRA has the same concepts you are used to by now, plus a few new ones as usual, and 200k more images than v9.
This version is trained on NoobAI_Vpred, so there is no guarantee it will work with anything else, especially not non-v_pred models.
I wanted to try training with the text encoder frozen one last time, and I decided to stick with it no matter how long it took. Now I can definitively say I will be including the TE in future models, just for the sake of time: freezing works, but it's way too slow for my setup.
Use the tag list from v9 for now, until I get around to building the new one with the small number of new concepts.
This one should handle concepts a little better than v9_sdxl, and is less prone to exploding gradients as well.
v9 Pony Release:
This model had been training for over 2 months, but since Flux dropped, I decided to release what I have so far to free up a GPU. Technically it should have trained longer, but I'm impatient, and some of you are probably tired of waiting anyway.
The tags are mostly the same as in the last v8 release for SD1, with a few new additions like blob content. See the tag.csv in "Training Data" for more.
Pony is a little tricky to train on, so I experimented a lot with this model. Because of this, keep the DoRA strength near 1.0; anything above 1.1 tends to explode. (Weight regularization like scale_weight_norms is critical for training on Pony, FYI.)
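For reference, in kohya's sd-scripts that regularization is a single launch flag. A sketch of just the relevant arguments, not a full training command (the value shown is illustrative, not necessarily what was used here):

```shell
# Illustrative fragment of a kohya sd-scripts launch.
# --scale_weight_norms rescales any network weight whose norm exceeds the
# given value, which helps keep Pony training from exploding.
accelerate launch sdxl_train_network.py \
  --network_module=lycoris.kohya \
  --scale_weight_norms=1.0
```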
To keep training time reasonable I initially trained at 768x768 resolution and had planned to finish training at 1024px, but then Flux happened. The results still seem reasonable.
I put plans and progress here every now and then.
Changelog Article Link
Description
This was a test run on the original Anima_preview model, with a similar dataset and training params to my last release. It should still work with anima_base-1.0, but there will be some concept loss until I re-train this on anima_base-1.0.
The only major training differences from v10 were:
Used LoRA-LoCon instead of DoRA, since LyCORIS didn't support Anima just yet (it does now, though).
Fixed caption/tag training. The last model accidentally used captions 70% of the time instead of tags; it now uses tags 70% of the time and captions 30% of the time.
200k more images
Used AdamWScheduleFree instead of AdamW8bit so I didn't have to fiddle with a scheduler.
UNet LR 8e-4
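The 70/30 tag-vs-caption split above boils down to a per-sample coin flip when the conditioning text is loaded. A minimal sketch of the idea (the names here are mine, not from the actual training code):

```python
import random

TAG_PROB = 0.7  # use tags 70% of the time, captions the other 30%

def pick_text(tags: str, caption: str, rng: random.Random) -> str:
    """Choose the conditioning text for one training sample."""
    return tags if rng.random() < TAG_PROB else caption

# Sanity check that the split comes out roughly 70/30:
rng = random.Random(0)
picks = [pick_text("tags", "caption", rng) for _ in range(10_000)]
frac_tags = picks.count("tags") / len(picks)
```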



















