Flat Color - Style
Trained on images with flat colors, no visible lineart, and little to no indication of depth.
ℹ️ LoRAs work best when applied to the base models they were trained on. Please read the About This Version section on the appropriate base model for workflow and training information.
This is a small style LoRA I thought would be interesting to try with a v-pred model (NoobAI v-pred), in particular for the reduced color bleeding and strong blacks.
The effect is quite nice and easy to evaluate during training, so in the following versions I extended the dataset with videos for text-to-video models like Wan and Hunyuan, and it is now what I generally use to test LoRA training on new models.
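For context, a v-pred model is trained to predict the "velocity" v = √ᾱ·ε − √(1−ᾱ)·x₀ rather than the noise ε, which is a large part of why it renders very dark and very saturated regions more faithfully. A minimal numeric sketch of that parameterization (scalar "pixels", illustrative only, not the actual training code):

```python
import math

def v_target(x0, eps, alpha_bar):
    """Velocity training target: v = sqrt(a)*eps - sqrt(1-a)*x0."""
    a = alpha_bar
    return math.sqrt(a) * eps - math.sqrt(1.0 - a) * x0

def x0_from_v(xt, v, alpha_bar):
    """Recover the clean sample from a predicted velocity."""
    a = alpha_bar
    return math.sqrt(a) * xt - math.sqrt(1.0 - a) * v

# Round trip on one scalar value:
x0, eps, a = 0.8, -0.3, 0.6
xt = math.sqrt(a) * x0 + math.sqrt(1.0 - a) * eps  # forward diffusion step
v = v_target(x0, eps, a)
assert abs(x0_from_v(xt, v, a) - x0) < 1e-9
```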
Recommended tags:
flat color, no lineart, blending, negative space, {{color}} background
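The {{color}} placeholder is meant to be replaced with a concrete background color. A trivial helper (hypothetical, just to illustrate filling the placeholder and keeping the tag order):

```python
RECOMMENDED_TAGS = "flat color, no lineart, blending, negative space, {{color}} background"

def build_prompt(subject: str, color: str) -> str:
    """Append the recommended style tags to the subject tags, filling in the background color."""
    style = RECOMMENDED_TAGS.replace("{{color}}", color)
    return f"{subject}, {style}"

print(build_prompt("1girl, solo", "white"))
# → 1girl, solo, flat color, no lineart, blending, negative space, white background
```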
Trained on Anima Preview 3 base
Updated dataset with some more recent works featuring flat colors and no lineart.
I think Anima has always been pretty good with flat color styles, but its outputs tend to be a bit saturated and high-contrast, so this LoRA, which I trained as a test, seems worth releasing to counteract that a little.
Training config:
# trained using diffusion-pipe commit b0aa4f1e03169f3280c8518d37570a448420f8be
output_dir = '/mnt/d/anima/training_output/flat_color-v3'
dataset = 'dataset-anima-flat.toml'
# training settings
epochs = 5
# Per-resolution batch sizes
micro_batch_size_per_gpu = [[512, 64], [768, 32], [1024, 32]]
pipeline_stages = 1
gradient_accumulation_steps = 1
gradient_clipping = 1
warmup_steps = 100
lr_scheduler = 'cosine'
# misc settings
save_every_n_epochs = 1
activation_checkpointing = true
#reentrant_activation_checkpointing = true
partition_method = 'parameters'
save_dtype = 'bfloat16'
caching_batch_size = 1
map_num_proc = 8
steps_per_print = 1
compile = true
[model]
type = 'anima'
transformer_path = '/mnt/c/workspace/models/diffusion_models/anima-preview3-base.safetensors'
vae_path = '/mnt/c/workspace/models/vae/qwen_image_vae.safetensors'
llm_path = '/mnt/c/workspace/models/text_encoders/qwen_3_06b_base.safetensors'
dtype = 'bfloat16'
#cache_text_embeddings = false
llm_adapter_lr = 0
#timestep_sample_method = 'uniform'
flux_shift = true
multiscale_loss_weight = 0.5
sigmoid_scale = 1.3
[adapter]
type = 'lora'
rank = 32
dtype = 'bfloat16'
[optimizer]
type = 'adamw_optimi'
lr = 4e-5
betas = [0.9, 0.99]
weight_decay = 0.01
eps = 1e-8
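The config uses lr_scheduler = 'cosine' with warmup_steps = 100, i.e. a linear ramp up to lr = 4e-5 followed by a cosine decay over the remaining steps. A rough sketch of that shape (the exact curve in diffusion-pipe may differ slightly):

```python
import math

def lr_at(step, total_steps, base_lr=4e-5, warmup_steps=100):
    """Linear warmup to base_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

assert lr_at(50, 1000) == 2e-5    # halfway through warmup
assert lr_at(100, 1000) == 4e-5   # peak at end of warmup
assert lr_at(1000, 1000) < 1e-9   # decayed to ~0 at the final step
```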