Flat Color - Style
Trained on images with flat colors, no visible lineart, and little to no indication of depth.
ℹ️ LoRAs work best when applied to the base model they were trained on. Please read the About This Version notes on the appropriate base model for workflow and training information.
This is a small style LoRA that I thought would be interesting to try with a v-pred model (NoobAI v-pred), in particular for its reduced color bleeding and strong blacks.
The effect is quite nice and easy to evaluate during training, so in later versions I extended the dataset with videos for text-to-video models like Wan and Hunyuan, and it is now what I generally use to test LoRA training on new models.
Recommended prompt structure:
Positive prompt:
flat color, no lineart, blending, negative space,
{{tags}}
masterpiece, best quality, very aesthetic, newest

Description
Trained on Z Image Base with diffusion-pipe
Same dataset/training settings as used for version 2.1 [z-image-turbo]
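For context, diffusion-pipe runs are launched through DeepSpeed; a sketch of the invocation for a single-GPU run (the config filename here is an assumption matching the files below, and paths depend on your checkout):

```shell
# from the diffusion-pipe repository root; adjust paths as needed
deepspeed --num_gpus=1 train.py --deepspeed --config config-zimage-base.toml
```

The dataset file is not passed on the command line; it is referenced from the `dataset` key inside the main config.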
Training Config:
```toml
# dataset-zimage.toml

# Resolution settings.
resolutions = [1024]

# Aspect ratio bucketing settings
enable_ar_bucket = true
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 7

[[directory]] # IMAGES
path = '/training_data/images'
num_repeats = 5
resolutions = [1024]
```
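The aspect-ratio bucketing settings above give each image one of 7 target shapes between 0.5 (tall) and 2.0 (wide). A minimal sketch of the idea, assuming log-spaced buckets (diffusion-pipe's exact bucketing logic may differ):

```python
import math

def ar_buckets(min_ar, max_ar, n):
    """n aspect-ratio buckets spaced evenly in log space between min_ar and max_ar."""
    lo, hi = math.log(min_ar), math.log(max_ar)
    return [math.exp(lo + (hi - lo) * i / (n - 1)) for i in range(n)]

def nearest_bucket(ar, buckets):
    """Snap an image's aspect ratio to the closest bucket (compared in log space)."""
    return min(buckets, key=lambda b: abs(math.log(ar) - math.log(b)))

# 7 ratios from 0.5 (tall) to 2.0 (wide), centered on 1.0 (square)
buckets = ar_buckets(0.5, 2.0, 7)
```

Each training image is then resized/cropped toward its nearest bucket so that batches contain consistently shaped tensors.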
```toml
# config-zimage-base.toml
output_dir = '/mnt/d/zimage/training_output'
dataset = 'dataset-zimage.toml'

# training settings
epochs = 50
micro_batch_size_per_gpu = 1
pipeline_stages = 1
gradient_accumulation_steps = 1
gradient_clipping = 1

# eval settings
eval_every_n_epochs = 1
#eval_every_n_steps = 100
eval_before_first_step = true
eval_micro_batch_size_per_gpu = 1
eval_gradient_accumulation_steps = 1

# misc settings
save_every_n_epochs = 5
checkpoint_every_n_minutes = 120
activation_checkpointing = true
partition_method = 'parameters'
save_dtype = 'bfloat16'
caching_batch_size = 8
steps_per_print = 1

[model]
type = 'z_image'
diffusion_model = '/diffusion_models/z_image_bf16.safetensors'
vae = '/models/vae/ae.safetensors'
text_encoders = [
    {path = '/models/text_encoders/qwen_3_4b.safetensors', type = 'lumina2'}
]
dtype = 'bfloat16'
#diffusion_model_dtype = 'float8'

[adapter]
type = 'lora'
rank = 32
dtype = 'bfloat16'

[optimizer]
type = 'AdamW8bitKahan'
lr = 2e-5
betas = [0.9, 0.99]
weight_decay = 0.01
```
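The `AdamW8bitKahan` optimizer name refers to Kahan-compensated summation, which recovers the low-order bits that are otherwise lost when many small updates are accumulated into low-precision state. A minimal sketch of the technique in simulated float32 (illustrative only, not diffusion-pipe's implementation):

```python
import struct

def f32(x):
    """Round a Python float to float32, simulating low-precision storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

def naive_accumulate(start, step, n):
    s = f32(start)
    for _ in range(n):
        s = f32(s + step)           # rounding error compounds on each add
    return s

def kahan_accumulate(start, step, n):
    s, comp = f32(start), 0.0
    for _ in range(n):
        y = f32(step - comp)        # apply the carried correction
        t = f32(s + y)
        comp = f32(f32(t - s) - y)  # low-order bits lost by the add, recovered
        s = t
    return s

step = f32(1e-4)
exact = 1.0 + 10000 * step          # reference sum in float64
```

With 10,000 tiny additions, the compensated version stays within a few ulps of the float64 reference while the naive version drifts; the same issue arises when small gradient updates are repeatedly added to 8-bit or bf16 optimizer state.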