
    Flat Color - Style

    Trained on images without visible lineart, flat colors, and little to no indication of depth.

    ℹ️ LoRAs work best when applied to the base models they were trained on. Please read the About This Version notes on the relevant base model for workflow and training information.

    This is a small style LoRA I thought would be interesting to try with a v-pred model (noobai v-pred), for the reduced color bleeding and strong blacks in particular.

    The effect is quite nice and easy to evaluate during training, so in the following versions I extended the dataset with videos for text-to-video models like Wan and Hunyuan; it is now what I generally use to test LoRA training on new models.

    Recommended tags:

    flat color, no lineart, blending, negative space, {{color}} background

    Description

    Trained with https://github.com/tdrussell/diffusion-pipe

    Training data consists of:

    • 42 images as a combination of

      • Images used in other versions of this model card

      • Images extracted as keyframes from several videos

    • 19 short video clips of ~40 frames each
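    The keyframe-extraction step above can be sketched as follows. This is an illustration, not the author's actual tooling: the helper name and the even spacing are assumptions, and in practice a tool like ffmpeg or OpenCV would read out the frames at the chosen indices.

    ```python
    def keyframe_indices(total_frames: int, n_keyframes: int) -> list[int]:
        """Return n_keyframes frame indices spread evenly across a clip."""
        if n_keyframes <= 1:
            return [0]
        step = (total_frames - 1) / (n_keyframes - 1)
        return [int(i * step) for i in range(n_keyframes)]

    # For a ~40-frame clip, pick 4 evenly spaced keyframes.
    print(keyframe_indices(40, 4))  # [0, 13, 26, 39]
    ```

    Each returned index would then be decoded and saved as a still image alongside its caption file.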

    Training configs:

    dataset.toml

    # Aspect ratio bucketing settings
    enable_ar_bucket = true
    min_ar = 0.5
    max_ar = 2.0
    num_ar_buckets = 7
    
    [[directory]] # IMAGES
    # Path to the directory containing images and their corresponding caption files.
    path = '/mnt/d/huanvideo/training_data/images'
    num_repeats = 5
    resolutions = [1024]
    frame_buckets = [1] # Use 1 frame for images.
    
    
    [[directory]] # VIDEOS
    # Path to the directory containing videos and their corresponding caption files.
    path = '/mnt/d/huanvideo/training_data/videos'
    num_repeats = 5
    resolutions = [256] # Set video resolution to 256 (roughly 240p).
    frame_buckets = [33, 49, 81] # Define frame buckets for videos.
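    To illustrate what `enable_ar_bucket`, `min_ar`, `max_ar`, and `num_ar_buckets` do, here is a minimal sketch of aspect-ratio bucketing. Note this is an assumption about the scheme (log-spaced buckets, nearest-bucket assignment); check the diffusion-pipe source for its exact implementation.

    ```python
    import math

    def make_ar_buckets(min_ar: float, max_ar: float, n: int) -> list[float]:
        """n aspect-ratio buckets, log-spaced between min_ar and max_ar
        (an assumed scheme; diffusion-pipe's may differ)."""
        ratio = (max_ar / min_ar) ** (1 / (n - 1))
        return [min_ar * ratio**i for i in range(n)]

    def assign_bucket(width: int, height: int, buckets: list[float]) -> float:
        """Pick the bucket whose aspect ratio is closest in log space."""
        ar = width / height
        return min(buckets, key=lambda b: abs(math.log(b / ar)))

    buckets = make_ar_buckets(0.5, 2.0, 7)  # matches the settings above
    print(assign_bucket(1024, 1024, buckets))  # a square image lands in the middle (~1.0) bucket
    ```

    Grouping samples by bucket lets each batch share one resolution without distorting images toward a single aspect ratio.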

    config.toml

    # Output and dataset paths.
    output_dir = '/mnt/d/huanvideo/training_output'
    dataset = 'dataset.toml'
    
    # Training settings
    epochs = 50
    micro_batch_size_per_gpu = 1
    pipeline_stages = 1
    gradient_accumulation_steps = 4
    gradient_clipping = 1.0
    warmup_steps = 100
    
    # eval settings
    eval_every_n_epochs = 5
    eval_before_first_step = true
    eval_micro_batch_size_per_gpu = 1
    eval_gradient_accumulation_steps = 1
    
    # misc settings
    save_every_n_epochs = 15
    checkpoint_every_n_minutes = 30
    activation_checkpointing = true
    partition_method = 'parameters'
    save_dtype = 'bfloat16'
    caching_batch_size = 1
    steps_per_print = 1
    video_clip_mode = 'single_middle'
    
    [model]
    type = 'hunyuan-video'
    
    transformer_path = '/mnt/d/huanvideo/models/diffusion_models/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors'
    vae_path = '/mnt/d/huanvideo/models/vae/hunyuan_video_vae_bf16.safetensors'
    llm_path = '/mnt/d/huanvideo/models/llm'
    clip_path = '/mnt/d/huanvideo/models/clip'
    
    dtype = 'bfloat16'
    transformer_dtype = 'float8'
    timestep_sample_method = 'logit_normal'
    
    [adapter]
    type = 'lora'
    rank = 32
    dtype = 'bfloat16'
    
    [optimizer]
    type = 'adamw_optimi'
    lr = 5e-5
    betas = [0.9, 0.99]
    weight_decay = 0.02
    eps = 1e-8
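    As a rough sanity check on these settings, the effective batch size and steps per epoch can be estimated from the numbers above. This is only an approximation: the real step count depends on how aspect-ratio and frame bucketing group the samples.

    ```python
    import math

    # Dataset sizes from the description above.
    images, videos = 42, 19
    num_repeats = 5  # from dataset.toml

    # From config.toml: micro_batch_size_per_gpu = 1, gradient_accumulation_steps = 4.
    effective_batch = 1 * 4

    samples_per_epoch = (images + videos) * num_repeats
    steps_per_epoch = math.ceil(samples_per_epoch / effective_batch)
    print(samples_per_epoch, steps_per_epoch)  # 305 77
    ```

    At 50 epochs that is on the order of a few thousand optimizer steps, which puts the 100 warmup steps at roughly the first epoch and a half.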


    Comments (18)

    c90park · Jan 22, 2025 · 9 reactions

    Tip: using a lower CFG scale (like 3) gives better results.

    motimalu (Author) · Jan 22, 2025

    Thanks! For which version - illustrious, noobai v-pred, or hunyuan?

    c90park · Jan 22, 2025

    @motimalu illustrious. I haven't used hunyuan yet.

    motimalu (Author) · Jan 23, 2025

    @c90park I see - yes I did have a rather high CFG on the illustrious previews

    5323480 · Jan 22, 2025 · 11 reactions

    This style LoRA is great. Could you tell me how you find the right sample if the TensorBoard loss goes down and up constantly as training progresses? How do you know which one is right, for example what the loss or optimal value should be? I don't understand... 🙏

    motimalu (Author) · Jan 23, 2025

    Thank you! I'm not sure what you mean by the right sample?

    For Hunyuan v2 I released the final epoch (the 50th), as it did not appear overtrained to me. To analyse your TensorBoard logs, you could refer to this article:

    https://civitai.com/articles/83/using-tensorboard-to-analyze-training-data-and-create-better-models

    LR/optimizer config values I referenced from this article:

    https://civitai.com/articles/9798/training-a-lora-for-hunyuan-video-on-windows

    5323480 · Jan 23, 2025 · 2 reactions

    Thanks, your LoRAs are amazing. I hope I can make LoRAs as good as yours! ❤️

    kira7x · Jan 23, 2025 · 14 reactions

    damn bro this looks gorgeous

    Docuei · Jan 27, 2025 · 14 reactions

    It's like the esurance commercial but up to date and more high definition, love it.

    MNeco · Jan 28, 2025 · 15 reactions

    😻💕

    redtvpe · Feb 9, 2025 · 14 reactions

    As a lover of all things flat color, great job

    Rating_Agent · Feb 13, 2025 · 9 reactions

    Please make a style LoRA of Theobrobine's animations. His 2s animations are very nice, fluid, and high quality.

    laakshi · Feb 15, 2025 · 4 reactions

    Hi, it's really amazing, but I don't know how to use this on my PC. Can anyone tell me how to use this model on my PC?

    motimalu (Author) · Feb 16, 2025 · 2 reactions

    Hello, I assume you are asking about the Hunyuan LoRA. The ComfyUI workflow used in the previews is described here: https://civitai.com/models/1092466?modelVersionId=1315010

    Fio100 · Feb 16, 2025

    Please do the needful saar

    5377697 · Feb 20, 2025 · 8 reactions

    I love flat colours

    motimalu (Author) · Feb 20, 2025 · 1 reaction

    Thanks!

    Bocily_PeTpoBychi · Feb 24, 2025 · 6 reactions

    good job