CivArchive

    Flat Color - Style

    Trained on images with flat colors, no visible lineart, and little to no indication of depth.

    ℹ️ LoRAs work best when applied to the base model they were trained on. Please read the About This Version section of the appropriate base model for workflow and training information.

    This is a small style LoRA I thought would be interesting to try with a v-pred model (noobai v-pred), for the reduced color bleeding and strong blacks in particular.

    The effect is quite nice and easy to evaluate during training, so in subsequent versions I extended the dataset with videos for text-to-video models like Wan and Hunyuan; it is now what I generally use to test LoRA training on new models.

    Recommended tags:

    flat color, no lineart, blending, negative space, {{color}} background
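    The {{color}} slot in the tag list is meant to be filled per prompt. As an illustration, a minimal sketch of prepending these tags to a subject prompt (the helper name and subject string are hypothetical, not from the model page):

```python
# Recommended trigger tags from the model page; {color} is a fill-in slot.
RECOMMENDED_TAGS = "flat color, no lineart, blending, negative space, {color} background"

def build_prompt(subject: str, color: str = "white") -> str:
    """Prepend the style's recommended tags to a subject prompt."""
    return f"{RECOMMENDED_TAGS.format(color=color)}, {subject}"

print(build_prompt("1girl, sundress, seaside", color="blue"))
# → flat color, no lineart, blending, negative space, blue background, 1girl, sundress, seaside
```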

    Description

    Trained on Qwen Image

    Using the default diffusion-pipe qwen config

    Dataset resolution of 640

    Previews generated with the lightx2v Lightning 4-step LoRA:

    https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-4steps-V1.0.safetensors

    FAQ

    Comments (13)

    Unhing3d
    Aug 24, 2025 · 1 reaction

    What's your opinion on Qwen as someone who's making models for it? Do you think it has potential to rival Illustrious/NoobAI?

    motimalu
    Author
    Aug 25, 2025 · 2 reactions

    Qwen has a great license and trains easily; I'm very happy with my initial results testing it.

    Not sure if someone would do an Illustrious/NoobAI level of illustration focused training for it - but I am here for it. ^^

    KitagawaYoshino
    Aug 26, 2025 · 5 reactions

    Bro always delivers updates, thanks a lot!

    Latterday
    Aug 27, 2025 · 4 reactions

    Trying to make a Qwen style LoRA with this level of quality. Did you train using quantization or the full (very large) model?

    motimalu
    Author
    Aug 28, 2025 · 1 reaction

    Hello, yes, for Qwen I used the diffusion-pipe config that uses a bfloat16 dtype and a float8 transformer dtype for training with 24 GB of VRAM:
    https://github.com/tdrussell/diffusion-pipe/blob/24c95b7e36cb1be36f810a2647f15b2304696ac1/examples/qwen_image_24gb_vram.toml#L36-L37
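    For reference, the dtype lines in question look roughly like this (a sketch based on diffusion-pipe's example TOMLs; check the linked file for the exact keys and full config):

```toml
# Sketch of the relevant model section from the linked 24 GB VRAM example config.
[model]
type = 'qwen_image'
dtype = 'bfloat16'           # base training dtype
transformer_dtype = 'float8' # quantized transformer weights to fit in 24 GB
```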

    Latterday
    Sep 4, 2025

    @motimalu Did it on just 24 GB? I keep getting out-of-memory errors on my 4090.

    motimalu
    Author
    Sep 4, 2025 · 1 reaction

    @Latterday Yes, trained on a machine with a 4090 and 128 GB of system RAM here.
    Increasing the offloading setting "blocks_to_swap" to 16 might help reduce VRAM usage. A large amount of system RAM (~64 GB) is also required when increasing "blocks_to_swap".
    (I'm not the maintainer of that repository, but the default Qwen config should work, so consider opening an issue there if you're still having problems.)
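    The offloading change described above amounts to a one-line config edit (a sketch; the key name follows diffusion-pipe's example configs, and the value is the one suggested in this thread):

```toml
# Swap more transformer blocks to system RAM to lower VRAM usage.
# Higher values trade training speed for VRAM and need more system RAM (~64 GB suggested).
blocks_to_swap = 16
```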

    slln
    Aug 27, 2025 · 6 reactions

    Any chance of wan2.2 versions?

    alirezagame4855603
    Aug 28, 2025 · 5 reactions

    Oooooopps🙂

    jeanll
    Sep 22, 2025 · 1 reaction

    Unfortunately, I can't make it work with Wan 2.2 5B TI2V. I tried both T2V and I2V. Any ideas?

    GlowingGuardianGirl
    Nov 6, 2025 · 1 reaction

    I have to say I'm amazed by the quality of the work you put into all your LoRAs. Are you using Qwen 2509 now, and can we expect those updates? Thank you!

    7roses
    Nov 30, 2025 · 1 reaction

    What a find! [translated from Chinese]

    LORA
    Qwen

    Details

    Downloads
    3,287
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/24/2025
    Updated
    5/17/2026
    Deleted
    -

    Files

    qwen_flat_color_v2.safetensors