Flat Color - Style
Trained on images with flat colors, no visible lineart, and little to no indication of depth.
ℹ️ LoRAs work best when applied to the base model they were trained on. Please read the About This Version section on the appropriate base model for workflow/training information.
This is a small style LoRA that I thought would be interesting to try on a v-pred model (NoobAI v-pred), in particular for the reduced color bleeding and strong blacks.
The effect is quite nice and easy to evaluate during training, so in the following versions I've extended the dataset with videos for text-to-video models like Wan and Hunyuan, and it is now what I generally use to test LoRA training on new models.
Recommended tags:
flat color, no lineart, blending, negative space, {{color}} background
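For example, a full prompt might read: 1girl, solo, flat color, no lineart, blending, negative space, white background (the subject tags here are just illustrative placeholders).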
Trained on Qwen Image
Using the default diffusion-pipe qwen config
Dataset resolution of 640
Previews generated with the lightx2v Lightning 4-step LoRA.
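For anyone reproducing this, a minimal diffusion-pipe dataset config matching the resolution above might look like the sketch below. This is not the exact file used for this LoRA; the path is a placeholder, and the repo's example configs are the authoritative reference for the keys.

```toml
# dataset.toml - minimal sketch, not the exact file used for this LoRA
resolutions = [640]        # the 640 dataset resolution noted above
enable_ar_bucket = true    # bucket images by aspect ratio

[[directory]]
path = '/path/to/flat_color_dataset'  # images with matching .txt caption files
num_repeats = 1
```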
Comments (13)
What's your opinion on Qwen as someone who's making models for it? Do you think it has potential to rival Illustrious/NoobAI?
Qwen has a great license and trains easily, very happy with my initial results testing it.
Not sure if someone would do an Illustrious/NoobAI level of illustration-focused training for it - but I am here for it. ^^
Bro always delivers an update, thanks a lot!
Trying to make a Qwen style LoRA with this level of quality. Did you train using quantization or the full (very large) model?
Hello, yes, for Qwen I used the diffusion-pipe config that uses a bfloat16 dtype and a float8 transformer dtype for training within 24 GB of VRAM:
https://github.com/tdrussell/diffusion-pipe/blob/24c95b7e36cb1be36f810a2647f15b2304696ac1/examples/qwen_image_24gb_vram.toml#L36-L37
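The relevant part of that example config is the model section, roughly as sketched below (see the linked file for the exact values):

```toml
[model]
type = 'qwen_image'
# Load weights in bfloat16, but keep the transformer in float8
# so training fits in 24 GB of VRAM.
dtype = 'bfloat16'
transformer_dtype = 'float8'
```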
@motimalu Did it on just 24 GB? I keep getting out of memory on my 4090
@Latterday Yes, trained on a machine with a 4090 and 128 GB of system RAM here.
Increasing the offloading setting "blocks_to_swap" to 16 might help reduce VRAM usage. A large amount of system RAM (~64 GB) is also required when increasing "blocks_to_swap".
(I'm not the maintainer of that repository, but the default qwen config should work, so consider opening an issue there if you're still having trouble.)
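For reference, "blocks_to_swap" is set in the [model] section of the training config; a minimal sketch of the suggestion above:

```toml
[model]
# ... existing model settings ...
# Swap 16 transformer blocks out to system RAM during training;
# lowers VRAM usage at the cost of speed and extra system RAM (~64 GB+).
blocks_to_swap = 16
```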
Any chance of Wan2.2 versions?
Oooooopps🙂
Unfortunately, I can't make it work with Wan2.2 5B TI2V. I tried both T2V and I2V. Any ideas?
I have to say I'm amazed by the quality of the work you put into all your LoRAs. Are you using Qwen 2509 now, and can we expect those updates? Thank you
Found a real treasure here!