Version Differences:
v1.0: Deprecated.
v2.0: Trained on the SDXL training standard; recommended weight ±3, effective for most models.
Kohaku_delta: Special edition trained on the Kohaku Delta base, adjusted to a 5x ratio; recommended weight ±2 when used with Kohaku Delta.
Pony: Trained specifically for Pony and adjusted to a 5x ratio, because v2.0 is nearly ineffective on Pony; recommended weight ±2.
For normal models, please use the v2 LoRA: it is only 5 MB and works better than the old version. If your base model's name appears above, use the matching version.
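As a usage sketch, the recommended weights above would be applied in A1111-style prompt syntax roughly like this (the filename `colorfix_v2` is a placeholder for whatever the downloaded file is named):

```
<lora:colorfix_v2:3> 1girl, masterpiece, ...
<lora:colorfix_v2:-3> 1girl, masterpiece, ...   # ± weight shifts color/saturation in opposite directions
```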
I created this model because I am particularly fond of the coloring style of certain artists, while other aspects of their work would seriously pollute the dataset and are hard to recommend. To keep the base model's dataset unaffected, only the coloring aspect was extracted and trained separately as this adjustment.
This is a LoRA model that adjusts the color/saturation of anime-style SDXL 1.0 models, a byproduct of training the base model.
LoRA block weights have already been applied (layered); recommended for use with anime-style models.
License
This model is released under Fair-AI-Public-License-1.0-SD
Please check this website for more information:
Freedom of Development (freedevproject.org)
Comments
Could you please share how you trained this LoRA? Thanks!
Step 1: Prepare a dataset of images from skilled high-saturation-style artists, then process it in Photoshop to create a low-saturation copy. Use the same prompts for both datasets.
Step 2: Train on the low-saturation dataset until it overfits, then merge the overfit LoRA into the base model. (Use a simple, fast config, because the goal is to overfit.)
Step 3: Train on the high-saturation dataset using the merged model. (Normal LoRA training.)
This LoRA was trained on 500+ images manually screened from the base model's training dataset.
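The Photoshop step in Step 1 can also be scripted. Below is a minimal sketch of producing the low-saturation copy of a dataset with Pillow instead of PS; the function name, directory layout, and the 0.3 saturation factor are my own assumptions, not the author's exact process:

```python
from pathlib import Path

from PIL import Image, ImageEnhance


def desaturate_dataset(src_dir: str, dst_dir: str, factor: float = 0.3) -> None:
    """Save a low-saturation copy of every image in src_dir into dst_dir.

    factor < 1.0 lowers saturation (0.0 would be grayscale, 1.0 unchanged).
    """
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).iterdir():
        if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        img = Image.open(path).convert("RGB")
        # ImageEnhance.Color interpolates between the grayscale and original image.
        low = ImageEnhance.Color(img).enhance(factor)
        low.save(dst / path.name)
```

The caption `*.txt` files can then simply be copied alongside, so both datasets share the same prompts as described above.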
ref:
https://note.com/2vxpswa7/n/n2d04527bf0bc
https://pix.ink/article/k70jmzweune5m
https://twitter.com/kohya_tech/status/1657585139122335745
@kitarz Thanks! I will try later!
@kitarz Sorry to bother you again. About the same prompts between the two datasets: does it mean all 500 images share a single prompt, or does each of the 500 images have its own caption? If each of the 500 images has its own caption, overfitting really takes a long time. https://twitter.com/kohya_tech/status/1657585139122335745 here it seems every image uses the same prompt, 1girl; is that right?
@ymzlygw
1. Each image has its own prompts: use a [tagger] with a threshold below 0.2 to generate more prompts from the raw images, then copy the *.txt files over to the other dataset.
2. The reference is a training idea based on a 1.5 model; it needs changes when training an SDXL model.
3. Check your training parameters and disable everything that helps combat overfitting, such as [Scale weight norms] and [...dropout...].
4. Use a high lr, dim, and alpha for fast training, with the Adam optimizer. My config is lr 5e-4, dim=alpha=128, batch size 8, constant scheduler, Adam, 10 epochs, 5 repeats (the number in the folder name); a 4090 finishes training in 2 hours.
In Step 3 you can change only the image folder and keep the same config; convergence will be very fast. Watch the test images; it may only need 1-2 epochs.
The goal is to train on the difference between the two datasets.
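The config in point 4 maps roughly onto a kohya-ss sd-scripts invocation like the sketch below. The paths, model filename, and dataset folder are placeholders, and the flag set assumes a recent version of sd-scripts; the author may well have used the GUI or a different script:

```shell
# Fast-overfit pass (Step 2): high lr, large dim/alpha, constant schedule, no regularization.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path /path/to/base_model.safetensors \
  --train_data_dir /path/to/low_saturation_dataset \
  --network_module networks.lora \
  --network_dim 128 --network_alpha 128 \
  --learning_rate 5e-4 --lr_scheduler constant \
  --optimizer_type AdamW \
  --train_batch_size 8 --max_train_epochs 10 \
  --output_dir /path/to/output
```

For Step 3 you would point `--train_data_dir` at the high-saturation dataset and `--pretrained_model_name_or_path` at the merged model, keeping everything else the same.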
@kitarz Oh, I got it, thanks for your detailed response.
