This LoRA is a personal experiment; it's bad...
If you want to try this LoRA, I recommend hassakuXLIllustrious_v21fix with DPM++ 2M Karras (~3.5 CFG, ~35 steps) and hires fix.
notes:
str = 0.75-1.25;
trained with illustriousXL_v01;
trigger = yuming li artstyle;
trained with an fp8 UNet.
logs:
v2 -> r=α=16, size=~22mb (~20x smaller than v1), target="^.*to.(q|v).*$", less image training data than v1;
v1 -> r=α=256, size=~500mb, target="^.*(to.(q|v)|ff.net.0.proj).*$".
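The size drop between v1 and v2 follows from the log entries: rank went from 256 to 16 (16x fewer parameters per targeted layer), and v2 also dropped the ff.net.0.proj targets. A rough sketch of the arithmetic, using assumed SDXL-like layer dimensions (the real checkpoint mixes several hidden sizes, so this is order-of-magnitude only, not the exact 500mb/22mb ratio):

```python
def lora_params(d_in, d_out, r):
    # A LoRA pair adds an (r x d_in) down matrix and a (d_out x r) up matrix.
    return r * (d_in + d_out)

# Hypothetical single attention block: hidden size 1280, with an
# assumed ff.net.0.proj inner dim of 5120 (illustrative values only).
d, ff = 1280, 5120

# v1: r=256, targets to_q, to_v, and ff.net.0.proj
v1 = 2 * lora_params(d, d, 256) + lora_params(d, ff, 256)
# v2: r=16, targets to_q and to_v only
v2 = 2 * lora_params(d, d, 16)

print(v1 // v2)  # per-block parameter ratio
```

The per-block ratio lands in the tens; the on-disk ratio (~20x) is smaller because the real network mixes block widths and the file carries metadata overhead.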