Style1:
No idea what this style is called, so it's just "Style1".
AIU 0.1:
Mostly a skin-texture LoRA, with a slightly different style compared to tpony.
(trigger words: sweat, wet, oiled, shiny skin)
VEY 0.1:
I'll probably have to train this one again. Sometimes it doesn't come out very different from the Pony base style, which I don't like.
ALH 0.1:
Not one of my favourite styles, but I trained it to see how it would turn out.
CYR 0.1:
Seems to be a popular style; I've seen lots of AI artists using it on Pixiv. Not sure I got it right though. I'd like to see some results!
MOC 0.1:
The "moc" text (the original AI creator's watermark) shows up from time to time.
RMS 0.1:
Feels like a model I shouldn't have made; it's just not a good style for me. It's intended to create plump females, at least that's what the dataset looks like.
Anyway, it's uploaded (use a weight around 0.5 - 1.0).
It's kind of similar to OrangeMix and other SD 1.5 styles, isn't it?
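For the weight range above, a minimal prompt sketch (assuming an A1111-style webui; the LoRA filename and the rest of the prompt are illustrative, not the actual file name):

```text
masterpiece, best quality, 1girl, <lora:RMS_0.1:0.7>
```

Lower the `0.7` toward `0.5` if the style overpowers the base model.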
STS 0.1:
For some reason "blush" acts like a trigger word, even though I enabled caption shuffling,
so you can't generate anyone without a blushing face. Sucks, of course; guess I'll train a new one.
STS 0.4:
Okay, "blush" is finally no longer the trigger word. Unfortunately I couldn't remove the trigger behaviour completely, so I set "TSS" as the trigger word.
PLM 0.1: BAD LORA
CKS 0.2:
Requested LoRA. Kind of similar to the t-pony style. Trigger word: cksxin
Dataset credits to @yeyebeixin (provided a 10 GB dataset of about 4,600 images).
RAR 0.1:
trigger word - RAR
Comments (12)
Oh, the texture is a wonderfully likeable style.
I'd like to use it with an Animagine version; can you make one?
Will it work though? I mean, I trained tpony because Pony-based models are good with multi-person interactions, and this LoRA's dataset is 60-70% multi-person images. Anyway, I'll try.
By the way, got any way to train faster? It takes me about 9 hours to finish one LoRA. (If you know any tips or something, let me know.)
I think if you set the base training model to 3.1, it will be fine.
The only way I can think of to train faster is to increase the batch size, but that's difficult, because too large a batch size breaks things and consumes more memory.
For reference, when I create a LoRA, I set the batch size to 5 and let it train.
@mumumu1295 Batch size 5 is impossible for me; max is 2. (RTX 4070, 12 GB)
@Velox24 My PC has much lower specs than yours, so I can't train locally. (Colab also takes a long time.)
It takes time, but the advantage is that you don't have to use your local GPU. For proper training, you need to pay for Colab Pro.
@mumumu1295 Hmm, I guess I'll stick to local; I've got no money for Colab.
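On the batch-size vs. VRAM point above: gradient accumulation is a common way to get the effect of a larger batch on a small GPU, by averaging gradients over several small micro-batches before each optimizer step. A toy pure-Python sketch of the idea (not the actual trainer being discussed; in real training tools this is usually exposed as a gradient-accumulation-steps option):

```python
# Gradient accumulation: emulate a large batch on limited VRAM by
# averaging gradients over several equal-size micro-batches before
# updating. Toy 1-parameter linear model (y = w * x) with MSE loss.

def grad(w, xs, ys):
    """Mean-squared-error gradient over one micro-batch."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def train_step(w, batches, lr=0.01):
    """One optimizer step: average gradients across micro-batches."""
    g = sum(grad(w, xs, ys) for xs, ys in batches) / len(batches)
    return w - lr * g

# Effective batch of 4 split into two micro-batches of 2:
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
micro = [([1.0, 2.0], [2.0, 4.0]), ([3.0, 4.0], [6.0, 8.0])]
full = ([x for x, _ in data], [y for _, y in data])

w = 0.5
w_accum = train_step(w, micro)   # two micro-batches of 2
w_full = train_step(w, [full])   # one batch of 4
print(abs(w_accum - w_full) < 1e-9)  # True: same update either way
```

The update is identical because the micro-batches are equal size; the trade-off is wall-clock time per step, not memory, so it fits a 12 GB card where batch size 5 won't.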
Oh, the Animagine version. Thank you.
You are a god.
I think it's great that you can make something this good, even if it takes a long time.