Stable Diffusion = 2 GB, trained on 5B images.
A LoRA = 128 MB, trained on 10/100/300 images?????
This one, for example, was trained at dim 1, alpha 1: yes, 1 MB of filesize.
And with only 3 images.
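To see why a dim-1 LoRA lands around 1 MB, here's a rough sketch of the arithmetic (my own illustration, not the author's code; the layer count and sizes are hypothetical placeholders, not SD's real architecture): each adapted weight matrix gets two rank-r factors, so the added parameter count per layer is only rank × (d_in + d_out).

```python
def lora_params(layer_shapes, rank=1):
    """Total extra parameters added by LoRA factors over the given layers.

    Each (d_out, d_in) weight gets factors B (d_out x rank) and
    A (rank x d_in), so it adds rank * (d_in + d_out) parameters.
    """
    return sum(rank * (d_in + d_out) for d_out, d_in in layer_shapes)

# Hypothetical example: 100 attention projections of size 768x768.
layers = [(768, 768)] * 100
n = lora_params(layers, rank=1)
size_mb = n * 2 / 1e6  # fp16 = 2 bytes per parameter
print(n, round(size_mb, 2))  # 153600 params, about 0.31 MB at rank 1
```

At rank 1 the factors are just a pair of vectors per layer, which is why the file stays in the single-megabyte range instead of 128 MB.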
a portrait of a girl on red kimono, underwater, bubbles
And these too: the style is identical, and the content changes with the prompt.
a portrait of a girl
a portrait of elon musk
unet_lr: 2e-3
network_train_on: unet_only [for styles]
100 repeats / 5 epochs, because of the low number of images.
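As a sanity check on what those numbers mean in practice, here's my own arithmetic (assuming batch size 1, and the usual kohya convention that each image is seen `repeats` times per epoch; the function name is mine):

```python
def total_steps(n_images, repeats, epochs, batch_size=1):
    # kohya-style step count: every image is repeated `repeats`
    # times per epoch, divided across the batch size.
    return n_images * repeats * epochs // batch_size

# 3 images at 100 repeats for 5 epochs:
print(total_steps(3, 100, 5))  # 1500 steps
```

So the high repeat count just compensates for the tiny dataset; the optimizer still sees a normal number of steps.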
//////////////// New training setup
My new training recipe is unet_lr 1e-3, unet only, dim and alpha 1.
cosine with restarts / 12 cycles.
10 repeats / 20 epochs.
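Since the config keeps getting requested, here is a sketch of how this recipe might map onto kohya sd-scripts flags. This is my own reconstruction, not the author's actual config: all paths and the base model are placeholders, repeats are set by the dataset folder name (e.g. `train/10_mystyle`), and you should verify the flags against your sd-scripts version.

```shell
# Hedged sketch of the recipe above as a kohya sd-scripts invocation.
accelerate launch train_network.py \
  --pretrained_model_name_or_path=/path/to/base_model.safetensors \
  --train_data_dir=/path/to/train \
  --output_dir=/path/to/out \
  --network_module=networks.lora \
  --network_dim=1 --network_alpha=1 \
  --unet_lr=1e-3 --network_train_unet_only \
  --lr_scheduler=cosine_with_restarts \
  --lr_scheduler_num_cycles=12 \
  --max_train_epochs=20
```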
⚠️ Trained on an anime VAE, so it needs an anime VAE or it will look fried ⚠️
Comments (10)
Thanks for the inspiration! Because of this, I just trained a style micro-LoRA that I hope to release after testing.
This is really interesting! Would you mind uploading the training data, and maybe the kohya_ss config, in a zip? Thanks!
link to anime vae?)
Would you please share a config screen or zip with us? It would be totally awesome if you could make a small guide though.
Hello, sorry to bother you, I have some questions about the recipe.
Is it possible to train specific characters with it? I made some attempts at a character LoRA, but specific things like the clothes and appearance come out a little different from the original (as if it didn't learn them correctly).
I tried two ways of captioning: one specifying the character's appearance and clothes in the .txt files, and the other letting it learn them on its own; the latter gives the best results.
Also, I tried both 5 and 30 images to check whether the image count was the reason, but the results are nearly the same.
Should I increase things like num_epochs?
I kinda don't understand where and what to edit. Can somebody explain or give a JSON example?
Quality of images is very, very, very lowres. Avoid.
Would appreciate a video where you explain this in detail, and also share the images, vae(s), and model you trained on.
Have you tried this method on photo-realistic character models? Conclusive results?
I want to train a type of face, but not a specific face. For example, if I type "a european woman", I get many different faces. I want my LoRA to create different faces of a type, with variations, but not one specific face. Do you have any tips for that?