I think the model is fairly self-explanatory.
It works well at a strength of 1.
Previews were made with novaFurryXL_illustriousV30.
Description
For this dataset I went fairly small compared to the one I originally used, since I could still remember how I did it, and it seems to have worked. It's not an outright win: for my comfort I use 3-6x as many repeats, but it's a tradeoff, since it's 1/4th the images, and it seems to work.
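To see how the tradeoff balances out, here's a quick back-of-the-envelope sketch of why more repeats can offset a smaller dataset. The image counts and repeat values below are made-up placeholders, not the actual dataset numbers:

```python
# Rough sketch: repeats multiply how often each image is seen per epoch,
# so a smaller dataset with more repeats can yield the same step count.
# All numbers here are illustrative placeholders.

def steps_per_epoch(num_images: int, repeats: int, batch_size: int = 1) -> int:
    """Images seen per epoch = images * repeats, divided into batches."""
    return (num_images * repeats) // batch_size

old = steps_per_epoch(num_images=400, repeats=2)  # hypothetical original dataset
new = steps_per_epoch(num_images=100, repeats=8)  # 1/4th the images, 4x the repeats
print(old, new)  # both come out to 800 steps per epoch
```

The flip side, as noted above, is that each epoch takes just as long even though the dataset is smaller, which is part of the "faster and slower at the same time" feeling.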
The Adafactor baking method I use now is about half as fast as the one I used before, but I feel it translates the style better overall, so things like character traits don't get 'burnt in'.
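For reference, this is roughly what an Adafactor setup looks like in kohya's sd-scripts, which I'm assuming is the trainer underneath the EasyTrainingLora scripts; the script name and values here are illustrative placeholders, not my actual config:

```shell
# Hedged sketch: Adafactor optimizer flags for kohya sd-scripts.
# The optimizer_type/optimizer_args flags are real sd-scripts options;
# the learning rate and scheduler values are just example settings.
accelerate launch sdxl_train_network.py \
  --optimizer_type "Adafactor" \
  --optimizer_args "scale_parameter=False" "relative_step=False" "warmup_init=False" \
  --learning_rate 1e-4 \
  --lr_scheduler "constant"
```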
So overall everything is faster and slower at the same time, haha, kind of a balancing act.
My main success story from a few months ago was working out a way to cut the file size down by 1/4th compared to a normal LoRA without a noticeable drop in quality. You can see all of that in the article on my page about how I set up the EasyTrainingLora scripts to bake.
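I won't repeat the article here, and I don't know that it uses this exact method, but for anyone curious, one common way to shrink an existing LoRA is kohya sd-scripts' resize tool, which lowers the network rank after training. The file names and rank below are hypothetical:

```shell
# One common approach (not necessarily the article's method): resize an
# already-baked LoRA to a lower rank, which shrinks the .safetensors file.
# Filenames and --new_rank value are placeholders.
python networks/resize_lora.py \
  --model my_style.safetensors \
  --new_rank 8 \
  --save_to my_style_resized.safetensors \
  --save_precision fp16
```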
The nice thing this one confirms is that I can probably go comfortably with a lot fewer images. I guess it'll take another round of trial and error to see where it breaks.