V2:
I retrained it on a larger dataset with slightly different settings and a lower rank. It degrades the picture less at higher LoRA weights and works better with shorter prompts.
V1:
Trained on 90 pictures for 100 epochs. Works best at 0.5-0.7 strength; above that the output becomes fuzzy. It seems to prefer lengthy and detailed prompts.
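For reference, here is a minimal sketch of loading the LoRA at the recommended strength with the diffusers library. It assumes a Flux base model (suggested by the comments below) and uses a placeholder file name, so adjust both to your own setup.

```python
# Minimal sketch, not the exact workflow used for the sample images.
# Assumptions: diffusers with PEFT installed, FLUX.1-dev as the base model,
# "my_lora.safetensors" as a placeholder path for this LoRA file.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the LoRA and weight it at 0.6, inside the 0.5-0.7 range noted above.
pipe.load_lora_weights("my_lora.safetensors", adapter_name="style")
pipe.set_adapters(["style"], adapter_weights=[0.6])

image = pipe(
    "a lengthy, detailed prompt describing the subject and scene",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```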
Comments (2)
I love these kinds of LoRAs. Unfortunately, image generation times take a massive hit when LoRAs are above 200 MB in size. Any chance you could lower the size? It would be greatly appreciated.
I picked the rank (the main factor determining LoRA file size) based on the gut feeling of somebody (myself) who has no experience in LoRA training, so a lower rank may well work too. I'm currently experimenting with LoRA training for Flux and may retrain it later with a lower rank.
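To make the rank-to-size relationship concrete, here is a rough back-of-the-envelope sketch; the module shapes and count are illustrative assumptions, not the actual architecture behind this LoRA.

```python
# Rough sketch of why LoRA file size scales roughly linearly with rank.
# The module shapes and module count below are illustrative assumptions only.
def lora_size_mb(rank, modules=300, in_features=3072, out_features=3072,
                 bytes_per_param=2):  # 2 bytes per parameter for fp16/bf16
    # Each adapted module stores two low-rank matrices:
    # A (in_features x rank) and B (rank x out_features).
    params_per_module = rank * (in_features + out_features)
    return modules * params_per_module * bytes_per_param / 1024**2

for r in (8, 16, 32, 64):
    print(f"rank {r:>2}: ~{lora_size_mb(r):.0f} MB")
# Halving the rank roughly halves the file size.
```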
I suppose it slows down for you because the model overflows from VRAM into system RAM. I don't have that issue personally, but if you're using ComfyUI and run into it, you can try launching with --reserve-vram <amount>, or patch the code by increasing this value: https://github.com/comfyanonymous/ComfyUI/blob/bb52934ba4e492459c5d3d01c81a8473a9962687/comfy/supported_models.py#L643









