Better poses for lying with elbow support
This version is intended for SDXL; a version for Pony is available here: LyingWithElbowSupport Pony.
Usage Hint:
Use strength 0.8.
Trigger words:
lwes_both, lying. (A further keyword, 'lwes_one', is planned but not working yet.)
Build your prompt like this:
<lwes_both>, girl lying ...
Sample:
lwes_both, girl lying on beach. The girl wears ...
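If your UI uses the AUTOMATIC1111/Forge LoRA syntax, the recommended strength of 0.8 can be set directly in the prompt. The file name below is only a placeholder; use the name of your downloaded LoRA file:
<lora:LyingWithElbowSupport:0.8> lwes_both, girl lying on beach. The girl wears ...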
Even though this LoRA is trained only on SFW images, it can be used with an NSFW checkpoint to produce NSFW content.
Description
Prerelease: currently only the keyword lwes_both is working; the further keyword 'lwes_one' will come in the next release.
Comments (8)
In my opinion, for simple concepts like a pose you should use a really low rank so you don't risk the LoRA learning other things; you don't really want to teach it anything new or any details, just the pose. But thanks!
Can you explain in a bit more detail? Do you mean I should use fewer network dimensions? I am very new to creating LoRAs, so something you think should be obvious to me may not really be clear to me...
@niper53 Yes, precisely that. For my Stringer shirt, for example, I didn't need to teach anything new; the model already knows what a shirt and a tank top are. But I wanted a shirt with large armholes, and the model rarely did that. So I only wanted to guide it in that direction, not teach it any details about the character, background, color, camera, or quality. A lower Network Rank (Dimension) works better for this. I used a rank of 8 with a Network Alpha of 1. It also has the benefit of a smaller LoRA file: the SDXL version is only 50 MB and the SD15 version is 9 MB.
But if the model actually needs to learn something new, I would recommend (for a pose) experimenting with higher ranks. My (NSFW) "male ass from behind" pose LoRA, for example, needed to learn the whole concept of anus and penis (which the model knows nothing about), so I used a higher rank.
In your case, lying IS a difficult concept, so maybe experiment with rank 16 or higher. I think a 324 MB LoRA is probably too much. But all of this would need testing; it's just speculation from my experience.
@diogod Thanks! I'll try...
@diogod Many thanks!!! Your tip helped me with my current LoRA. First I created a LoRA 'RidingLawnMower'; since the base model did not know 'tractor lawnmower', I needed a large LoRA (64 dims) to teach it, and it worked well. Then I wanted a LoRA 'RideQuad' and the results were bad. I think this was because the base model already knows about quads. So I reduced it to 32 dims, and the results were better. I'll keep this in mind for further LoRAs I create.
@niper53 Glad it helped! The rank/dim is very important. Sometimes the downside can be mitigated by a well-captioned dataset, but if the concept is not new and the rank is too high, the model will surely learn something in addition to your concept, like colors, artifacts, or something random. In the best case it will just overcook sooner and you will pick an earlier epoch, but most of the time it will give worse results than a lower, appropriate rank.
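To make the rank/alpha settings discussed in this thread concrete, here is a minimal sketch of how a low-rank LoRA could be attached to an SDXL UNet with Hugging Face's diffusers/peft libraries. This is only an illustration under the assumption of that toolchain (kohya-ss sd-scripts exposes the same settings as network_dim and network_alpha); the values and module names below are not taken from this model's actual training setup.

# Minimal sketch (assumption: training with diffusers + peft rather than kohya-ss).
# The point from the discussion above: rank (r) and alpha control how much
# capacity the LoRA has to learn beyond the intended concept.
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)

# Low rank/alpha for a concept the base model mostly knows already (e.g. a pose):
lora_config = LoraConfig(
    r=8,                 # network rank / dimension
    lora_alpha=1,        # network alpha
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # SDXL attention projections
)
unet.add_adapter(lora_config)

# For a genuinely new concept, a higher rank (e.g. 16, 32, or 64) gives the adapter
# more capacity, at the cost of larger files and a higher risk of also picking up
# colors, backgrounds, or other unrelated details from the training images.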
Any flux plans? This is great!
I try to release a Flux version of all of my models. But lying people in particular are very difficult in Flux, so I don't know if I will succeed.
