LoRA for Maka Albarn from Soul Eater.
Capable of NSFW.
I trained it on 252 pics, a mix of fan art and screencaps, for 20 epochs with a dim/alpha of 32/16 and 1 repeat.
It leans mostly toward a semi-real look, but gets somewhat more anime at higher LoRA weights. Let me know how it goes for you. Read the guidelines and known issues below :)
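For anyone curious how long a run like this is, here's a quick back-of-the-envelope step count from the numbers above; the batch size is an assumption since it isn't stated in the description.

```python
# Rough optimizer step count for the training run described above.
images = 252      # training pictures (fan art + screencaps)
repeats = 1       # repeats per image
epochs = 20       # training epochs
batch_size = 1    # assumption: not stated in the description

steps_per_epoch = images * repeats // batch_size
total_steps = steps_per_epoch * epochs
print(total_steps)  # 5040 steps at batch size 1
```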
Triggers:
Input "MakaAlbarn, 1girl" after the LoRA near the start of the prompt.
Input "light brown hair, green eyes" for more consistency if needed, but I didn't find it necessary.
Input "plaid skirt, gloves, necktie" for classic clothes.
Input "sweater vest" for her non-combat clothes.
NB: these can be placed in the negative prompt if the clothes stubbornly keep appearing.
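Putting the triggers together, a positive prompt might be assembled like this; the LoRA tag syntax follows the common AUTOMATIC1111 convention, and the 1.1 weight is just the favourite from the notes below.

```python
# Illustrative prompt assembly using the triggers above (assumed
# AUTOMATIC1111-style <lora:...> syntax; weight 1.1 per the notes).
lora = "<lora:MakaAlbarn:1.1>"
triggers = ["MakaAlbarn", "1girl"]
outfit = ["plaid skirt", "gloves", "necktie"]  # classic clothes

positive = ", ".join([lora] + triggers + outfit)
print(positive)
# <lora:MakaAlbarn:1.1>, MakaAlbarn, 1girl, plaid skirt, gloves, necktie
```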
Notes:
In my experience, a scale of 0.9 to 1.4 works best; 0.9 gave my best semi-realistic results, but 1.1 was my overall favourite.
My prompt order shown in the sample pictures worked best for me.
It works on other models, though the weight may need adjusting for good results. I used the Anything 4.5, Anything 3.0, AnyLoRA, and Midnight Maple models.
Refer to my pictures for examples and the upscaler I used; keep in mind I used other textual inversions to help with poses and negatives.
Comments (4)
Hey, what's your tagging approach when training? Also, what model did you train with?
Hi 😊 I used this model for training: https://huggingface.co/OedoSoldier/animix/resolve/main/animeScreenshotlikeMix_fp16.safetensors ; this is my first upload that used it.
My tagging is done with the WD14 tagger extension ( https://github.com/toriato/stable-diffusion-webui-wd14-tagger.git ).
Before captioning all the pictures, I load and interrogate one picture of the character at a threshold of 0.25 using wd-v1-4-swinv2-tagger-v2. I then find all the tags that will be essential to the character in every scenario. E.g.: if a girl has orange hair and green eyes, those traits will be consistent whenever you generate her, so I add them to the list of tags associated with the character.
Then I take that list of tags always associated with the character and put it in the "tags to be excluded" box; essentially, I don't want any of my captions to contain those tags. I also add a trigger prompt in the box that appends tags to my captions. That way, all the now-missing tags in my captions are entirely represented by that one trigger tag.
Finally, I batch process all the captions for the images at a 0.35 threshold and get the txt files.
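The caption workflow above can be sketched in Python; the character tags and example caption here are illustrative assumptions, not the actual tagger output.

```python
# Sketch of the caption post-processing described above: tags that always
# describe the character are stripped from every caption and replaced by
# a single trigger tag that comes to represent them all.

# Tags found via the single-image interrogation step (assumed examples).
character_tags = {"light brown hair", "green eyes"}
trigger = "MakaAlbarn"

def process_caption(raw_tags):
    """Drop character-identity tags, then prepend the trigger tag."""
    kept = [t for t in raw_tags if t not in character_tags]
    return [trigger] + kept

caption = ["1girl", "light brown hair", "green eyes", "plaid skirt", "smile"]
print(", ".join(process_caption(caption)))
# MakaAlbarn, 1girl, plaid skirt, smile
```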
"keep in mind the position of your <LoRA> matters."
How so? The LoRA weight adjustment is either applied or it isn't, and the LoRA text is erased from the prompt.
Thanks for pointing that out, I appreciate it.
I honestly remember reading that somewhere, and when I tried it myself at the same seed it seemed to be the case, hence why I believed it.
I tested it out and you're right, it doesn't seem to change anything; maybe I had hires fix on when I tested the LoRA position thing.
Thanks for pointing it out though, I'm going to remove that statement from my description to avoid confusion :)