CivArchive
    Vegeta | Dragon Ball Z - v1.0

    Resource used: https://civarchive.com/models/22530

    This was requested through Google Forms

    Weight: 0.8 - 1.0

    Trigger + Usual Appearance: vegeta, black spiked hair, black eyes

    Default Outfit: armor, white gloves

    Super Saiyan: super saiyan, blonde spiked hair, blue eyes

    This next section covers my thoughts and addresses a piece of feedback I received through my form (it'll also explain why the file size is larger than usual, but it's a long read).

    I first want to start off by saying thank you to everyone who sent requests, left positive reviews, and commented. I have released so many character LoRAs (with a few styles sprinkled in there) in the span of about a week. This is all thanks to Hollowstrawberry's Colab, which is always linked at the top of each model description. I am also able to get these models out so quickly because I have chosen to pay out of my own pocket for Colab Plus. This is indicative of an addiction, as you can tell lol. You should've seen how I scour for and work with images (and god forbid, webp files) not typically found on booru sites.

    Now moving on to the feedback that I received overnight. I received some resources concerning the training parameters of a LoRA model. I always use the base NAI model, of course; it's the default setting in Hollowstrawberry's Colab, just a click or two away. It makes the most sense because it then translates well to other mixed models (that, or I am just speaking out of my ass; I just like to generate pictures). Here are the parameters that I used for the Vegeta model:

    • I had 133 images of Vegeta to work with, all of them taken from Gelbooru using Hollowstrawberry's dataset maker.

    • Resolution: 768 (been that way for 95% of all my models)

    • Number of Repeats: 14 (maybe could've done 20)

    • Epochs: 3

    • unet_lr: 1.5e-4

    • text_encoder_lr: 1.5e-5

    • lr_scheduler: cosine_with_restarts

    • lr_scheduler_number: 3

    • lr_warmup_ratio: 0.05

    • min_snr_gamma: checked

    • network dim/network alpha: 128/64 (went off the guide a bit, since the Colab recommends that alpha be half the dim)
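    For context, the training length implied by these numbers can be worked out directly. This is a rough sketch assuming a batch size of 1; the Colab's actual batch size may differ, which would divide the step counts accordingly.

```python
# Rough step math for the parameters listed above (batch size of 1 is an
# assumption, not something stated in the description).
images = 133
repeats = 14
epochs = 3
warmup_ratio = 0.05

steps_per_epoch = images * repeats              # 1862
total_steps = steps_per_epoch * epochs          # 5586
warmup_steps = int(total_steps * warmup_ratio)  # 279

print(total_steps, warmup_steps)
```

    So at these settings the model sees each image 42 times; bumping repeats to 20 (as mused above) would push that to 60.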

    What I had done in my previous models was pretty much leave everything as it was in the original Colab. Those gave me good results in CitrineDreamMix. Honestly, if I had a faster computer and more time, I would definitely include more sample images with MeinaMix, AniReality-Mix, and AmbientMix.

    I also went back to my Clementine model and checked the reviews. Some might not have gotten the armor completely right (I think it was the ranking tags that make up the armor? It's been a while since I watched Overlord), and some might have lost her features like hair color and eye color (though I should start saying "add this or that if necessary" from now on). One of the reviews also altered part of her armor by removing the breastplate to show her breasts. I try my best for flexibility and customizability in my character LoRAs, but I know it doesn't always hold up, like whenever I try modern streetwear on a fantasy character.

    Now, I have pretty much zero clue what an overbaked LoRA looks like. I've downloaded more than a hundred models since I created this account three months ago; most of them work fine, and then I just kinda forget about them afterward. The only hint I got from Googling what an overcooked/overbaked LoRA looks like is the immense sharpening it puts on the image, which is the result of overtraining or something? I should've paid more attention in my machine learning course.

    As for the resources I got from the feedback, I'm gonna keep them. I was actually going to see if I could bake a LoRA on my laptop (the VRAM's good enough, but the time it'll take to bake isn't amazing, as you can imagine). I used those parameters here and it works fine, so why not on my laptop as well, right?

    If you're a reader who made it all the way to the end (or you skipped here, which is fine; you're not missing a whole lot if you don't know anything about training a LoRA, because I don't either!), then leave me your thoughts below! Are the models I made with the default parameters fine, or do you think the parameters I used here are far superior? Storage shouldn't be a concern for me because I have a terabyte of free space on my hard drive (and 3 more on my other hard drive, but that's for something else entirely), but I'll probably turn the network dim and alpha back down. It helps to save some storage space, but if you want me to leave them alone, just let me know.

    P.S. Since I'm writing a bunch of stuff down here, I'm just gonna announce that I'll be taking a break. Sorta. Kinda. You won't be seeing any releases from me for a while, but know that I am still gathering datasets from your requests (it's usually quick anyway, depending on how much art has been published on Gelbooru). The form is still undergoing revisions. Again, thanks for all your support!

    P.P.S. Someone requested Aomine Daiki and I was planning to bake a character that shares the same voice actor as him, so... đź‘€

    Description

    Initial release.


    Comments (8)

    SonGoku · Apr 22, 2023 · 3 reactions

    Hey, I'm in first place

    seaker · Apr 22, 2023

    can you send me a link or tell me how you make LoRAs? i wanted to try and make one

    justTNP (Author) · Apr 22, 2023 · 1 reaction

    Hollowstrawberry's guide can be found here: https://civitai.com/models/22530

    Let them know if you have any questions or issues, good luck baking! 👍

    seaker · Apr 22, 2023

    @TNP119 how did you prompt the images? i did the auto-prompt thing but it didn't really describe the images well, and i had to edit them, but then it made 2 Google Docs for the same image. i feel like that's way more work than i should be doing

    126275 · Apr 22, 2023 · 1 reaction

    hmm, maybe try finding some different training settings? I think the model you use is heavily geared towards realism, which NAI is not good with, since it's fully trained on millions of anime images. Unless I'm wrong, you're training the LoRA in a way where it highly overfits so that it can be accurately represented in a realistic model (or rather, a model that is already highly overfit by default); then when someone tries to use it with any other model, it just fries the image (oversaturates/oversharpens) and the character is semi-stuck in a similar pose/composition/background/style without even prompting for it.

    also, I'm not sure since I'm not using Colab, but the NAI in Colab might not actually be the animefullpruned model that everyone is using?

    another thing that comes to mind is vae; if you're baking in your vae it's usually gonna fry it for other people who are using their own default vae.

    This is widely considered the only relevant VAE: https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt, but again, it's best not to bake it in regardless. I tried the Ritz LoRA and it was "ok"; I needed to turn down a bunch of stuff, but it portrayed the character features well despite the low image count. I couldn't really do much with the character, though, since the lower you go in CFG, the more relevance the prompt loses. Idk if any of this is helpful, but you're doing a lot of cool characters for everyone, so I ought to at least try.

    dita · Apr 23, 2023 · 2 reactions

    I may have it wrong, but after asking ChatGPT to explain the difference between overcooked and overfit, here's my understanding:

    Overcooked (or overbaked, or technically, overtraining) means that it's been trained for too long. Too many repeats/epochs, whatever. As a result, you get the kind of sharp, edgy look like cyan8 mentioned.

    Overfit models are ones that have learned the dataset used for training too well. As a result, they become rigid. If you train on 20 pictures of an astronaut riding a horse, then try to use that LoRA to change the astronaut to a firefighter, you're not going to get good results. And if it's really overfit, then you wouldn't even be able to remove the astronaut's helmet or view it from a different position, etc. It's too inflexible. And I think that primarily results from having too small a dataset for training. If you've only got 2 or 3 images, it's not going to learn about other possibilities that it could make and still satisfy the prompt. Or, putting it another way, it will come to think that "an astronaut riding a horse" can only wear a white spacesuit, can only wear a helmet, can only ride a brown horse, that the horse can only have its front left hoof raised, that it can only be during the daytime, that it can only be in a grassy field, etc., etc.
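    That memorization failure can be sketched with a toy analogy (not an actual diffusion model; all names and strings here are purely illustrative):

```python
# Toy analogy for overfitting. A "model" that memorizes its training pairs
# verbatim reproduces them perfectly but generalizes to nothing new.
training_data = {
    ("astronaut", "horse"): "white spacesuit, helmet, brown horse, daytime, grassy field",
}

def overfit_model(subject, mount):
    # Pure memorization: only exact training inputs produce an output.
    return training_data.get((subject, mount))

def flexible_model(subject, mount):
    # A model that learned the concepts separately can recombine them freely.
    return f"{subject} riding a {mount}"

print(overfit_model("astronaut", "horse"))    # the memorized scene
print(overfit_model("firefighter", "horse"))  # None: no generalization
print(flexible_model("firefighter", "horse")) # recombines just fine
```

    The overfit version nails its one training example and fails on everything else, which is exactly the "can't swap the astronaut for a firefighter" behavior described above.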

    All that said, I've been playing most of today with the Leona Kingscholar LoRA you made at my request. (Thanks again!) It doesn't seem to be overfit. I've been able to remove his shirt, make him lie on the bed, etc., with no difficulty at all. But the LoRA is maybe a bit overcooked. I've had to turn the weight way down to get an image that was semi-crisp with no garbage in it. That could also be because of the particular models I've been trying to use it with, however, so I can't say with certainty. (Btw, I'll post some images and comments when I get some that are shareable.)

    As for whether you should change your training parameters, that I couldn't say. I don't know how things like the learning rate or the other parameters affect overcooking and overfitting. All I can do is give my non-technical impression of working with a LoRA that you made. (I probably should've taken the AI class, which was an elective for the major when I was in school.) I might suggest that you train it a little less, although again, I haven't determined with certainty whether the issue is with the LoRA, the model, or the way the two of them work together.

    Does that help you at all?

    justTNP (Author) · Apr 23, 2023

    Yes! Good thing being a computer science major actually helps with understanding all of this.

    dita · Apr 23, 2023

    @TNP119 Indeed. It's been a long time since I was in school, but I still remember a lot of concepts. My focus was more on operating systems, cryptology, and algorithms, though. I never learned the first thing about AI and ML, and it always seemed like magic to me. Since I started playing with SD, though, I've learned that "magic" is mostly just math and data structures — just like so many other things in CS.

    LORA
    SD 1.5

    Details

    Downloads
    2,659
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/22/2023
    Updated
    5/13/2026
    Deleted
    -
    Trigger Words:
    vegeta
    spiked hair

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.