CivArchive
    1 mb LORA trained in 5 mins that does the same thing as 2.5 gb model but better - 1

    I made this LORA to prove the point I made here. This LORA does the same thing as this model, but better: it's much more portable, 1975 times smaller, more versatile, doesn't require merging, and lets you easily adjust the strength of the effect.

    The training was done on my phone and literally took 5 minutes and 10 seconds; the prep for the training took another 5 minutes. I simply used the 6 examples provided in the model above for training, with no captioning or anything else, using Luisap's method mentioned in my post above.

    For the examples I included, some had no prompt engineering whatsoever, like in the original model; others used prompt engineering and other LORAs.

    Please stop making checkpoints for very specific concepts when there are other, more suitable options, unless there's a very good reason to. Also, when training LORAs for a single character, consider adding other characters and maybe a style too, to make the LORA more useful.

    This is not a direct criticism of the creator of the original model (he works really hard and produces some really cool results, check them out; also, this is for 1.5, not 2.1), or anyone else. But I think if more people did this, it could really improve the output on this website, with much more efficient models that are smaller and can do more.

    Thanks.

    Notes: the strength really depends on how closely you want the images to match the examples in the original; 0.5-0.7 is probably a good range. I also recommend lower steps (12-18) with highres fix.
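
To illustrate what that strength slider actually does: at inference, the low-rank update is simply scaled by the strength factor before being applied to the base weights. A minimal sketch with made-up shapes (this is an illustration of the general LoRA merge formula, not the actual Stable Diffusion code):

```python
import numpy as np

def apply_lora(W, A, B, alpha=1.0, rank=1, strength=0.6):
    """Merge a low-rank LoRA update into a base weight matrix.

    W: (out, in) base weight; B: (out, rank); A: (rank, in).
    `strength` is the user-facing slider (e.g. the 0.5-0.7 range above).
    """
    return W + strength * (alpha / rank) * (B @ A)

rng = np.random.default_rng(0)
W = rng.normal(size=(320, 768))   # stand-in for one attention projection
B = rng.normal(size=(320, 1))     # rank-1 factors, as in this LORA
A = rng.normal(size=(1, 768))

W_off = apply_lora(W, A, B, strength=0.0)  # strength 0 leaves W untouched
W_on = apply_lora(W, A, B, strength=0.6)
print(np.allclose(W_off, W), W_on.shape)   # True (320, 768)
```

Because the update is added on top of the base weights rather than baked in, turning the strength up or down is free, which is the flexibility the description is pointing at.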

    A trigger word isn't necessary.

    And just to be clear, this isn't an extraction.

    Also, admittedly, the title is a bit clickbait-y, since it's not really meant as a direct comparison to the original model; it's just that I really feel more people need to see this.


    Comments (65)

    Gokeg · Feb 8, 2023 · 5 reactions
    CivitAI

    I love how this is surrounded on "newest" by two models that do exactly what you said not to do. I can't wait until people start shifting over to LoRAs properly.

    Aili
    Author
    Feb 8, 2023

    This is exactly why I made this. I knew no one checks the questions section, so no one would see it, and I thought there was no better way to prove a point than to put it into practice and let people see the difference. I don't think the original creator of those models will change, since many people (me included) have already discussed this exact point with him, but he insists on his method. This is more for everyone else, and for people unaware of the newer methods available.

    davizca · Feb 8, 2023

    The thing is, although I love LORAs, a lot depends on what the custom Dreambooth model has inside. Extracting a DIM 300 LORA from a custom 1.5 model trained once on one specific concept (10-20 images for that concept, etc.) is not the same as extracting from a custom model that has gone through something like 9 consecutive trainings (or mixes); the LORA you get won't be the same and won't have the same effect it has in the worked custom model. But yes, they're handy, small and very useful.

    Aili
    Author
    Feb 8, 2023 · 2 reactions

    @davizca This is not an extraction, it's training. I'm not advocating extracting models (which is still very useful); I'm saying don't even make the model in the first place, train the LORA instead if it's a specific concept. Check my original post if you haven't, for more info.

    I haven't tried the original model here since I can't run 2.1 768, but try comparing the results of both. They obviously won't be exactly the same, but I doubt the results from that model will be much better; if they turn out to be better, I suspect it won't be by much, and definitely not worth the size difference and the inflexibility of the model.

    davizca · Feb 8, 2023 · 1 reaction

    @Aili Forget my words. Yeah, I use Kohya_SS to train LORAs with 0.0001 on both the Unet and the text encoder, clip skip 2, workers 1, batch size 2 and bf16x2.

    davizca · Feb 9, 2023

    @Aili BTW.

    What would your settings be here?

    https://gyazo.com/91d48de8ea814dcc95c232a227ae291b

    It would be useful to have a guide from Luisap and you. I'm really interested in contrasting it with my training.

    Aili
    Author
    Feb 9, 2023· 2 reactions

    @davizca You can check the original post; I uploaded a colab set with all the settings. But in short: unet 1e-3, text encoder 5e-5, dim and alpha 1, cosine with restarts / 12 cycles, 10 repeats and 20 epochs.

    I set batch size to 8; the default in the colab is 6. I think this depends on your hardware.

    Luisap's method is unet-only, but I find it's much better with the text encoder, and it added only 300kb for me.

    This is for the 1mb method.

    For my other LORAs I mostly use the default Kohya colab settings, with just minor changes.
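
For anyone unfamiliar with "cosine with restarts / 12 cycles": the learning rate repeatedly decays from its peak toward zero and then jumps back up, 12 times over the run. A rough standalone sketch of the schedule's shape (simplified; the real scheduler in kohya's scripts comes from the diffusers/transformers library and also supports warmup, so treat the step counts here as invented examples):

```python
import math

def cosine_with_restarts(step, total_steps, num_cycles=12, base_lr=1e-3):
    """Approximate learning rate at `step` for cosine-with-restarts."""
    progress = step / total_steps
    cycle_pos = (progress * num_cycles) % 1.0  # position inside the current cycle
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_pos))

total = 1200  # e.g. 6 images * 10 repeats * 20 epochs at batch size 1
print(cosine_with_restarts(0, total))   # peak of a cycle: exactly base_lr
print(cosine_with_restarts(99, total))  # near zero, just before a restart
```

The repeated restarts are why saving checkpoints every few epochs (as suggested further down in the comments) is useful: different restart points can land on noticeably different results.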

    UnionJacked · Feb 8, 2023 · 12 reactions
    CivitAI

    With respect, I've tried djz Infinite Supply and your LORA, and djz Infinite Supply is so much better and higher quality. Simple logic will tell you that a 2gb file will always be higher quality than a 2mb file. Sorry, your LORA is good, but all you proved is that you can make a LORA.

    Aili
    Author
    Feb 9, 2023

    I'd be really interested in seeing the results of both, since I never ran the original model.

    I think if you don't do any prompting at all and set the LORA strength to 1, you won't get very good and versatile results.

    But the few results I had from my experimenting were very similar in quality to the examples provided in the original model. I also think the examples I made and uploaded here look decent even though I made them relatively quickly; I'm sure with more careful prompting you can get even better results.

    I don't think the size argument has any merit here; size really doesn't have much to do with quality. That's not how it works: LORAs work with other models, and I have trained checkpoints in the past that were 4gb and crappy, and a 25kb embedding had better results than that 4gb model.

    axsthxticroot · Feb 9, 2023 · 7 reactions

    @Aili Yikes, man. If you never ran the model, you never used it to make images, which means you took my preview images and trained an extremely limited LORA. I mean, cool story bro.
    I'll use your LORA with one of my actual models and see what happens. Thanks for your energy, I guess ;) maybe try using the tools next time, or just give up.

    You failed bigtime on this one.

    Aili
    Author
    Feb 9, 2023

    @axsthxticroot the results speak for themselves, but ok.

    driftjohnson · Feb 9, 2023 · 5 reactions

    @Aili yes, they do. You made a new LORA from demo images. This is not the same as my model and cannot do what it does. Results do indeed speak for themselves.

    You should not really link to my work, when it had no bearing on it or resemblance to it tbh.

    Aili
    Author
    Feb 9, 2023

    @driftjohnson Dude, the feature to mute certain creators on this website was requested on Reddit a while back specifically to mute your posts, so that kinda says something about the quality of your work, huh? Honestly, you are clearly trolling; I see no point in wasting my time with this.

    The results are there, and people can try for themselves, despite your blatant lying and tampering.

    good day.

    SpliffnCola · Feb 9, 2023 · 2 reactions

    @Aili Feeling is mutual mate, you're like a child throwing a fit when they're caught in a lie.

    Aili
    Author
    Feb 9, 2023

    @SpliffnCola ok

    magpy · Feb 9, 2023 · 5 reactions

    Simple logic doesn't tell you that at all. 2MB is huge if you consider that a 2GB model file contains the information necessary to visualize practically any object, scene, style or figure in existence. 2MB in that context represents about 1/1000 of the space of all its knowledge. Adding a handful of images into the mix of the 2.3 billion it was initially trained on should not add 2GB worth of information.

    If you know anything about linear algebra it should be obvious that matrices can contain more dimensions than necessary to represent the same (or roughly the same) transformation. LoRA just eliminates those, and stores the transformation separately instead of baking it into the initial weights.

    scruffynerf · Feb 9, 2023 · 19 reactions
    CivitAI

    Um, you do see that the original model is 2.1....

    and your lora is 1.5

    #FAIL

    You lose, sorry, try again next time.

    Seriously, Drift is doing amazing models... using 2.1 which is a better engine.
    Make it a 2.1 lora and we can compare.

    Aili
    Author
    Feb 9, 2023 · 3 reactions

    I literally say in the description that this is different (this is 1.5 and that one is 2.1), and I also say that djz's work is really cool.

    The intention was never to step on anyone's toes; that's also why I didn't do a 2.1 version. I'm not trying to "steal" downloads or anyone's work, and this LORA isn't going to affect the original's downloads since it's based on a different base model.

    It's simply to raise awareness and to show that you can get good results for specific concepts with a much smaller size and relatively easily. I don't see the failure in this.

    boyetosekuji · Feb 9, 2023 · 8 reactions

    SD 2.1 sucks anyway; images look like they're made of fiber, or wool.

    vhttps · Feb 9, 2023 · 6 reactions

    2.1 sucks.

    driftjohnson · Feb 9, 2023 · 6 reactions

    @davizca twiddles moustache

    scruffynerf · Feb 9, 2023 · 5 reactions

    @boyetosekuji maybe it's you. just maybe.

    axsthxticroot · Feb 9, 2023 · 5 reactions

    Types " djz "

    SpliffnCola · Feb 9, 2023 · 2 reactions

    @Aili You should work on your communication, then, because your intentions aren't lining up with your words and actions on this website. If the intent was to lift up the creator and raise awareness of LORAs, then there was no need for the antagonistic language in your description, or in ALL of the comments I've read from you on this LORA page.

    Aili
    Author
    Feb 9, 2023 · 1 reaction

    @SpliffnCola I don't think my language in the description was antagonistic at all; I even praised their work. I just didn't agree with the method. Maybe I was antagonistic toward the method, but I don't see an issue with that, since it isn't personal, and it wasn't even aimed at them in particular. In my comments I was extremely reasonable until today, when I pointed out that their review misrepresented the results because they used a ridiculous 1.7 weight for the LORA, which I thought must be a mistake. Instead they continued their attack and doubled down, which honestly feels like they were being antagonistic, so after much give and take I did the same in return. Look at my earlier comments. My last comments are admittedly antagonistic, but that was a natural result of how they responded and acted.

    That said, sure, maybe I could have been a bit more tactful, and maybe I should work on my communication in general. Thanks for the advice.

    scruffynerf · Feb 9, 2023 · 3 reactions

    @Aili DJZ is an active member of our Discord community... and contributes to the growth of the community.... you? not so much. So perhaps you would do better to learn to be a good neighbor and play nice with others.

    scruffynerf · Feb 9, 2023 · 1 reaction

    @Aili Also, weights like 1.2 or 1.5 or 1.7 work AMAZINGLY well; go look at the results of doing this with the microworld lora I did (with the active consent of its creator).

    packet · Feb 9, 2023 · 6 reactions
    CivitAI

    I think what would help people shift from creating checkpoints to LoRAs would be an updated simple guide for setup combined with a more in-depth guide that covers multiple use cases (like creating a LoRA for an art style vs creating one for a character). I had to piece information together from multiple sources to create something useful. Granted, this is the way of all things SD related for the time being but it was still more time consuming than expected, and I can see people giving up on it easily.

    Aili
    Author
    Feb 9, 2023 · 1 reaction

    I'm by no means an expert, so I don't know if I should be the one to do it, but I might look into making a simple tutorial. Although, if someone can make a video, it would probably be better; I can help whoever is willing to do that as well, if there are volunteers.

    Radical0reamer · Feb 9, 2023
    CivitAI

    Very interesting post. But how did you make a 1mb LORA .safetensors file rather than a .pt file? How did you make such a small, ready-to-use file without extraction?

    Aili
    Author
    Feb 9, 2023

    Thanks! Check my original post; it's based on Luisap's method, I linked it there.

    arnobhaque305 · Feb 10, 2023 · 4 reactions
    CivitAI

    This has caused more drama than was necessary, but I absolutely agree with the idea of minimizing storage requirements where possible. There are so many cool Dreambooth models out there, but I'm often discouraged by the large size. You can always try merging, but LoRAs have the benefit and possibility of dynamically running while staying apart from the base model.

    Aili
    Author
    Feb 10, 2023 · 2 reactions

    Honestly, I agree. I did think some might disagree with me, but I never thought it would get personal at all; it was a general criticism of a method and advocacy for what I think is a better alternative.

    I mean, sure, I could have just kept my original post up without posting this, but no one would read it. I think this definitely made the issue more visible to people, especially since I put what I'm advocating for into practice.

    davizca · Feb 10, 2023 · 2 reactions

    The thing is that you can also merge LORAs... In terms of what a LORA can and cannot do versus what a Dreambooth can and cannot do, the outcomes are very, very similar; probably more so than some people thought, and in some cases the LORAs are not as overtrained (after an extraction from an overtrained model). From what I saw, LORAs will eat Dreambooth soon, but the scene also needs Dreambooths to make proper extractions. For example, I trained several specific custom models just to extract the LORA from them and apply it to a good custom Dreambooth model. This could be worthwhile in terms of not only size, but also style, variability, etc.

    So probably more people will make a temporary Dreambooth to extract the LORA from, instead of training one directly (which is the main thing; this was more an idea).

    arnobhaque305 · Feb 10, 2023

    @davizca I'm aware you can merge LoRAs as well, but I prefer not to, due to the ability to run them dynamically, as I'd mentioned. In addition, even if a LoRA is overtrained, it becomes conveniently quick to try different strengths and monkeypatch.

    And do you happen to have comparisons of your Dreambooth extractions against the base models? I personally tried taking the difference between SD1.5 base mixes just to see what it'd be like, but never got around to trying the technique on Dreambooths, where it'd make more sense.

    davizca · Feb 10, 2023 · 2 reactions

    @marquis_de_gucci I'm still looking into that. I need to answer the question: is it better to train a single LORA directly, or to make a custom Dreambooth and extract the LORA from that, using the same datasets? For now, my conclusion is... I need to do more tests.

    nej · Feb 10, 2023 · 2 reactions
    CivitAI

    How do you make a LORA on your phone? I can't even figure out how to make them on a shitty GTX 1050..

    Aili
    Author
    Feb 10, 2023

    Using a Google Colab; check my original post in "questions" for more info.

    r3l4x · Feb 10, 2023 · 5 reactions
    CivitAI

    I'm so happy people are bringing this up. So many lazy people here on Civit are content with their 3gb waste-of-space models (djz).

    Aili
    Author
    Feb 10, 2023

    I honestly have no idea why they are so insistent on their method, even after much back and forth with them.

    I do think that at least theirs is "meant" to be merged, which is better than many other models that cover a specific concept without even the intention of merging, but I still don't see how it could be in any way a good idea, or any better than the alternatives.

    r3l4x · Mar 29, 2023

    @Aili I hope you have learned by now...

    odawgthat · Feb 28, 2023
    CivitAI

    This is super impressive! I would love to know your workflow and how you managed to squish a 2.4GB file into 1.5mb! I only work in LoRAs too, because they are WAY more flexible than checkpoints. Thanks for proving to everyone how powerful they can be.

    Aili
    Author
    Mar 1, 2023

    Thanks! I pretty much explained everything in the description; I can answer specific questions if you have them.

    In this LORA in particular, simply using the 6 example images worked, since it's such a specific concept, and that was my point: if it's that specific, you don't need a checkpoint for it.

    odawgthat · Mar 1, 2023 · 1 reaction

    @Aili You legend, only needed 6 images to get the same results as a checkpoint. Love your work, keep it up!

    Aili
    Author
    Mar 1, 2023

    @odawgthat Thank you !

    odawgthat · Mar 3, 2023

    @Aili I have a question. I need to train a LORA on a very specific image; it's a pattern, and I have rotated it several times and inverted the colours to give me 12 unique images. What settings would you recommend to get similar results to it every time? Also, would it be possible to make it 1mb?

    Aili
    Author
    Mar 3, 2023

    @odawgthat For all my LORAs, I either use the basic Kohya colab settings with only tiny modifications depending on the LORA, or I use Luisap's method (mentioned in my original post, linked here in the description), which gives the 1mb LORA.

    I think the regular method gives better results, but the file size is 144mb. If it's an important LORA for me and I want it to be more accurate and don't care about the size, I just use the regular method; otherwise, the 1mb method can give good results as well.

    Try the Luisap method first; with only 12 images it probably won't take more than 15 mins. If it doesn't give good results, try the regular method.

    Also, make sure you let it save every 4 or 5 epochs, to give yourself more options and to try it out at different stages of training.

    odawgthat · Mar 3, 2023

    @Aili That's some good advice. I tried his method, but it didn't work; I didn't have an option to change the alpha to 1. There is something called network alpha, but that only goes down to 4. I managed to change all the other settings though, and I set the text encoder to 5e-5 like you said.

    Aili
    Author
    Mar 3, 2023

    @odawgthat Are you using the colab version or the auto? On the colab, the network alpha can go down to 1; not sure about auto, as I've never used it.

    odawgthat · Mar 3, 2023

    @Aili I am using the kohya_ss GUI locally. Btw, if I wanted to generate images that were 95% close to the original images, what learning rates and settings should I use? Also, do you know how to get images symmetrical?

    Aili
    Author
    Mar 3, 2023

    @odawgthat Not too sure, tbh; I mostly use the two methods I mentioned. My guess is to try the regular method, and maybe increase the repeats beyond 10? I've never tried that, but it could work. You probably need to train it a lot, though, and keep experimenting with the results as the training goes along, maybe 40-50 epochs. For symmetry, adding "symmetrical" to the prompt and going hard on the weights should do it. As for training, I think it just depends on the dataset; I don't think you can do anything else to influence symmetry while training. Maybe if you had some images that are symmetrical and some that aren't, you could manually caption the symmetrical ones and add "symmetrical" to their captions.

    odawgthat · Mar 4, 2023 · 1 reaction

    @Aili Thanks a lot for the advice! I've tried all sorts, ranging from 100 steps to 3000; it gets close, but it never seems to be able to recreate the images :( I appreciate you replying though, thx for your time. I guess I'll just have to keep experimenting.

    Aili
    Author
    Mar 4, 2023

    @odawgthat If you'd like, send over the dataset and I'll try the Luisap method and see if I get any better results.

    Aili
    Author
    Mar 4, 2023

    @odawgthat Btw, I've never tried this, but have a look: you can train an embedding on a single image. https://github.com/7eu7d7/DreamArtist-sd-webui-extension

    odawgthat · Mar 4, 2023

    @Aili Oh wow! I'd better try this. Thx a lot, man.

    DJHILMAR · Mar 26, 2023 · 3 reactions
    CivitAI

    Awesome, friend!!! You do really good work! Please, you should make a tutorial, since there isn't one in Spanish.

    Aili
    Author
    Mar 26, 2023

    Sorry, mate, I don't speak Spanish; I can help you make a tutorial in Spanish if you speak English.

    DJHILMAR · Mar 30, 2023

    Yes, in English please... I'd appreciate it a lot; in English there's no problem. Please, friend.

    Aili
    Author
    Mar 31, 2023

    @DJHILMAR It's not exactly a tutorial, but I wrote this post, which should guide you in the right direction; feel free to ask questions.
    https://civitai.com/questions/148/guideish-for-training-a-stylemultiple-characters-in-a-single-lora-my-tips-and-thoughts-for-creators-on-improving-the-content-on-this-website

    sonuganim807 · Apr 10, 2023 · 5 reactions
    CivitAI

    Dude, I'm impressed... really... I've never commented on anything before on Civitai. This is just so great. Are you on Discord???

    Aili
    Author
    Apr 12, 2023

    Thanks, I do have a Discord, but I don't use it often.

    AdriBoc · Apr 19, 2023 · 1 reaction
    CivitAI

    It's not only impressive in itself, it's also made by the guy who created the Tokiame Lora. Much Respect deserved ^3^

    Aili
    Author
    Apr 19, 2023

    Thank you !

    ParrleyQ · May 6, 2023 · 8 reactions
    CivitAI

    So I was wandering around, trying to collect info before I start making a LORA for my own uses. I wonder if the method you mention still works today? If possible, can you make a more detailed guide that goes through the process of making this kind of LORA?

    flytherion · Apr 11, 2024
    CivitAI

    I can't believe this was made in such a short time. That is incredible!

    LORA
    SD 1.5
    by Aili

    Details

    Downloads
    6,085
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/8/2023
    Updated
    5/13/2026
    Deleted
    -
    Trigger Words:
    doortoinfinity

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.