CivArchive

    AnyLoRA

    Add a ❤️ to receive future updates.
    Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee

    For LCM read the version description.

    Available on the following websites with GPU acceleration:

    Remember to use the pruned version when training (it needs less VRAM).
    Also, this model is mostly meant for training on anime, drawings and cartoons.

    I made this model to ensure my future LoRA training is compatible with newer models, and to get a model with a style neutral enough to reproduce accurate styles with any style LoRA. Training on this model is much more effective compared to NAI, so in the end you might want to adjust the weight or offset it (I suspect that's because NAI is now heavily diluted in newer models). I usually find good results at 0.65 weight, which I later offset to 1 (very easy to do with ComfyUI).

    This is good for inference too (again, especially with styles), even though I made it mainly for training. It ended up being super good for generating pics and it's now my go-to anime model. It also uses very little VRAM.

    Get the pruned versions for training, as they consume less VRAM.

    Make sure you use CLIP skip 2 and booru style tags when training.
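    For the curious, "CLIP skip 2" just means conditioning on the text encoder's second-to-last hidden layer instead of the last one. A toy Python sketch of the indexing (not a real CLIP encoder; the layer names are made up):

```python
# Toy illustration of CLIP skip: the text encoder produces one hidden
# state per transformer layer, and "CLIP skip 2" means conditioning on
# the second-to-last one instead of the final one.
hidden_states = [f"layer_{i}_output" for i in range(12)]  # 12 layers

def pick_hidden_state(states, clip_skip):
    # clip_skip=1 -> last layer, clip_skip=2 -> penultimate layer, etc.
    return states[-clip_skip]

print(pick_hidden_state(hidden_states, 2))  # layer_10_output
```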

    Remember to use a good VAE when generating, or images will look desaturated. Or just use the baked-VAE versions.

    Description

    FAQ

    Comments (29)

    Noobbuddy123Mar 26, 2023
    CivitAI

    Can someone please explain to me what the PRUNED and NOT-PRUNED versions are, and what fp16 and bf16 mean?

    Bl4ckF0xMar 27, 2023· 1 reaction

    Pruned basically means "some data you don't need is removed"; for example, the EMA weights, which are only useful if you want to continue training the model.

    sheevlordApr 21, 2023· 1 reaction

    FP16 is a 16 bit floating point number format. BF16 is similar but it sacrifices some precision to increase the range of values. Only relatively new GPUs support BF16.

    sheevlordApr 21, 2023· 1 reaction

    Wikipedia has an article about BF16. Can't post a link because of anti-spam filters, but look up "bfloat16 floating-point format" and you'll find it

    RazielAUMay 4, 2023

    As @sheevlord said, BF16 stores a wider range of values. To be more specific, it stores the same range of values as a 32-bit float, but at lower precision. Tests were done, and for AI it was found that the range of values matters far more than the precision; hence, bfloat (or "brain float") was born. The problem with a standard 16-bit float is that once a value passes a certain point, it essentially gets capped at the maximum value a 16-bit float supports, so you lose information, which affects inference quality. It's been found that bfloat offers almost identical prediction quality to 32-bit float models, but at half the cost (that is, if your hardware supports it). Also, the end results are pretty theoretical: on paper it's better, but in practice you'd be hard-pressed to actually notice a difference...
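    To illustrate the trade-off: float16 tops out at 65504, while bfloat16 keeps float32's 8-bit exponent (and therefore its range) and gives up mantissa bits instead. A small standard-library Python sketch, using the fact that bfloat16 is just the top 16 bits of a float32:

```python
import struct

def to_bf16(x: float) -> float:
    """Round-trip a float through bfloat16 by keeping only the top
    16 bits of its float32 representation (truncation, not rounding)."""
    bits32 = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits32 & 0xFFFF0000))[0]

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE half precision (struct 'e')."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_fp16(65504.0))   # 65504.0 -- the largest finite float16
print(to_bf16(1e38))      # ~1e38 -- bfloat16 keeps float32's range
try:
    to_fp16(1e38)         # far beyond float16's range
except OverflowError:
    print("overflows float16")
```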

    alea31415Mar 27, 2023· 9 reactions
    CivitAI

    Hi,

    This is very interesting. However, I am wondering if you could kindly provide some documented results to show that this model is indeed better to train on. The SD community seems to be filled with a lot of different claims that are sometimes mutually exclusive. In the end, they are probably all correct but just treat the problem from different perspectives.
    Therefore, it would really be helpful if you could elaborate more on this point. Thanks a lot :)

    ---------------------------------------------------
    Edit: What follows was posted as a 5-star review in March 2023. I pasted it here for visibility.

    I decided to give this model some credit, but I think it is important for users to know what it is good at and not good at.
    I did some experiments that can be found here
    https://civitai.com/models/26415 / https://rentry.org/LyCORIS-experiments#a-certain-theory-on-lora-transfer

    According to my experiments, this is a good model to train on if

    - You are using orange/anything series and you want to have the styles of the training images

    It is probably not the best model to use if
    - You actually want the style of other models, especially orange's, when using your LoRA on those models
    - For some reason you want to get the style of your training images on something like pastel-mix


    Of course you may reach different conclusions with a different setup, and you can always work around the weak points by adjusting various things, but this is the trend I observe in my experiments.

    Lykon
    Author
    Mar 27, 2023· 1 reaction

    Believe me, I will if I have time :D

    alea31415Mar 28, 2023· 1 reaction

    Actually, my theory is that by using a descendant model, that is, a model that contains the components of many models, you correct the "style vector" during training, and thus the final style is closer to the images in the dataset. On the other hand, an ancestor model like NAI is better when you want the style to switch when switching models. This is what I observed before.

    I will do some experiments to further investigate the hypothesis (by picking 5-10 models to train on). By the way, we do need a VAE during training (to cache latents). What do you use as the VAE when training with the no-VAE version? (I will pick the MSE version for now.)

    Lykon
    Author
    Mar 28, 2023· 1 reaction

    @alea31415 I'll wait for your results. It's super interesting.

    BaxterMar 29, 2023
    CivitAI

    So would you recommend also using this model for more realistic training data sets like a real person, or is this more geared towards anime characters and such? Great model btw! :)

    Lykon
    Author
    Mar 29, 2023· 1 reaction

    depends on how you tag stuff. I made some realistic loras trained entirely with artworks. See my Ganyu one

    BaxterMar 29, 2023

    @Lykon thanks for the response! I just attempted to train a LoCon with this model using Kohya but I keep getting an error, would you happen to know the reason? Training with other models seems to work fine.

    https://pastebin.com/iFbyiuRU

    ImUselessMar 31, 2023
    CivitAI

    Have you tried training with embeddings? If so, is the result good or does it overfit?

    Lykon
    Author
    Mar 31, 2023

    It's been a long time since I tried embeddings. Do that for me :)

    danGorstApr 2, 2023
    CivitAI

    Have you tried experimenting with training an anime version of a real character?

    For example if you would want to make John Wick/Keanu Reeves but in anime style - would that be possible? (meaning: would the end result resemble Keanu?)

    Lykon
    Author
    Apr 3, 2023· 2 reactions

    to do that I'd use NED. Check my Jack Sparrow lora.

    597458Apr 6, 2023
    CivitAI

    How would you do what you said here: "I usually find good results at 0.65 weight that I later offset to 1"? I can't seem to find any information about it in the Discord or in the Kohya READMEs.

    597458Apr 6, 2023

    By the way your checkpoint worked way better to bring over a style for a style LoRA I'm working on. Compared to NAI, the difference is night and day, very nice man! 0.65 seems indeed to be the sweet spot.

    Lykon
    Author
    Apr 6, 2023· 1 reaction

    @IronCatMan it basically means that scripts are tuned for precursor training, which is weaker. Training on this will cause your sweet spot to be around 0.65 regardless of epochs. At that point you can offset it to 1, so that users don't have to manually adjust the weight.

    597458Apr 6, 2023

    @Lykon Do you know of any resources that explain how you would do that?

    Lykon
    Author
    Apr 6, 2023· 1 reaction

    @IronCatMan it's just a merge at a lower weight (about 0.8 if you want to go from 0.65 to 1)

    597458Apr 6, 2023

    @Lykon Sorry if I keep asking questions :<, but what would you use for the models then? I would of course assume either model A or B needs to be the trained LoRA, but what about the other one? I would also assume that for the merge and save precision you just use the same ones you used during training?

    Lykon
    Author
    Apr 7, 2023· 1 reaction

    @IronCatMan you can use merge with only 1 model ;)
    Counter intuitive but it works

    597458Apr 7, 2023· 1 reaction

    @Lykon I had to use the command line, since the UI didn't allow for an empty model B. But I think it worked using ./venv/Scripts/python.exe "networks\merge_lora.py" --save_precision fp16 --precision fp16 --save_to "F:/LoRA datasets/bla/bla.safetensors" --models "F:/LoRA datasets/test/test.safetensors" --ratios 0.8

    Thanks for helping out, probably gonna use this checkpoint with this offset technique more :)
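    For anyone following along, the math behind this offset trick is just linear scaling: a LoRA's effect on a layer is weight × (up @ down), so rescaling the LoRA by a ratio r makes it behave at weight 1 like the original did at weight r. A minimal NumPy sketch (matrix names and sizes are illustrative, with the alpha/rank scaling folded in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rank-4 LoRA for a single 8x8 layer.
lora_down = rng.standard_normal((4, 8)).astype(np.float32)  # "A" matrix
lora_up = rng.standard_normal((8, 4)).astype(np.float32)    # "B" matrix

def lora_delta(up, down, weight):
    """The weight adjustment a LoRA applies at a given strength."""
    return weight * (up @ down)

# "Merging at ratio 0.8" rescales the LoRA's effective delta by 0.8,
# so the rescaled LoRA at weight 1.0 matches the original at 0.8.
scaled_up = 0.8 * lora_up
assert np.allclose(lora_delta(scaled_up, lora_down, 1.0),
                   lora_delta(lora_up, lora_down, 0.8))
```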

    PauloCoronadoApr 15, 2023· 6 reactions
    CivitAI

    So, do you think training LoRAs with AnyLoRA is better than using base SD 1.5?

    Lykon
    Author
    Apr 19, 2023· 3 reactions

    for anime, definitely

    Lykon
    Author
    Apr 30, 2023· 2 reactions

    @snow_ I disagree. All my style loras are trained with this.

    mramer723May 12, 2023

    @Lykon You are a #LEGEND

    sabrielxtouchsto2784Apr 24, 2023· 6 reactions
    CivitAI

    What's the blend?

    Checkpoint
    SD 1.5

    Details

    Downloads: 43,067
    Platform: CivitAI
    Platform Status: Available
    Created: 3/26/2023
    Updated: 5/14/2026
    Deleted: -

    Files

    anyloraCheckpoint_bakedvaeFtmseFp16NOT.safetensors

    Mirrors