AnyLoRA
Add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee ☕
For LCM read the version description.
Remember to use the pruned version when training (less VRAM needed).
Also, this model is intended mostly for training on anime, drawings, and cartoons.
I made this model to ensure my future LoRA training stays compatible with newer models, and to get a model with a style neutral enough to reproduce styles accurately with any style LoRA. Training on this model is much more effective than training on NAI, so in the end you might want to adjust the weight or offset (I suspect that's because NAI is now heavily diluted in newer models). I usually find good results at 0.65 weight, which I later offset to 1 (very easy to do with ComfyUI).
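To make the weight adjustment above concrete: applying a LoRA at a given weight means adding the low-rank delta to the base weights, scaled by a strength factor. A minimal sketch in NumPy, assuming the standard LoRA formulation (W' = W + strength * (alpha / rank) * B @ A); the function name and shapes here are illustrative, not from any specific library:

```python
import numpy as np

def apply_lora(W, A, B, alpha, rank, strength=0.65):
    """Return the base weight matrix patched with a LoRA delta.

    W: base weights, shape (out, in)
    A: down-projection, shape (rank, in)
    B: up-projection, shape (out, rank)
    strength: the per-generation LoRA weight (e.g. 0.65 in the prompt)
    """
    return W + strength * (alpha / rank) * (B @ A)

# Toy example with random matrices
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
A = rng.standard_normal((4, 8))   # rank-4 down-projection
B = rng.standard_normal((8, 4))   # rank-4 up-projection
W_patched = apply_lora(W, A, B, alpha=4, rank=4, strength=0.65)
```

Lowering `strength` (e.g. to 0.65) simply shrinks the delta, which is why a LoRA trained on a more "effective" base model often needs a weight below 1 at inference.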
This is good for inference too (again, especially with styles), even though I made it mainly for training. It ended up being very good for generating pics and it's now my go-to anime model. It also uses very little VRAM.
Get the pruned versions for training, as they consume less VRAM.
Make sure you use CLIP skip 2 and booru style tags when training.
Remember to use a good VAE when generating, or images will look desaturated. Or just use the baked-VAE versions.
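On the CLIP skip 2 tip above: "CLIP skip 2" means conditioning the image model on the second-to-last hidden layer of the text encoder instead of the final one (NAI-derived anime models were trained that way). A minimal sketch of the indexing, using stand-in layer outputs; the function name is hypothetical:

```python
def clip_skip(hidden_states, skip=2):
    """Pick the text-encoder layer to condition on.

    hidden_states: per-layer outputs, ordered from first to last layer.
    skip=1 means the final layer (no skipping); skip=2 means the
    second-to-last layer, as used by NAI-style anime models.
    """
    return hidden_states[-skip]

# Stand-in outputs for a 4-layer text encoder
layers = ["layer0", "layer1", "layer2", "layer3"]
penultimate = clip_skip(layers, skip=2)  # -> "layer2"
```

Training tools such as kohya's sd-scripts expose this as a `clip_skip` option; set it to 2 to match how this model (and booru-tagged anime models generally) expects to be conditioned.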
Description
NOT for training
NOT for txt2img
Only for inpainting and outpainting
FAQ
Comments (15)
A question here
how to make such a lora friendly model
Train it on a little of everything to get a neutral model. Generally speaking, you lose a bit of quality when using a LoRA on a model it wasn't trained on; in fact the loss can be significant in certain cases, depending on how different the two models are (photo models are not as compatible with anime models, for instance).
Sorry, new here.
I've tried generating Luffy from the generation tool on the example without changing anything and it generates a generic anime boy with short black hair, what am I missing?
Have you downloaded this LoRA: https://civitai.com/models/4219?modelVersionId=6331 ? Without the LoRA file, tags like wanostyle or <lora:wanostyle_2_offset:1> don't do anything, because there is nothing telling the model what those mean.
I wonder what this model's base model is? It doesn't look like any other NAI-like model.
What's the difference between the blessed version and the normal version? It's never explained.
I believe that refers to the VAE baked in?
it's literally called "blessed vae"
You make no reference to the fact that you also have some pruned versions on another account here. Meaning people won't know you have them unless they search for a pruned version... but why would they, when they are already here and can see what you have to offer? I found your pruned versions by accident. I think you should keep your stuff together. It also makes no sense not to keep all the feedback and good ratings on one account to help it grow. Just giving you my honest feedback.
Facts. I find it suspicious, so I went ahead and reported both accounts that are tied to this model.
I'm assuming you're talking about the versions uploaded by another account going by the handle "fp16_guy", in which case... that's a different account that just takes popular checkpoints and prunes/uploads them. It has no relation to this checkpoint's author. Educate yourself.
What's sad is that I've seen this exact same comment on other models. It's baffling how ignorant people can be.
Please tell me, which version trains LoRAs so well?
I think I'm already late but it may help other peeps here:
When generating images for your training dataset, use the "bakedVae (blessed) fp16 - Not Pruned" version to avoid desaturated images.
Then for training, use the "noVae fp16" version for less VRAM usage and better compatibility.
@warztafari thank you very much
Any version should be fine. Get the fp16 pruned one to use less VRAM.
I'm about to upload the blessed fp16 pruned so that you can also use that one more easily.
