AnyLoRA
Add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee ☕
For LCM read the version description.
Remember to use the pruned version when training (less VRAM needed).
Also, this is mostly for training on anime, drawings, and cartoons.
I made this model to ensure my future LoRA training is compatible with newer models, and to get a model with a style neutral enough to reproduce styles accurately with any style LoRA. Training on this model is much more effective compared to NAI, so in the end you might want to adjust the weight or offset (I suspect that's because NAI is now heavily diluted in newer models). I usually find good results at 0.65 weight, which I later offset to 1 (very easy to do with ComfyUI).
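For anyone who'd rather script this than use ComfyUI, here is a minimal sketch of loading the checkpoint with a LoRA at 0.65 weight using the diffusers library. The file names are placeholders, and the scale-passing mechanism shown applies to older diffusers versions (newer ones expose set_adapters()):

```python
# Hypothetical sketch: this checkpoint plus a style LoRA at 0.65 weight
# via diffusers. File names are placeholders for locally downloaded files.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "anyloraCheckpoint_bakedvaeBlessedFp16.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("some_style_lora.safetensors")  # placeholder LoRA file

image = pipe(
    "1girl, solo, smile, masterpiece",        # booru-style tags
    cross_attention_kwargs={"scale": 0.65},   # LoRA weight, as suggested above
    num_inference_steps=25,
).images[0]
image.save("out.png")
```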
This is good for inference too (again, especially with styles), even though I made it mainly for training. It ended up being super good for generating pics and it's now my go-to anime model. It also uses very little VRAM.
Get the pruned versions for training, as they consume less VRAM.
Make sure you use CLIP skip 2 and booru-style tags when training.
Remember to use a good VAE when generating, or images will look desaturated. Or just use the baked-VAE versions.
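As a side note on what CLIP skip 2 actually does: it conditions generation on the text encoder's second-to-last hidden layer instead of the last one. Below is a standalone sketch using transformers; the repo id is the standard SD1.x text encoder, and this mirrors (but is not taken from) any particular trainer's implementation:

```python
# Illustration of "CLIP skip 2": use the penultimate hidden layer of the
# CLIP text encoder as conditioning. Trainers such as kohya-ss expose
# this as a clip_skip setting.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokens = tokenizer("1girl, solo, smile", return_tensors="pt")  # booru tags
with torch.no_grad():
    out = encoder(**tokens, output_hidden_states=True)

clip_skip = 2                              # 1 = last layer, 2 = penultimate
hidden = out.hidden_states[-clip_skip]
# Most implementations re-apply the final layer norm to the skipped output.
cond = encoder.text_model.final_layer_norm(hidden)
print(cond.shape)                          # (1, seq_len, 768)
```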
Comments (62)
Is this model intended only for people making loras? Or is this a general model for everyone to use loras with? I'm not a lora creator, so I'm not sure which version I should download.
It's a pretty good model on its own; I left 3 examples without using loras.
It's a pretty standard anime model without a defined style, intended to be flexible.
The NOT PRUNED models are for using without making LORAs.
@andu not really. LoRA has nothing to do with pruned/full
@Lykon Thought you said this "Remember to use the pruned version when training (less vram and no baked vae)." :-)
@andu I meant training LoRA. My previous reply just meant that you can use loras on any version. And you can also train on any version. Using pruned for lora training is just more convenient.
@Lykon Gotcha. So https://civitai.com/models/23900?modelVersionId=28562 is the pruned AnyLORA version you recommend people use to train LORAs?
@andu yes, it's in the version description, on the right.
What does "blessed" mean?
I want to train a lora with a dataset of 47 3D model renders of a similarly shaped character, with the same angles but different styles. Should I use this model to train, or another one? What settings should I use: how many repeats and epochs, and what learning rate?
depends on how realistic the 3d is. If it's too realistic I suggest you try NED or DreamShaper. See my FF7 Remake style for an example.
Amazing work. Would you mind sharing the training settings, such as the number of images, learning rate, etc.?
I doubt they would do that. For whatever reason people don't like giving out their training 'secrets'. I dunno if it's some power trip thing or they just don't want to decrease their potential followers.
Lykon could be different but who knows. I personally don't see a point in keeping it private. 90% of people here don't give a fuck about commercial stuff and are probably here to view sexy anime girls
You can open a Lora in the webui (or a harder to read notepad) with the info button and pretty much have everything they used except the actual image database.
@turkey910 not at all, but I use the default settings in Lina's notebook. I usually don't reply anymore to that question because settings are not important unless you destroy them somehow.
@Lykon Ah I see, sorry if I was presumptuous. I've never heard of Lina's notebook though; could you share a link? Is it a colab like https://colab.research.google.com/github/hollowstrawberry/kohya-colab/blob/main/Lora_Trainer.ipynb#scrollTo=OglZzI_ujZq-
@turkey910 it's a colab, and pretty famous.
@Lykon Weird, can't find anything about it anywhere lmao. Honestly the HollowStrawberry one has been solid for me so far. Just made a Belle using your AnyLoRA and it looks great.
A friend who is training told me this model was very good for clean trainings
Training on this model is not more effective than on NAI and the promised compatibility with newer models is simply not true from my experience. I tried quite a lot of different loras that were trained on anylora with different newer and older checkpoints and the majority were pretty awful to use.
It seems like a step in the wrong direction to counter the dilution with another dilution, since I can't see a single new checkpoint having anything in common with anylora, so you're effectively training on an isolated mix.
Maybe there will be a better solution for checkpoint mixes in the future with merges, but currently you can choose to train on any mix and it will most likely produce good results if your settings are good, while losing the compatibility in the process.
NAI is just a normal SD finetune. No reason why it should be better. As long as your loras use the same caption format and the training data style is similar enough to the base model, it should work. You're probably just used to training on NAI. Anyway, I never claimed this model to be the best for everything. For example, it's not suited to training on realistic datasets.
Thanks for your feedback anyway. My character loras working on most models, including realistic ones, must be magic then. This model is now used as the default training model on various scripts after being reviewed by experts in the field, and countless people on every platform have used it with success. This card has over 140 reviews of people proving it.
I suggest you try these loras trained on AnyLoRA and report any compatibility issues you find in a comment, thank you:
https://civitai.com/models/78090/mecha-musume-gundam-mecha-slider-lora
https://civitai.com/models/5477/lucy-cyberpunk-edgerunners-lora
https://civitai.com/models/5662/nazuna-nanakusa-call-of-the-night-lora
https://civitai.com/models/57301/march-7th-honkai-star-rail-anime-realistic-lora
https://civitai.com/models/6213/dark-magician-girl-lora
Well, maybe I am doing something wrong, but there is a stark difference on my end between character or style loras trained on NAI and those trained on anylora when testing on different checkpoints. I am not saying you can't generate great images with this model, and I'd even say it's fine to test against epochs trained on NAI, but I don't see how you future-proof yourself by training on a model mix, since you're going to lose compatibility. The claim about training in the description doesn't seem substantiated, and I hope we don't end up with every lora being trained on a different checkpoint that you then need in order to make effective use of that lora.
I will give them a try. Just on a side note, the biggest issues I had were not with models made by you. Some of them probably had bad training parameters, but I doubt all of them did.
@datse Style loras are a whole different story. Training them on NAI means you'll only ever get accurate styles on NAI itself or mixes that dilute NAI very little. In general a style LoRA will work best on the model it's trained on (usually, but 2 of my AnyLoRA-trained styles work better on AOM2 or AnythingElse).
For characters, NAI might be better when you train on a bad dataset (e.g. one made only from anime screenshots), because loras trained on NAI tend to be very weak on most models, so you get a medium-good result with decent details. If your dataset is good, training on AnyLoRA can give you perfect character details that you can't reach with NAI and a bad dataset, and that lora will work on most models, including AbsoluteReality or DreamShaper (latest versions), but also REV and NED. From my experience at least.
Making a distinction between the various dataset types, style and character loras is important.
At the end of the day, this is a free tool: the thing I was using personally and decided to share with others. Learn to use it or ignore it.
@Lykon I've actually found this to work fine for training people and using that to generate realistic or illustrated images of that person with a variety of different models
@jdforsythe592 I use AbsoluteReality to train people
I noticed I can now use AnyLoRA on the dropdown in google colabs. I can still use LoRA trained using this model with others like Anything4.5 right?
yes, the point of this model is to maximize lora compatibility with most anime ones.
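To make that concrete, here is a rough sketch (not an official snippet) of loading one AnyLoRA-trained LoRA into two different base checkpoints with diffusers; all file names are placeholders for locally downloaded files:

```python
# Hypothetical compatibility check: the same LoRA file, trained on
# AnyLoRA, applied to two different anime base checkpoints.
import torch
from diffusers import StableDiffusionPipeline

lora_path = "character_trained_on_anylora.safetensors"  # placeholder

for name in ("anyloraCheckpoint_bakedvaeBlessedFp16.safetensors",
             "anything_v4.5_pruned.safetensors"):        # placeholder
    pipe = StableDiffusionPipeline.from_single_file(
        name, torch_dtype=torch.float16).to("cuda")
    pipe.load_lora_weights(lora_path)
    image = pipe("1girl, solo", num_inference_steps=25).images[0]
    image.save(f"compat_{name.removesuffix('.safetensors')}.png")
```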
If I want to train a semi-realistic art style, should I use this or checkpoints like DreamShaper? The art style is close to wlop and the like.
Depends on how strong you want the lora to be. DS is already semi-realistic; if that's too similar to your style, then the lora will be weak.
Does anyone know the difference between these versions?
the VAE
@Lykon Could you explain in more detail? I don't know how to choose between the first four versions.
@Mostima the vae is the only difference. Just look at the examples :)
It's just the difference between 1D, 2D, 3D, and 4K quality. Not everyone can use the latest version; it depends on your graphics card: below a 2060 use 2D, a 3060 uses the 3D version, and only a 4090 can download the 4K version, otherwise your stable-diffusion-webui won't run. Some 4K versions won't run even on a 4090. Computer setups in Europe and America are the most advanced in the world and can run any version. They don't use single graphics cards there; they use dedicated image-generation rigs with 7-10 4090 cards installed, supercomputers made of combined GPUs. Some versions are for professional users, such as advertisers and art studios; they don't use home computers but image-generation rigs. For home use, the 3D version is enough, it's already quite clear; no need to download 4K.
Pigeon-toed stance, I see. It's distinctive to women, but someday it will probably be adopted by men as well.
A noob question: for a baked VAE, do we need to set the VAE to automatic or none?
Thanks and btw great model
Also, which one should I download: baked blessed or baked ft-mse?
@Aleximercer69 I like baked (normal), but you decide.
Yes, set to auto.
I have a question, will the images generated using this model be copyrighted?
It does not depend on the model but on the content. It doesn't matter if an image is made with a brush, Photoshop, or AI: if you depict content that's already copyrighted, there's no way it belongs to you.
Internationally: an artistic work can only be protected by intellectual property if it has been created by a human being. The boundary between what you created and how much was just the tool is blurry.
The use of AI in creative processes does not affect the protection of the resulting work under the copyright umbrella if, and only if, said AI has been used as a mere tool. That is to say, only in those cases in which the margin of human intervention is such that there is no doubt a natural person is directing the final result will the creation generate intellectual property rights.
@eypacha thanks for the info dude
The problem is the subject, not the image: if the subject or character has a copyright you can't use it for commercial use.
@markino That is dead wrong. Search Pixiv for images of characters generated by AI. People sell those images and no one does anything to them.
@Saosaki posting fanarts on pixiv is not forbidden or copyright infringement. Selling merchandise of pikachu is. AI is not the issue. An image made with AI of an original subject is not gonna be copyright infringement. But if you make a pikachu and sell it, it is. You might get away with it, but that's the law.
Pretty sure copyright applies when a human being has enough involvement in the art. So basically, if you paint over the image enough to change it, it can be copyrighted by you to an extent.
In my opinion, the randomness of generating with the model tool creates new material out of the model author's material. If the material can be created again, or polished by a team, efficiency is significantly improved; but the premise is that it needs soulful human design and an organizer to guide the creation. I hope that's the result, not a random flood of products...
@Saosaki And? There are people who sell copyrighted images; people sell things you "can't legally" sell all the time. How many fake brand products are everywhere, how many t-shirts with copyrighted cartoons, movies, videogames, etc.? If you take @markino's statement in a hyperliteral way, it is of course incorrect: you "can" even kill people.
@VitaminS Randomness does not exist in deterministic systems. Algorithms are functions, and all functions accept inputs and return an output: same input, same output, always. Computers use pseudorandom number generator (PRNG) algorithms to generate the initial image noise. The proof is simple: all images are reproducible. If you use the same checkpoint, the same parameters, the same prompt, and the same seed, you will always get exactly the same image. People who sell images without posting the generation parameters are not adding randomness to the results.
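That reproducibility claim is straightforward to check. A minimal sketch with diffusers, assuming any locally available SD1.x checkpoint (the repo id below is illustrative):

```python
# Minimal determinism check: same checkpoint, prompt, parameters, and
# seed give a pixel-identical image on the same hardware/software stack.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any SD1.x checkpoint
    torch_dtype=torch.float16,
).to("cuda")

def generate(seed: int):
    g = torch.Generator("cuda").manual_seed(seed)
    return pipe("1girl, solo", generator=g, num_inference_steps=20).images[0]

a, b = generate(1234), generate(1234)
print((np.asarray(a) == np.asarray(b)).all())  # True on an identical setup
```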
@eypacha This isn't entirely true. There's an ongoing issue that shows up all the time on the Automatic1111 and pytorch github issues pages where using xformers (or maybe it's fast attention? I'm on an RDNA3 card so I don't have access to any of that stuff yet) produces different images for the same parameters on different NVidia GPUs because of order of completion differences with varying CUDA core counts + greedy computation of results causing rounding errors. With the resources consumer GPUs have vs. the size of some of the matmuls happening you'd run into a big speed hit ordering computation of matrix tiles the same way every time. I got the feeling nobody really cares since we're talking about generating weird images from models, not calculating the trajectory and fuel use parameters for a manned trip to Mars. I certainly don't.
Then there's the issue of FP16 eventually degrading when low vram modes are enabled since copying FP16 from CPU->GPU and back loses the "invisible" precision. The Comfy docs mention to restart the server every few hundred generations or so if you're running in FP16 and lowvram modes or results will start to become broken.
Actually I think I've seen the option to use random.org's true random number generator based on atmospheric noise to seed the latents, or if you don't feel like connecting to the internet it would be easy enough to write something to pull numbers from the RDSEED instruction on x86-64, which pulls numbers off a non-deterministic hardware random bit generator that spits out 3Gb/s of garbage created from sampling silicon thermals, and hashes it just to be safe. There are no repeating sequences there unless they're... well... random.
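For what it's worth, nothing exotic is needed to get non-deterministic seeds in a script. A sketch using Python's standard-library CSPRNG, which mixes in hardware entropy such as RDSEED/RDRAND where the OS supports it (names and shapes here are illustrative):

```python
# Sketch of seeding latent noise from a non-deterministic source.
# secrets draws from the OS CSPRNG; record the seed if you ever want to
# reproduce the image later.
import secrets
import torch

seed = secrets.randbits(63)        # stays within torch's accepted seed range
g = torch.Generator().manual_seed(seed)
latents = torch.randn((1, 4, 64, 64), generator=g)  # SD1.x latents at 512x512
print(f"seed={seed}")
```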
Then there's the thing where GPU and CPU "fast" versions of transcendental functions can vary... not to mention what backend you're using. I don't think these show up much in AI models except the schedulers, but I'm not looking at any more python than I have to in order to get something working to find out for myself. I can't for the life of me get equivalent results in Comfy and Shark for the same prompt, seed, and image. Comfy uses DirectML on AMD with CPU offload for a couple of samplers (aka the ones to avoid), whereas Shark uses a heavily optimized llvm-mlir lowering that runs SD2.1 models faster on RDNA3 than any NVidia hardware can run 1.5 models (including the A100, I haven't seen tests from H100 yet) at the same resolution... assuming you only want to use a single LoRA, are willing to restrict yourself to the 3-4 sizes with tuning files, and are willing to wait 5 minutes to compile a new set of files if you change anything. I decided that until MIOpen is finished so PyTorch can start using ROCm on Windows, I'll put up with complicated workflows I can do really cool stuff with in ComfyUI that take 60s per image vs the ability to spit out images every 1.2s but with very little support for changing much of anything.
Back to the actual topic: the US Copyright Office already refused to issue copyright for AI generated images on that one comic book project, and I'm pretty sure they already outright refused to issue copyright for it in the UK or EU. I'm sure somebody who generates with a model then extensively touches it up in photoshop would be covered most of the time, but I've had a couple of models spit out huge sequences of images for the same prompt that were clearly taken by the same photographer at the same location from multiple angles of the main subject (a seal). I told it to generate a "seal-cat with tabby fur" so it was effectively slapping fur and a deformed cat face on the photography. Unlike some cases where the model spits out random copyright / watermark nonsense, these were all emitting the same watermark warped in exactly the same way. In some countries doing that by hand would be enough to be considered a derivative work, but given how easily I spotted it I wouldn't have attempted to sell those without rights from the original photographer, whose identity can't be determined easily.
That or the series of cute cat girls in skin-tight bathing suits you generated with a realistic anime model ended up with a direct copy of somebody's 7 year old daughter's face on it, and their dad finds it, pulls your address from the database of everybody in the known universe + address + emails that's floating around thanks to that MOVEit breach, loads up his .338LM, and hops in the pickup truck to drive cross country and blow your head off while parked 8 blocks away. For most people running stuff like this between gaming these days, the obnoxious pulsating RGB lights in their computers will be enough to silhouette them against closed blinds, so he'll have an easy shot as long as he only drinks one fifth of bourbon on the road trip instead of his usual two. :P
That's the real problem with it. Without reverse image searching the entire archive of everything that trained the model any given generated image could just be the original unmodified or barely modified image(s) depending on your prompt and how it was interpreted along with your other settings and you won't know until you get slapped with a treble damages suit on the going rights managed rate from a photographer or artist halfway across the country for selling "barely changed versions of their work with the watermark badly erased".
Until somebody trains a model from scratch that only uses creative commons / commercial use allowed and purchased royalty free images commercial use / copyright of model results will be a problem. That's why they don't provide the training image set for download anywhere, just the big list of URLs for you to fetch yourself. They're not allowed to distribute half of it. I'd be willing to bet, having been online for over 30 years now, that a run of forensics software over the drives they stored the training images on before sorting them would get Chris Hansen showing up asking them to have a seat. The SD models got away with incorporating as much copyrighted and probably some outright illegal imagery that the auto-filtering missed as they wanted to (they could have at least honored images with copyright set in the EXIF, y'know) because they were for "research purposes"... that whole thing falls apart when you start trying to do commercial things with it or anything derived from it.
Do a good painting based off your AI generated art with real acrylics and maybe I'd buy something from you (and we should really stop calling these gigantic state-machine frozen networks "AI" until they're capable of continuously training themselves based on feedback and/or extra image inputs as images are generated, and even then there's nothing like intelligence or thought happening at this scale).
That's just my take. Just have fun with it. You don't have to make money off of everything. It's like the damn youtube "content creator" thing. There was far more watchable content on youtube when people were just making funny crap for fun and outnumbered the few corporate-sponsored channels. Now it's an army of identical used car salesman lookalikes who desperately need voice lessons giving terrible advice to try to make pennies off their video hits and an amazon referral for some cheap crap they're recommending. It's gone further downhill than The History Channel and Nick Nolte combined.
@GnomeExplorer I understand your point and I agree with a lot of what you say. I hope you agree that classical computation operates within deterministic systems. When I refer to functions, inputs, and outputs, I mean fundamental mathematical concepts, beyond programming languages. The differences between different GPUs and the other factors you mentioned are variables in this system.
Yes, I'm familiar with random.org and how they use sources of atmospheric noise to achieve a more genuine degree of randomness. This could be considered "truly random." However, the Newtonian premise would still hold that if we could replicate the same initial conditions, we would get the same mechanics and thus the same results.
Persi Diaconis, a mathematician known for his contributions to the study of randomness, has shown that coin tossing is not strictly random but follows the laws of mechanics: when the physics of the toss is parameterized, the same result can be reproduced on every toss.
On the other hand, Richard Feynman would argue that nature has quantum aspects. Following this line, there are programs like ANU QRNG that generate random numbers by measuring quantum fluctuations of the vacuum. This approach also provides an API for utilization.
I see that you're interested in this topic. I'm sharing this not to argue, but because if there's one topic I study and am passionate about, it's randomness. Here's a link to a compilation of material on the subject that I put together: bit(.)ly /randompapers (civitai blocks the megafolder link).
Whether or not randomness exists is a philosophically beautiful debate. As Robert R. Coveyou would say, "The generation of random numbers is too important to be left to chance."
Thank you for your contributions.
Give me an anime model: 2D, 2.5D, 3D...
Lykon: Yes!!!!
Thanks for sharing. Regards
Details
Files
anyloraCheckpoint_bakedvaeBlessedFp16.safetensors
Mirrors
anyloraCheckpoint_bakedvaeBlessedFp16.safetensors
Anylora.safetensors
AnyLoRA_bakedVae_blessed_fp16.safetensors
AnyLora.safetensors
AnyLoRA.safetensors
AnyLoRA_bakedVae_blessedVae.safetensors
nylr.safetensors
Available On (1 platform)
Same model published on other platforms. May have additional downloads or version variants.