PLEASE READ DESCRIPTION
Deprecated model; consider updating to https://civarchive.com/models/147363/requiem, which is also licenseless.
Also, we now have a sponsor!! You can use the Liberty model on their website without any hardware or installation requirements. You can also create fun animations on Mage with my model!: https://www.mage.space/
Extremely NSFW-biased model!!! But awesome at SFW too. Use it at your own risk of getting unprompted explicit images.
This is not an easy-to-prompt model, nor the best one for beginners.
According to one user: "If you have the model-keyword extension make sure you uncheck it. It appends all the triggers words and results in complete nonsense results. Was getting super frustrated that even copy/pasting prompts gave me trash".
Check any info or questions at our private Discord here: https://discord.gg/z88HpDwbGq
MODEL (there are two possibilities):
Liberty-Main: You should be using this model. It might not be the easiest one to prompt, depending on your style, but you'll get used to it soon, and it is the most powerful one.
Liberty-BadClip: This version uses a broken CLIP model, broken in the same way the aEros CLIP was, so outputs are very different from the main version. If you really know what you are doing, really don't want to change your prompting style from aEros, or are getting generally bad results with the main version, you could try this one. Keep in mind that some UIs have problems with broken CLIPs, and that I won't continue or support this version of the model.
VERSIONS (both models have 3 versions; Main also has a pix2pix version):
Standard: Intended for general use. It has the vae-ft-mse-840000-ema-pruned VAE baked in. There's no need to use any additional file.
Inpainting: Intended for the inpainting section of the img2img tab. It is mixed with the original SD 1.5 inpainting model and is much more coherent with masked areas. Only select it for that purpose. There's no need to use any additional file.
Pix2Pix: Main version exclusive. Intended to be used with the pix2pix option of the img2img tab. It is mixed with the original SD 1.5 pix2pix model. Only select it for that purpose. There's no need to use any additional file.
Training: Intended for the Train tab, for embedding creation. It has the original SD 1.5 VAE baked in, so it doesn't deep-fry embeddings. Only select it for that purpose. There's no need to use any additional file.
All of the files are provided in ckpt and safetensors format for your convenience ;)
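If you'd rather use the files outside a webui, here is a minimal sketch of loading the Standard checkpoint with a recent diffusers version; the local filename is a hypothetical placeholder, and since the VAE is baked in, no extra file is needed:

```python
# Minimal sketch: load a single-file SD 1.5 checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "liberty_main.safetensors",  # hypothetical path to the downloaded file
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "mps" on Apple Silicon, "cpu" otherwise

image = pipe("a photo of a cyborg woman", num_inference_steps=30).images[0]
image.save("test.png")
```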
ABOUT:
Liberty is freedom. A merge of over 23 other models with a methodical, careful and genuine approach. Check the 'CREDITS' section for the full list.
Freedom to prompt art or photo or both, landscapes or backgrounds or interiors, people or entities or scenes, stiff poses or movement or even mouths and facial emotions, SFW or nudes or even hardcore sex. I tried to make it as versatile as possible and merged it with half of CivitAI to get the most out of every free model out there.
And it is free to use for any open source purpose, commercial or not. All the models I used were licenseless when I grabbed them, although some have changed licenses afterwards. It is also much more modular in its development process, which means that if any problem arose, I could rebuild it much more quickly to avoid getting chained to a license or other kinds of problems.
HOW TO USE:
This model does not need any of its trigger words. They are a tool, and the knowledge behind those trigger words is known more deeply and accessed better directly. So 'a photo of a cyborg woman' will work better than 'cyborgdiffusion, photo of a woman', but you can also try 'cyborgdiffusion, photo of a cyborg woman' or even 'cyborgdiffusion, photo of a knight woman' for special and unique effects. If you want an idea of what they can help achieve, visit the 'CREDITS' section to check the models they come from.
I mainly use natural language for my tests. You have an example prompt screenshot available to build from if you want. There's a more advanced prompting guide at my Ko-Fi: https://ko-fi.com/promptmarinersai
It is merged with UnstablePhotoreal (but with much less weight than in aEros), and it reacts well to its prompting language; you don't need to use comma-separated tags: https://docs.google.com/document/d/1-DDIHVbsYfynTp_rsKLu4b2tSQgxtO5F6pNsNla12k0/edit
As with my other models, I have tested most intensively how good it is at erotic and artistic nudes. But this time I have also tried many other types of prompts, some of them borrowed. Keep in mind that some uses are edge cases: for example, it is not as good at hardcore sex scenes as far as I know; you'll get more consistent and effective sex scenes from other specialised models. And it won't make the best anime art out there either.
It may not be the easiest model to use, give it some time to get used to it :)
CREDITS (Many thanks, in no particular order, to all the authors of these awesome models, now part of Liberty; without them this wouldn't exist):
UBER v1.2: https://civarchive.com/models/2661/uber-realistic-porn-merge-urpm
Analog Diffusion: https://civarchive.com/models/1265/analog-diffusion
Artificial Journey: https://civarchive.com/models/5279/artificialjourney-v10-768
Cyborg Diffusion: https://civarchive.com/models/1365/cyborg-diffusion
David Tennant: https://civarchive.com/models/4125/david-tennant
Elden Ring Style V3: https://civarchive.com/models/5/elden-ring-style
GTM V1: https://civarchive.com/models/4525/galaxytimemachines-gtmultimateblendv1
Hassan V1.4: https://civarchive.com/models/1173/hassanblend-1512-and-previous-versions
Knollingcase: https://civarchive.com/models/1092/knollingcase
Myztery V2: https://civarchive.com/models/3947/myztery-a-class-based-fantasy-model
Postapocalypse: https://civarchive.com/models/1136/postapocalypse
Sci-Fi Diffusion: https://civarchive.com/models/4404/sci-fi-diffusion-v10
SynthwavePunk V2 (do not mistake it with the licensed one): https://civarchive.com/models/1102/synthwavepunk
Project Unreal Engine 5: https://civarchive.com/models/4752/project-unreal-engine-5
Unstable PhotoReal V0.5: https://civarchive.com/models/3753/unstablephotorealv5
Vintedois: https://huggingface.co/22h/vintedois-diffusion-v0-1
Wavyfusion: https://civarchive.com/models/1196/wavyfusion
AIroticArts' Penis Model: https://civarchive.com/models/1245/airoticarts-penis-model
Homoerotic V2: https://civarchive.com/models/1256/homoerotic
SimpMaker 3K1: https://civarchive.com/models/1258/aloeveras-simpmaker-3k-series
Let's all have fun and play together!
Description
This is the version you should be using for training embeddings:
Compatible VAE is included in the model.
CLIP is fixed.
Comments (65)
Just tried it for a few hours; I like it, it makes beautiful portraits and nipples finally look good.
I have two questions. First, could you share the prompts you are using in the examples?
Second, I saw in your UI, next to the checkpoint selection, that you have other fields, like one to select a VAE. How did you do it? I'm also using A1111.
The screenshot with parameters was shared to show you an example of a prompt. Isn't it obvious? =)
About the extra options: there's a setting in the Automatic webui where you can list the config options you want to have on the main page :)
@aine_captain Can you share these options, please? I'm new but learning fast, I hope :)
@hoblin That wasn't the question, but thank you for sharing some examples with your own prompts; that helped me a lot.
@hypopo Settings → User interface → Quicksettings list
You can add any settings there. I'm using this: sd_model_checkpoint, sd_vae
@hypopo These are the ones you see in the screenshot: sd_model_checkpoint, sd_vae, sd_hypernetwork, CLIP_stop_at_last_layers
@hoblin Thanks
@aine_captain Thanks
At least one sample prompt from the original renders would be great as a starting point.
EDIT: I've seen the sample prompt in one of your renders, OK. Curious about the prompt of the first render: really good.
Hahaha, ty!! Sadly it is part of my master-prompt thing that I don't reveal. The guide to getting similar results easily is Patreon content.
@aine_captain OK, Thanks anyway.
It is an awesome model for sure please download it hehe 👍🏻
Hey! this is Roi from Mage. Can you add me on Discord? (Roi#2641) Couldn't find your Id
I'm getting an error when loading the Inpainting model via InvokeAI. Any thoughts? Main works great! It's my first model and it's already producing great results.
Hi there! Sorry, I have just tried both the ckpt and safetensors versions in Automatic1111, compared them to the Main version, and inpainted the same image with all three, and they all worked. The model now has the VAE, CLIP and everything baked in, so I think the fix has to happen on InvokeAI's side. Or, if I need to do something else to make the model compatible, I'd need someone to explain to me what to do in layman's terms :)
I don't use Invoke :S
@aine_captain All good! I've since switched to Automatic1111. Everything works great! One quick question, however: it seems like eyes often come out beady, white or entirely black, and no matter what I write as a prompt it's sort of hit or miss. Do you have any tricks for getting clear faces? (I'm on Apple Silicon and can't get CodeFormer to work :c)
@qrekin47 I don't know what you mean by Silicon. But if getting CodeFormer to work means using face restoration, I don't use it much. Some of the pics I uploaded use it because I was too lazy to search for great faces without it, since after many delays I rushed the launch of the model a bit (not the development, but everything else that was not the 'technical' part).
In general, without face restoration not all faces end up good, but they are much more detailed and versatile (I dare you to get a good open-mouth shot like the ones I got using face restoration).
So I'd say it is a matter of getting used to prompting good faces. I usually write something like 'with detailed face details and with clear eyes', and similar variations.
Give them a try and build from there.
Excuse me if it's a dumb question, but is it optimal to use this model with Stable Diffusion 2.1?
It is based on SD 1.5; you cannot merge SD 2.x and 1.x models, afaik.
@aine_captain Thanks! Most of my images were distorted and I finally found the reason why.
Also, you might be interested in this: I've had limited success with this model on 2.1. For example, I copied your prompt for the redhead wearing copper armor to test the model, and it turned out great (I can send you a picture on Discord if you want). Perhaps one of the reasons it hasn't worked for me most of the time is that the DPM++ 2M Karras sampler is not available on SD 2.1.
@frozemyass There's something I'm missing: what do you mean by using it with SD 2.1?
Did you merge it?
@aine_captain I simply downloaded liberty_main.ckpt and dropped it into a models folder for ckpt files, and it worked.
@frozemyass Then you are using it in the Automatic1111 webui (or another UI), not 'on SD 2.1'. SD 2.1 is another model, like Liberty is.
@aine_captain my bad then
I keep getting unrelated results compared to other models. What am I doing wrong?
prompt:
(ana de armas:0.7), professional photo, dslr, (ana de armas:0.7), brown hair, freckles, perfect nose, fringe, green eyes, space clothing, space suit, (glowing dragon logo on chest:1.2), (in spaceship:1.2), metal walls, futuristic clothes, (wearing spiked orange dragon armor, blue dragon belt, dragon armor pants, dragon chokers, glowing spiked dragon gloves:1.2), action scene, dramatic pose, medium_breasts:1.2, perfect face, perfect eyes, pores, soft light,strong body, intricate detail,sharp focus, detailed background, masterpiece, highest quality, 4k, thick thighs, strong legs, tall, High detail RAW color, feminine, highly detailed, white neon lights, skin pores, smile, (high detail hair), (sharp body, detailed body), gorgeous face, real, fit:1.2, cute,<hypernet:anadearmasPt_v1:0.3>
negative:
digital art:1.2, lowres, low quality, twisted,unappealing,uneven,unprofessional,draft,fake face, fake,uneven body, unnatural face, plastic face,out of focus:0.7,out of frame:0.7, poorly drawn, crippled, crooked, broken, weird, distorted, erased, cut, mutilated, sloppy, ugly:1.2, pixelated, bad hands:1.2, aliasing, poorly drawn, sloppy, over exposed, over saturated, burnt image, sloppy, fuzzy, poor quality, pixelated, sleepy, closed-eyes, pixelated, ugly, bad anatomy:1.2, hideous, deformed, mutant, sloppy, poorly drawn, poorly detailed, smudged, sketch, pencil, doll, plastic, disfigured:1.2, close up:1.2, topless, nude, naked, cleavage, tits, belly,
Liberty can be different to prompt than other models; it depends. The images you are comparing against can definitely be obtained with Liberty, since I've got plenty of similar ones. Either readapt your prompting or (not recommended) try the BadCLIP version and see if you like that prompting style more, since it is more similar to aEros.
Take a look at my prompts from the first review; you'll get an idea of how Liberty prompting works. It's not hard. If you don't want to change your prompting style, there is a version of Liberty with a broken CLIP; it will behave as you expect. Take a look at the description, it has all the info you need. ⬆️
Would love an idea of what prompts got that amazing redhead standing in the city. In particular, I've been trying to find a way to get consistent pubic hair, not to mention pretty redheads, and it's been a struggle with other models. I'm not finding the right prompts for this one yet either, it seems, but I know they're in there somewhere!
It's interesting how everybody asks for prompts but nobody cares to leave a review.
@hoblin I would love to leave a review if I knew how well the model works. Interesting how everybody wants a review left by people who haven't figured out how to make the model worth bothering with yet.
@hoblin Yes, thanks Hoblin for sharing yours. I got that "natural language" works better than using keywords. But even in my mother tongue I've never been good at describing a face or an attitude precisely, so imagine in English; and also consider that Captain's mother language is Spanish :)
At least, what would you recommend to get a realistic photograph and not a drawing?
I don't mind sharing specific parts of the prompt. The truth is that I rarely keep the specific prompts for the images I make, because they always come from a well-curated Master Prompt, from the angel to the redheads.
For pubic hair, what works best for me now is "with a natural hairy pubes", but iterations such as "with a natural pubic hair" and other similar phrasings work too.
Other parts that could interest you for that image were "at a cyberpunk street food market at a neon night", iirc. I also used hires fix there.
Keep in mind that the reason we insist on natural language and keeping prompts simple is that there are no magic words. If your prompt doesn't have a good structure, there's no magical tag that will get you good pubic hair; and if your prompt is well structured, sure, there will be optimal tags, but almost anything will work.
@aine_captain Thanks! And I can't even imagine trying to mess around with Stable Diffusion in Spanish, despite the fact that I speak it fairly well and understand it even better. Ni me lo puedo imaginar. Hats off to those of you doing it in your second language!
@hypopo Hello, I'm a Russian-speaking Ukrainian =) Good prompting comes with practice. I even created an app to help me experiment with different words. And, given that I love to share my findings, I appreciate the right of others not to do so. It would be great if it helps somebody else improve their skills: https://github.com/hoblin/prompt-master
@hoblin I'm trying to install your app, but I can't run the Prompt Master app; http://localhost:8080/ is not accessible, with the message ERR_CONNECTION_REFUSED.
I followed all the installation steps except the third one, "Follow the instructions in the yfszzx/stable-diffusion-webui-inspiration repository to generate previews for the words in the prompts", as it was a bit confusing and I really don't know what to do there!
Any idea?
@hypopo Try to follow the instructions without exceptions. There is a thread on the UD Discord which might help.
@hoblin Well, I gave up on your app, but I found a workaround with the wildcards extension and text files filled from https://describingwords.io/
I use a generic prompt with the wildcards, then I refine the ones I like.
It's worth a try if you haven't already ;)
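For illustration, a rough sketch of what the wildcards trick boils down to, assuming a hypothetical file adjectives.txt with one word per line (e.g. filled from describingwords.io); the actual extension does this inside the webui:

```python
# Replace each __name__ token in a prompt with a random line from name.txt.
import random

def fill_wildcards(prompt: str) -> str:
    while "__" in prompt:
        start = prompt.index("__")
        end = prompt.index("__", start + 2)  # matching closing marker
        name = prompt[start + 2:end]
        with open(f"{name}.txt", encoding="utf-8") as f:
            words = [w.strip() for w in f if w.strip()]
        prompt = prompt[:start] + random.choice(words) + prompt[end + 2:]
    return prompt

print(fill_wildcards("professional photo of a __adjectives__ redhead woman"))
```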
@hypopo So many people gave up on installing Prompt Master that I decided to convert it into a web service =) Stay tuned!
So, 2 GB!.. Can't it be a LoRA model?
To be added on top of another model, of course! Cheers, ty!
A LoRA? A merge of 23 other models? This is not a single-style thing; it is a complete checkpoint capable of many styles, from photo to art, interiors or painting.
Say, does anyone know a way of making Liberty generate consistent characters?
So you could see the same person on the beach, or in a boat, for example.
Use the same seed and the same description of the subject. I've been successful doing that to keep the same character in the image. Too many changes to the part of the prompt describing the character may change it, and changing the seed will absolutely change the character. The sampling method too, for that matter.
Liberty is nothing special compared to other models in this regard. Same seed is the cheapest way, but it is harder to get consistency.
However, the Liberty Training version was curated precisely for training embeddings, and I had the specific goal of being able to take Liberty-generated characters and train them IN Liberty to learn their identities.
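To make the same-seed trick concrete, a minimal sketch with diffusers (the filename is a hypothetical placeholder); only the scene part of the prompt changes, while the seed and the character description stay fixed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "liberty_main.safetensors", torch_dtype=torch.float16
).to("cuda")

character = "a photo of a young redhead woman with freckles and green eyes"
for scene in ["on the beach", "in a small wooden boat"]:
    # Re-seeding before every image keeps the initial noise identical,
    # which is what nudges the model toward the same face.
    generator = torch.Generator("cuda").manual_seed(1234)
    image = pipe(f"{character}, {scene}", generator=generator).images[0]
    image.save(f"redhead_{scene.replace(' ', '_')}.png")
```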
@aine_captain While I'm asking dumb questions, is there any difference between the pickle and the safetensors files?
@vegetarian_crocodile Pickle versions can contain malicious code; safetensors files are 'safe'. But some people need the pickle versions for some programs they use.
@aine_captain Oh wow, I didn't know that! Thanks a ton for telling me! Hmm, is there some way to inspect these pickle files for scripts?
@vegetarian_crocodile Not sure if linking to HF or even GitHub is kosher with the rules here, but Google "pickle scanner" and you'll find a variety of links to such tools, along with tutorials on how to install and use them!
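If you'd rather peek inside a .ckpt yourself without executing it, here is a minimal sketch using only the Python standard library; a .ckpt is usually a zip archive with the pickle stream stored as data.pkl, and pickletools.genops disassembles it without running anything (the filename is a hypothetical placeholder):

```python
import pickletools
import zipfile

def list_globals(ckpt_path: str) -> None:
    with zipfile.ZipFile(ckpt_path) as zf:
        pkl_name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
        data = zf.read(pkl_name)
    # GLOBAL opcodes name the modules/classes the pickle would import;
    # anything outside torch/collections/numpy deserves suspicion.
    # (STACK_GLOBAL builds the name on the stack, so its arg prints as None.)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "STACK_GLOBAL"):
            print(opcode.name, arg)

list_globals("liberty_main.ckpt")
```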
What's the recommended image resolution for this model?
I usually use 512x768 without hires fix, and anything under that should be alright. With hires fix, most resolutions should work.
Love that David Tennant made his way in there.
It is actually a decent sci-fi model; that's why I added it. It gives an interesting understanding of light beams, etc.
(And I love Dr. Who).
Wow, what a blend! Thanks. Would you please consider sharing the prompt for the promo picture of the redhead at the market?
Please read the answers to this same question in other comments. Thank you.
How do you merge your models? More specifically:
1) How do you pick your weights for each model merge?
2) How do you test whether the merge worked?
3) If you merge models A, B, C, D and E in that order, for example, and you use a weight of 0.5 for each merge, is it possible or likely that things model A knew will be forgotten or rarely used, since all the other weights were so high?
Thanks
Good question, but I don't think he'll tell you. You ask too much
The way I merge is really weird. It has been the same since REA, aEros, and now Liberty... and I do lots of tests and rollbacks...
I literally cannot explain in a simple post how it is done, nor is it anywhere near as simple as merging the weights once.
For example, say I try to add Cyborg Diffusion to the already existing merge. I make a first pass, merging with add-difference at 95%, 75%, 50%, 25% and 15%, and test the results against a few controlled-seed 4x4 grids of the same prompts used before the merge. Then I discover that Cyborg Diffusion breaks a lot of anatomy knowledge at any of those ratios and only manifests its good properties above a 25% addition... So I discard that branch and start a separate merge where I add a good anatomical yet artistic model to Cyborg Diffusion, such as GTM or Hassan or even F222, at different weighted-average ratios, and try to find the sweet spot where Cyborg's anatomy problems are corrected yet it keeps most of its properties. That new merge is then what I try to add back to the main merge.
Afterwards, by the end of the process, if the main merge has forgotten certain concepts, I make a new 95% MAIN model / 5% Cyborg Diffusion weighted-average merge to make it remember what it once knew...
However, all of this is obsolete now that we have block merging, and that is a game changer...
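For readers curious about the mechanics, a simplified sketch of the two classic merge modes mentioned above, weighted average and add-difference, operating directly on state dicts. File names are hypothetical placeholders; the webui's checkpoint merger does the same plus a lot of bookkeeping (e.g. skipping non-float tensors such as position ids, which this sketch ignores):

```python
import torch

def weighted_sum(a: dict, b: dict, alpha: float) -> dict:
    # (1 - alpha) * A + alpha * B, tensor by tensor
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a if k in b}

def add_difference(a: dict, b: dict, base: dict, alpha: float) -> dict:
    # A + alpha * (B - base): graft what B learned relative to its base onto A
    return {k: a[k] + alpha * (b[k] - base[k]) if k in b and k in base else a[k]
            for k in a}

main = torch.load("liberty_wip.ckpt", map_location="cpu")["state_dict"]
cyborg = torch.load("cyborg_diffusion.ckpt", map_location="cpu")["state_dict"]
sd15 = torch.load("v1-5-pruned.ckpt", map_location="cpu")["state_dict"]

# e.g. test a 25% add-difference injection of Cyborg Diffusion
merged = add_difference(main, cyborg, sd15, 0.25)
torch.save({"state_dict": merged}, "merged_test.ckpt")
```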
@aine_captain Thanks for the answers, that's what I thought. The details you provided really help.
What is block merge? How do you use it in the Automatic webui, and why is it a game changer?
Do you know why the Stable Diffusion 1.5 ckpt model and the 2.1 ckpt model give pickle warnings on Hugging Face? Why do official models have pickle warnings? Obviously the safetensors files are the ones to use, but I'm just curious if you know. The 1.5 and 2.1 ckpt files from Hugging Face must be safe, though, right?
@goodfun Block merging means merging only certain layers of the model while preserving others, which lets you keep certain concepts from a model without adding others. There are extensions to do this... but I'm still learning, so I cannot recommend one yet; I'm using one called SuperMerger right now, but I'm still running many tests.
I don't use Hugging Face except to grab a file from time to time... so I cannot tell you much more.
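A toy sketch of the block-merge idea (extensions like SuperMerger expose it with per-block sliders): keys in an SD 1.x state dict are prefixed by UNet block, so you can merge different blocks at different ratios and leave the rest untouched. The ratios below are made-up examples, and the "input blocks carry composition, output blocks carry detail" reading is community folklore rather than an established fact:

```python
def block_merge(a: dict, b: dict, ratios: dict, default: float = 0.0) -> dict:
    merged = {}
    for k in a:
        alpha = default
        for prefix, r in ratios.items():
            if k.startswith(prefix):
                alpha = r
                break
        merged[k] = (1 - alpha) * a[k] + alpha * b[k] if k in b else a[k]
    return merged

ratios = {
    "model.diffusion_model.input_blocks.": 0.0,   # keep A's composition
    "model.diffusion_model.middle_block.": 0.3,
    "model.diffusion_model.output_blocks.": 0.6,  # take more of B's detail
}
merged = block_merge(main, cyborg, ratios)  # state dicts from the sketch above
```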
@aine_captain Thanks for the info. Can you please clarify a few things for me?
1) How do you tell which layers of the network contain which concepts, so you know what to keep/merge? Is it just trial and error, or can you actually tell somehow?
2) Please let me know your findings with block merges after you have figured things out. I'm interested in what you discover and which extensions you end up with.
3) Is there a way to tell which trigger words are used/learned by a model from the model itself, or do you have to rely on the author mentioning them when they post it?
4) You must have downloaded the SD 1.5 model from Hugging Face; is that correct? I'm assuming you are using the safetensors version. Do you mind telling me the checksum of the file from the webui? Do you also have the SD 2.1 model from Hugging Face? What's its checksum, if you don't mind?
Thanks again. Best of luck with the block merge :)
@goodfun Hi! Thanks for the good wishes.
1) There's this extension to check concepts by layers: https://github.com/toriato/stable-diffusion-webui-daam
2) I'm preparing a new merge model, but it is going to take weeks if not months to finish, so I recommend following me here, on DeviantArt, or on Patreon. I'm soon going to open a Discord server, so stay tuned ;)
3) You can try to use the extension linked above, but otherwise you can only try trigger words, not know them a priori.
4) SD 1.5 ckpt version (the one I have): cc6cb27103
I don't have 2.1 at all.
@aine_captain Thanks :)
Can someone please tell me why I'm getting distorted faces?
It might be because of the prompt. Would you mind sharing it (both positive and negative)?
If you use LoRAs, especially anime ones in combination with realistic models, you sometimes need to lower the LoRA weight <name:[value]> as low as 0.1-0.3, and increase the weight of the trigger words by the same amount, to get rid of face distortion.