AbsoluteReality
That feeling after you wake up from a dream
Add a ❤️ to receive future updates. This took much time and effort, please be supportive 🫂
Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee ☕
For LCM read the version description.
Additional examples for V1.6: https://civarchive.com/posts/353121
Additional examples for V1: https://civarchive.com/posts/259634
Amazing gallery by qf22: https://civarchive.com/posts/260939
Quick face alteration examples using celebrity names and mixing them: https://civarchive.com/posts/274268
Available on sinkin.ai, Mage.space and many other services
Suggestions
Use between 4.5 and 10 CFG Scale and between 25 and 30 Steps with DPM++ SDE Karras. Worse samplers might need more steps.
To reproduce my results you MIGHT have to change these settings:
Set "Do not make DPM++ SDE deterministic across different batch sizes." (mostly for v1 examples)
Set the ETA Noise Seed Delta (ENSD) to 31337
Set CLIP Skip to 2
DISABLE face restore. It's terrible, never use it
Use simple prompts. Complex prompts might produce less realistic pictures because of CLIP bleeding. More complex prompts do not mean better results. Keep it simple.
Use ADetailer to enhance faces. Basically every solo portrait I made uses it. You can get my settings by clicking on "copy generation data". I suggest you use denoising under 0.3 to avoid always getting the same face.
Use BadDream and UnrealisticDream negative embeddings (BadDream, (UnrealisticDream:1.2)). Weight UnrealisticDream between 1.2 and 1.5. Do not use FastNegative or EasyNegative if you aim at realism; they're good for artworks, however.
Use Highres.fix with the following settings: Denoising strength: 0.45, Hires steps: 20, Hires upscaler: 8x_NMKD-Superscale_150000_G, and as much upscale as you can (my GPU only handles up to x1.8 at 512x768 base resolution, but you can go higher). If you don't have 8x_NMKD-Superscale_150000_G you can probably use another GAN; it should be easy to find on Google. You can also try Latent with a denoise higher than 0.6, but the result will be harder to control.
Try to condition faces by prompting for eye color, hairstyle, hair color, ethnicity and so on. Even celebrity names work. This model is pretty good at not making a single face if you play with the context.
If the pic is too clean, try adding some ISO noise. Even as a post-processing step with external tools, it will trick the brain enough to make you think "damn, this is a real photo".
If you feel the grounded nature of this model is limiting your imagination, try generating on DS6 and then do img2img with this one to bump up the realism.
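Putting the suggestions above together, a typical set of generation parameters (in A1111 "copy generation data" style) could look like the fragment below. The prompt and seed are placeholders for illustration; the remaining values are the ones recommended above.

```
photo of a woman with red hair, looking at viewer
Negative prompt: BadDream, (UnrealisticDream:1.2)
Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1234567890,
Size: 512x768, Clip skip: 2, ENSD: 31337, Denoising strength: 0.45,
Hires upscale: 1.8, Hires steps: 20, Hires upscaler: 8x_NMKD-Superscale_150000_G
```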
Pruned vs full comparison (not highres fixed)
Brief story of this model
While working on DreamShaper 6 I made many other models and a crap ton of tests. Some of them were strange mixes with photorealistic models I had previously made, plus new ones and some ISO noise LoRA I made. When I tested the various release candidates of DS6 for photography prompts, this came out on top. While mostly on the losing side with regards to range, flexibility and LoRA compatibility compared to what became DS6, I noticed this was pretty good at recreating photos with very simple and minimalistic prompts. So, why not, I gave it some love and kept working on it, just in time for its initial release.
Difference with DreamShaper
Long story short, DS aims at art, this aims at realism. They might overlap a bit, but they have different objectives and different things they're good at. DreamShaper is total freedom and can basically do everything at a high level. This does about one thing and does it extremely well. This still uses DreamShaper as base, so it's capable of doing art to a lesser degree.
Comments (29)
What is the difference between these v1.6 versions?
The first one is the normal model, the inpainting one is the inpainting model (just for inpainting, not for normal inference), and the last one is the diffusers format for InvokeAI or Python code.
Looks very interesting. I'm somewhat new to this, so can anyone please tell me if this is available on Hugging Face for use through Google Colab?
it's on my HF
Sorry guys, I am new to this. How can I use it?
1. Easiest way to run:
Use it on tensor.art: https://tensor.art/models/610393339264646876?source_id=nz6wrl_ilUOxrfQuYHn89hIm , but daily credits are limited.
2. If running on a local machine or your own Colab, refer to this very good guide:
https://stable-diffusion-art.com/beginners-guide/
If you're not sure how to prompt, just copy from other samples you like and remix from there.
@MontySori that's a different model :D
@gogi01 you can use it for free. Just install auto1111 webui and download the model and put it into your stable diffusion folder. You can also download other parts of the ecosystem I'm using and I shared. A recent guide I found: https://www.youtube.com/watch?v=tP5yy6A4GJw&lc=UgyCjFvql0xP1OqaS194AaABAg
Looks like my meanwhile-unpublished model found its way into this new merge ... which really isn't a bad sign at all ;-). I still like the skin details and eyes much more on 'Conscious Medicine'.
this is not a merge o_o
"To reproduce my results you might have to change these settings: Set "Do not make DPM++ SDE deterministic across different batch sizes"
Dude, I've spent an hour trying to figure out why I'm not getting the same results as you, given that I met all the requirements.
Oh my god, it turned out that I had to uncheck "Do not make DPM++ SDE deterministic across different batch sizes", leaving it at the default, and not set it as you wrote in the requirements. The embedding is also not mentioned in the description.
sorry, I'll review that description. Truth is I wrote that when I made v1. I've long since disabled that setting :)
By the way, notice I wrote "might"
And Embeddings are mentioned by the way, "Use BadDream and UnrealisticDream negative embeddings (BadDream, (UnrealisticDream:1.2)). Add weight to UnrealisticDream between 1.2 and 1.5. "
Incredibly good work. The best model I have come across, better than reality. And believe me, I had them all ;-)
Agreed, this model is amazing
C'MON, lets all do some Bird and a Girl. Show us what you got. I used saeed393's prompt from below.
You said ''If the pic is too clean, try to add some ISO noise. Even as a post processing with external tools it will trick the brain enough to make you think "damn, this is a real photo.''
Where exactly can I find/add this setting?
He means add some noise in Photoshop or some other external program. There are also "film like" LoRAs you could try before popping it into Photoshop.
@qudabear200 I added "film grain" in my prompt, if that's enough.
@qudabear200 Yeah, but they tend to change the subject too, so I don't advise using them. Adding ISO noise with Photoshop is easy enough. However, newer versions of AR can simply do it with "film grain" in the prompt.
@Beast92 yeah it helps
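If you'd rather script the grain than open Photoshop, the same effect can be approximated with NumPy; a minimal sketch, where the strength value of 12 is just an arbitrary starting point (loading a real photo via Pillow is shown only in a comment):

```python
import numpy as np

def add_iso_noise(img, strength=12.0, seed=None):
    """Add Gaussian 'film grain' noise to an 8-bit RGB image array."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=img.shape)
    # Work in float, then clip back to the valid 8-bit range.
    noisy = img.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Demo on a synthetic mid-gray image; for a real generation you would
# instead do: img = np.asarray(Image.open("gen.png").convert("RGB"))
img = np.full((512, 768, 3), 128, dtype=np.uint8)
noisy = add_iso_noise(img, strength=12.0, seed=0)
```

Save the result back out with Pillow and the grain survives most downscaling done by galleries.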
Was working excellent, now getting the following error: NotImplementedError: The operator 'aten::lerp.Scalar_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
If it was working before, then it's not the model's fault; it's just a file, after all. It seems like your system has some problem with the GPU and decided to use the CPU.
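As the error message itself suggests, a temporary workaround on Apple Silicon is to let unsupported MPS ops fall back to the CPU before launching the webui from the same shell:

```shell
# Temporary fix from the PyTorch error message: unsupported MPS ops
# fall back to the CPU (slower, but avoids the NotImplementedError).
export PYTORCH_ENABLE_MPS_FALLBACK=1
# then start the webui from this same shell, e.g.:
# ./webui.sh
```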
Tutorial for this model Absolute Reality easy and fast
Not available anymore ;(
Absolute Reality is a really good photorealistic model. This one and CyberRealistic are the ones I have currently been going back and forth between.
It says updated but all the files say a month ago. This happened with another model recently. 🤔
