AbsoluteReality
That feeling after you wake up from a dream
Add a ❤️ to receive future updates. This took much time and effort, please be supportive 🫂
Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee ☕
For LCM read the version description.
Additional examples for V1.6: https://civarchive.com/posts/353121
Additional examples for V1: https://civarchive.com/posts/259634
Amazing gallery by qf22: https://civarchive.com/posts/260939
Quick face alteration examples using celebrity names and mixing them: https://civarchive.com/posts/274268
Available on sinkin.ai, Mage.space and many other services
Suggestions
Use between 4.5 and 10 CFG Scale and between 25 and 30 Steps with DPM++ SDE Karras. Worse samplers might need more steps.
To reproduce my results you MIGHT have to change these settings (a rough script sketch of the basics follows this list):
Set "Do not make DPM++ SDE deterministic across different batch sizes." (mostly for v1 examples)
Set the ETA Noise Seed Delta (ENSD) to 31337
Set CLIP Skip to 2
DISABLE face restore. It's terrible, never use it
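For those running outside A1111, here is a minimal sketch of the sampler/CFG/steps/CLIP-skip suggestions using the diffusers library. The checkpoint and embedding file names are placeholders, and the A1111-only options (ENSD, the DPM++ SDE batch determinism toggle, face restore) have no direct diffusers equivalent:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

# Placeholder checkpoint file name; point this at your local copy.
pipe = StableDiffusionPipeline.from_single_file(
    "absolutereality_v16.safetensors", torch_dtype=torch.float16
).to("cuda")

# DPM++ SDE with Karras sigmas, as suggested above (needs the torchsde package).
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

# Load the negative embeddings. Note that diffusers does not parse A1111-style
# (token:1.2) weighting; a helper like compel is needed for exact weights.
pipe.load_textual_inversion("./BadDream.pt", token="BadDream")
pipe.load_textual_inversion("./UnrealisticDream.pt", token="UnrealisticDream")

image = pipe(
    "photo of a blonde woman",                     # keep prompts simple
    negative_prompt="BadDream, UnrealisticDream",
    num_inference_steps=28,                        # 25-30 steps
    guidance_scale=7.0,                            # CFG between 4.5 and 10
    clip_skip=2,                                   # CLIP skip 2 (recent diffusers)
).images[0]
image.save("out.png")
```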
Use simple prompts. Complex prompts can produce less realistic pictures because of CLIP bleeding. More complex prompts do not mean better results. Keep it simple.
Use ADetailer to enhance faces. Basically every solo portrait I made uses it. You can get my settings by clicking on "copy generation data". I suggest you use denoising under 0.3 to avoid always getting the same face.
Use BadDream and UnrealisticDream negative embeddings (BadDream, (UnrealisticDream:1.2)). Weight UnrealisticDream between 1.2 and 1.5. Do not use FastNegative or EasyNegative if you aim at realism; they are good for artworks, however.
Use Highres.fix with the following settings: Denoising strength 0.45, Hires steps 20, Hires upscaler 8x_NMKD-Superscale_150000_G, and as much upscale as you can (my GPU only handles up to x1.8 at 512x768 base resolution, but you can go higher). If you don't have 8x_NMKD-Superscale_150000_G you can probably use another GAN; it should be easy to find on Google. You can also try Latent with a denoise higher than 0.6, but the result will be harder to control. A rough script equivalent is sketched below.
Try to condition faces by prompting for eye color, hairstyle, hair color, ethnicity and so on. Even celebrity names work. This model is pretty good at not defaulting to a single face if you play with the context.
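A minimal sketch of that Highres.fix workflow outside the UI, using diffusers. The checkpoint file name is a placeholder, and plain Lanczos resizing stands in for the 8x_NMKD-Superscale_150000_G GAN upscale, which diffusers does not load directly:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

base = Image.open("out.png")  # the 512x768 base-resolution generation
upscaled = base.resize((int(512 * 1.8), int(768 * 1.8)), Image.LANCZOS)  # x1.8

img2img = StableDiffusionImg2ImgPipeline.from_single_file(
    "absolutereality_v16.safetensors", torch_dtype=torch.float16
).to("cuda")

# diffusers runs roughly steps * strength actual steps, so 44 * 0.45 ~ 20,
# matching "Denoising strength 0.45, Hires steps 20" above.
refined = img2img(
    "photo of a blonde woman",
    image=upscaled,
    strength=0.45,
    num_inference_steps=44,
    guidance_scale=7.0,
).images[0]
refined.save("out_hires.png")
```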
If the pic is too clean, try to add some ISO noise. Even as post-processing with external tools, it will trick the brain enough to make you think "damn, this is a real photo".
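If you want to script that ISO-noise pass, here is a minimal sketch with numpy and Pillow; the sigma value is an assumption to tune, not a setting from the author:

```python
import numpy as np
from PIL import Image

def add_iso_noise(img: Image.Image, sigma: float = 6.0) -> Image.Image:
    """Add per-pixel, per-channel Gaussian grain to mimic camera ISO noise."""
    arr = np.asarray(img).astype(np.float32)
    noise = np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(arr + noise, 0, 255).astype(np.uint8))

add_iso_noise(Image.open("out.png")).save("out_noisy.png")
```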
If you feel the grounded nature of this model is limiting your imagination, try generating on DS6 and then do img2img with this one to bump up the realism.
Pruned vs full comparison (not highres fixed)
Brief story of this model
While working on DreamShaper 6 I made many other models and a crap ton of tests. Some of them were strange mixes with photorealistic models I had previously made, plus new ones and an ISO noise LoRA I made. When I tested the various release candidates of DS6 on photography prompts, this came out on top. While mostly on the losing side in range, flexibility and LoRA compatibility compared to what became DS6, I noticed it was pretty good at recreating photos with very simple, minimalistic prompts. So, why not, I gave it some love and kept working on it, just in time for its initial release.
Difference with DreamShaper
Long story short, DS aims at art, this aims at realism. They might overlap a bit, but they have different objectives and different things they're good at. DreamShaper is total freedom and can basically do everything at a high level. This does about one thing and does it extremely well. This still uses DreamShaper as base, so it's capable of doing art to a lesser degree.
Description
Not meant for training or regular inference; just for outpainting and large-scale inpainting.
Comments (48)
It downloads a JSON file for the inpainting model.
Where can I find 8x_NMKD-Superscale_150000_G?
I uploaded a copy on my HuggingFace repo for Absolute Reality.
Please answer my question: are these humans real with just the costumes generated, or is the complete character AI generated?
completely AI generated.
EVERYTHING on this site is completely AI-generated. We hate real pictures :)
@jasonrat504529 Yes, it's true, every image you see here is generated by a program called "Stable Diffusion"... funny to find someone still wondering about it... Anyway, welcome!
This model is great but it loves generating Asians and Asian-looking non-Asians. Can anyone recommend a realistic model that excels at generating white/Western women?
The prompt for most example pics is "photo of a blonde woman" or similar. The model doesn't seem to be biased towards Asian people; I suggest you check your negative embeddings, because many of the ones I know introduce bias.
These are close to the default face of this model you should get with good unbiased settings:
https://civitai.com/images/1022393?period=AllTime&periodMode=published&sort=Newest&view=categories&modelVersionId=86437&modelId=81458&postId=270460
https://civitai.com/images/1017907?period=AllTime&periodMode=published&sort=Newest&view=categories&modelVersionId=86437&modelId=81458&postId=267350
Thanks for your comments. I hadn't realized negative embeddings could introduce bias. I've only minimally tested BadDream and UnrealisticDream, but I'll experiment more. Thanks for this model!
For any model, I generally find simply adding "asian" to negative prompts deals with this issue effectively, and I tend also to add a negative for "Emma Watson" around 0.6 or so. If necessary, bump the negative weight for asian to try and ram the point home, and put it towards the start of the negatives. You might also remove all your negatives and only put back the ones that are actually improving things, because it's never a done deal as to which negatives will be beneficial and which are pointless or even harmful. Playing with token blending, or with the steps where a token starts or stops being used, may also lead you to more variety.
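For illustration, a negative prompt following that advice might look like this in A1111 weight syntax; the exact weights are starting points drawn from the comment above, not tested values:

```
(asian:1.4), (Emma Watson:0.6), BadDream, (UnrealisticDream:1.2)
```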
Has like no face variance whatsoever.
you sure you're not using a bad negative embedding? I posted various examples all with different faces and almost no prompt conditioning. If you're having trouble with your setup, try some prompt conditioning. If you have an overfitted embedding and you're not conditioning the result, the model will default to an average face, like every single SD model.
anyway try to use the suggested settings and/or post examples to let us verify your claim. Comments like this aren't helpful.
EasyNegative and NG_DeepNegative_V1_75T tend to make everything look the same.
I am not using any embeddings. I even tried 0 negatives.
This model is amazing and miles better than all other realistic models. Not only can it give you realistic and detailed skin, but you don't need to pray for it to not add blemishes or make "beautiful" actually be beautiful. I've tried it with my own prompts and the prompts in the example for varying results. I'm also really happy a realistic model can do fantasy/scifi things, about that that was a thing. My only problem is getting it to no longer draw open mouths/teeth in inpainting.
Odd, almost all my examples have closed mouths. Try to prompt "closed mouth" if you're having issues, but it might be your setup/negative prompts/negative embeddings
@Lykon Ahh, possibly. I've noticed different models react differently to certain prompts, and it happened to me with other models as well; might be related...
PS: I meant to say "About time that was a thing," but I didn't really sleep last night XD
@Lykon you said "my gpu only handles up to x1.8 at 512x768 base resolution". Did you try going to Settings > Optimizations and changing the optimization to SDP - scaled dot product? On my GTX 1070 I could only do ~1300x700 (after hires fix) with xformers before getting a CUDA out-of-memory error, but now I can go as high as I want with this optimization, and it's as fast as xformers and doesn't alter the generated images in any way (I did a lot of testing). Cheers!
doesn't matter because I got myself a 4090 a week after that comment, so I don't have that limit anymore. But thanks.
@Lykon that's nice! :) I have 3090 in my other PC and I could generate 4k images with SDP while I can't with xformers. So in my experience it's always better having SDP enabled. Here's a ~4k image I generated with SDP optimization. (960x540 with 3.7 hires fix)
@xperia256 which UI are you referring to? I'm using a1111 and I don't see that option in the settings area
ty so much!
This comment made me look for SDP and realize it's only for PyTorch 2.0, and I was running 1.3. Now my work's gone from complete shit to ultra high perfect diamonds!
@mjugger I am using A1111 and I have the option, maybe it's only available for pytorch 2.0 like Robot_Dot said, try updating your SD installation.
@xperia256 Yep, I definitely see it now after updating PyTorch to 2.0.1, thanks. Also, are you using any command-line arguments in your webui.bat file? Lastly, when you run webui.bat do you see "No module 'xformers'. Proceeding without it." in the command-line output?
@mjugger The only command-line arguments I use are --xformers --autolaunch; I had to use --no-half-vae when I tested some old models. For the "No module 'xformers'" message, you can add the --xformers flag to the bat file and A1111 will auto-install it, after which you can select it in the optimizations menu. But I still prefer "SDP - scaled dot product" because it allows for higher resolutions with no CUDA out-of-memory errors while being the exact same speed as xformers.
@xperia256 ok thanks. so I have a 3090 as well, but with SDP enabled and using the width and height settings you listed (with highres fix selected) it's choking. are there other settings you have enabled besides what's listed?
@mjugger Nope those are the only settings I use. By "using the width and height settings you listed" do you mean the 960x540 with 3.7 hires fix? of course it will be so slow, no optimization will make it fast even on a 4090 (will take around 4 minutes on a 4090 and about 7 mins on a 3090), it's a 4k image. I don't even recommend going for that resolution and this value of hires fix, it was just an example I gave above showing how SDP allows for high resolutions. Most 1.5 models won't be able to give you good results with this resolution anyway.
@xperia256 ah ok I see. thanks for the SDP tips though. I do see a good improvement in the resolutions I've been using.
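For anyone wondering what the "SDP - scaled dot product" option actually toggles: it routes attention through PyTorch 2.0's fused torch.nn.functional.scaled_dot_product_attention kernel instead of a naive implementation, which is where the memory savings in this thread come from. A toy demo follows; the tensor shapes are illustrative assumptions for SD 1.5 self-attention at 512x512, not real UNet activations:

```python
import torch
import torch.nn.functional as F

# 64x64 latent = 4096 tokens, 8 heads, 40 dims per head (assumed shapes).
q = torch.randn(1, 8, 4096, 40, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 4096, 40, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 4096, 40, device="cuda", dtype=torch.float16)

# Fused kernel (torch >= 2.0): avoids materializing the full 4096x4096
# attention matrix that a naive softmax(q @ k^T / sqrt(d)) @ v would allocate.
out = F.scaled_dot_product_attention(q, k, v)
```

In diffusers, the equivalent is the default AttnProcessor2_0, which is used automatically when PyTorch 2.0 or later is available.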
Another crazy creation!
You mention using ADetailer. I'm just getting into it now and I have a question: do you upscale, then ADetailer inpaint? Or use ADetailer, then upscale in img2img?
Adetailer inpaints it for you. It has face (and others) detection and runs after the image is generated.
Adetailer is automatic inpainting with AI detection of faces/hands/bodies depending on what you select.
I mostly use it just to make examples, as they make it easier for users to reproduce them (it's trackable, unlike manual inpainting).
However, when I make serious stuff I always use inpainting manually.
@Lykon I mean do you use it in the original t2i generation, or just the i2i upscaling, or both?
@Skittlz you can use it wherever you like. I use it only when generating in a single t2i step with highres fix + ADetailer. If you're already at i2i, ADetailer doesn't make much sense (it's just inpainting with auto detection instead of manual masking; use inpainting directly).
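As a side note on what ADetailer automates under the hood, here is a minimal conceptual sketch with diffusers. The bounding box, file names and prompt are placeholders, and the real extension uses YOLO-based detection models rather than a hardcoded box:

```python
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

image = Image.open("out.png")
x0, y0, x1, y1 = 180, 90, 330, 260  # hypothetical detected face box

# Build a rectangular mask around the face with a small margin.
mask = Image.new("L", image.size, 0)
ImageDraw.Draw(mask).rectangle([x0 - 16, y0 - 16, x1 + 16, y1 + 16], fill=255)

inpaint = StableDiffusionInpaintPipeline.from_single_file(
    "absolutereality_v16-inpainting.safetensors",  # placeholder file name
    torch_dtype=torch.float16,
).to("cuda")

fixed = inpaint(
    "photo of a blonde woman, detailed face",
    image=image,
    mask_image=mask,
    strength=0.3,  # keep denoise low, per the ADetailer advice above
).images[0]
fixed.save("out_face_fixed.png")
```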
By the way, is there any way to get a character to smirk? That particular prompt doesn't seem to yield any results. Smiling and such works, but it's not quite the same.
Fantastic model! I made a YT controlnet clip with the gothic horror woman prompt: https://youtu.be/eaUVUlvzu38
I tried to use the full model (i.e. not pruned), but switching to it seems to take forever... no message in the terminal, however. Any idea why?
drive speed?
It's an SSD, so it can read at up to 550 MB/s.
@Lykon I found out why: after my PC crashed during a generation, my GPU was disabled, but since my CPU also has integrated graphics I didn't notice. Your models are now working as they should! Thanks for your work btw!
Loving this model! Just a question, what is the point of using the inpainting model over sticking with the regular one? When I'm inpainting I seem to get more consistent results with the regular model over the inpainting model.
From my experience, in the majority of cases the normal model will be just fine with ControlNet tile. However, there are some cases where you need to alter a lot of details, like redrawing completely but without ControlNet guidance; there the normal model will generate unrelated images in your mask area. Those cases are rare in my experience, though.
Also for outpainting, though ControlNet inpaint may be better in theory?
It would also be fun to test upscaling with normal vs inpaint models; it's on my todo list, and maybe I'll share the results with the community if they're interesting.
Inpainting models have a different input and generally work better at inpainting. Hard to explain; try both with different settings and see the difference for yourself ;)
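To make that "different input" concrete: inpainting checkpoints ship a UNet with 9 input channels (4 noisy latents + 4 masked-image latents + 1 mask) versus 4 for a regular checkpoint, which is why the two aren't interchangeable. A quick sketch to check this, with placeholder file names:

```python
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

normal = StableDiffusionPipeline.from_single_file("absolutereality_v16.safetensors")
inpaint = StableDiffusionInpaintPipeline.from_single_file(
    "absolutereality_v16-inpainting.safetensors"
)

print(normal.unet.config.in_channels)   # 4: noisy latents only
print(inpaint.unet.config.in_channels)  # 9: latents + masked-image latents + mask
```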