Natural Sin Final and last of epiCRealism
Since SDXL is right around the corner, let's say this is the final version for now — I put a lot of effort into it and probably cannot do much more.
I tried to refine the understanding of prompts, hands and, of course, the realism.
Let's see what you guys can do with it.
Thanks to @drawaline for the in-depth review. I'd like to give some advice on using this model.
Advice
use simple prompts
no need to use keywords like "masterpiece, best quality, 8k, intricate, high detail" or "(extremely detailed face), (extremely detailed hands), (extremely detailed hair)", since they don't produce an appreciable change
use simple negatives or small negative embeddings; this gives the most realistic look (check the samples to get an idea of the negatives I used)
add "asian, chinese" to the negative prompt if you're looking for ethnicities other than Asian
Light, shadows, and details are excellent without extra keywords
If you're looking for a natural effect, avoid "cinematic"
avoid using "1girl", since it pushes things toward a render/anime style
too much description of the face will usually turn out badly
for a more fantasy-like output, use the 2M Karras sampler
no extra noise offset needed, but you can add one if you like
How to use?
Prompt: simple explanation of the image (try first without extra keywords)
Negative: "cartoon, painting, illustration, (worst quality, low quality, normal quality:2)"
Steps: >20 (if the image has errors or artefacts, use more steps)
CFG Scale: 5 (a higher CFG scale can lose realism, depending on prompt, sampler and steps)
Sampler: any sampler (SDE and DPM samplers will result in more realism)
Size: 512x768 or 768x512
Hires upscaler: 4x_NMKD-Superscale-SP_178000_G (Denoising: 0.35, Upscale: 2x)
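The settings above can be sketched with the diffusers library. This is a minimal sketch, not the author's workflow: the checkpoint path and example prompt are placeholders, `DPMSolverSDEScheduler` stands in for "any SDE sampler", and the A1111-style `(…:2)` weighting syntax from the negative prompt is dropped because diffusers does not parse it natively.

```python
# Minimal sketch of the suggested settings using the diffusers library.
# The checkpoint path and the example prompt are placeholders.
SETTINGS = {
    "negative_prompt": (
        "cartoon, painting, illustration, "
        "worst quality, low quality, normal quality"
    ),
    "num_inference_steps": 25,  # >20; raise further if artifacts appear
    "guidance_scale": 5.0,      # higher CFG tends to lose realism
    "width": 512,
    "height": 768,
}

def build_pipeline(path="epicrealism_pureEvolutionV3.safetensors"):
    """Load the checkpoint and switch to an SDE sampler for extra realism."""
    # Imported here so SETTINGS can be inspected without torch/diffusers.
    import torch
    from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

    pipe = StableDiffusionPipeline.from_single_file(
        path, torch_dtype=torch.float16
    )
    pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)
    return pipe.to("cuda")

# Usage (requires a GPU and the downloaded checkpoint):
#   pipe = build_pipeline()
#   image = pipe(prompt="photo of a man reading in a cafe", **SETTINGS).images[0]
#   image.save("out.png")
```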
Useful Extensions
After Detailer | ControlNet | Agent Scheduler | Ultimate SD Upscale
No VAE needed, but it is better to use one for more vibrant colors
Feel free to leave reviews and samples — and always have fun creating ❤
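The VAE note above can be sketched with diffusers as well. This is an assumption-laden sketch: `stabilityai/sd-vae-ft-mse` is a commonly used SD 1.x VAE chosen for illustration, not one the author specifically recommends; any compatible VAE checkpoint attaches the same way.

```python
# Sketch: attaching an external VAE for more vibrant colors, assuming the
# diffusers library. "stabilityai/sd-vae-ft-mse" is a commonly used SD 1.x
# VAE picked for illustration; any compatible VAE works the same way.
VAE_REPO = "stabilityai/sd-vae-ft-mse"

def attach_vae(pipe, repo_id=VAE_REPO):
    """Swap the pipeline's baked-in VAE for an external one."""
    # Imported here so the function can be defined without torch/diffusers.
    import torch
    from diffusers import AutoencoderKL

    pipe.vae = AutoencoderKL.from_pretrained(repo_id, torch_dtype=torch.float16)
    return pipe
```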
Description
Let's say we've reached the final version. I tried to improve male understanding, hands and, of course, photorealism.
Comments (53)
Could you make a new inpainting model for V3? Your models are some of the best for realism, absolutely love them.
Sure, I will. Please be a bit patient.
Got you covered! It's there :)
@epinikion You're the goat, man
Incredible model! I feel like this is the closest to realism I've seen in a model. Nice job!
Thank you! ❤
Definitely the best realism model I've seen so far. It seems like SD 1.5 is approaching its maximum potential here.
Great work and thank you for sharing! Which female character embedding (LoRA or textual) is recommended for this model?
Can you say that this model is moving toward better hands and finger wrapping? And more logic in the details?
I think that in order to get even closer to realism, we should not strive 100% for the ideal. Realism comes from shortcomings: skin defects, small disproportions, etc. But these are just my thoughts. ;)
Anyone got mirrors for 4x_NMKD-Superscale-SP_178000_G? The one linked has exceeded its public transfer limit.
The lighting on V3 is so satisfying. This is my favorite model for realism. Thanks for sharing it along with the embeddings.
Thank you, glad you like it ☺️
There is something weird happening in combination with ControlNet: it fails for the "reference" control type, even more so with "ControlNet is more important". The output becomes almost completely black. It happens only with this model (and mixes using this model). I wonder what is happening.
The above comment is for V3. I just tried V1; it also has this problem of images getting darker with the ControlNet "reference" type, but not as badly. The image does not turn completely black, just very dark.
@unick I don't have this happen to me. Maybe something is messed up with your installation?
Thank you for your work!
Anyone got tips on how many UNet training steps per instance image to use when training with this model? I'm using fast DreamBooth to train my dataset, and usually I use 100 steps per instance image, but somehow the face won't use my instance image. Sorry for the bad English.
Is kkw-ph1 a lora?
Yes, it is linked in the suggested resources under "easy photorealism"
kkw-ph1.bin is a TI
Right, it is a TI
This is the Godzilla of rendering models; nothing can even come close to it for now. It's so realistic that it's actually scary, but like the author said, keep the negative basic.
The best way is to just use the author's negative prompt, because the model is so amazing that it would never render bad images.
The main thing is that the CFG scale needs to be left at 5 like the author said, because the difference between values is massive. Just switch it to 5 and forget about it.
Amazing job!
Haha, thank you! ❤️
I've been having good results below CFG 5. With 1, colors blend together nicely if that's the vibe you're going for. 2-4 is the sweet spot for me, honestly.
What's the best sampler for this model?
@Boxer766 I quite like using DPM++ 2M SDE Karras, but I haven't tested the others too much.
@Boxer766 DPM++ 2M SDE Karras works the best for photorealism.
@epinikion No, thank you! I just forget about other models because this beast can do anything: beautiful nudes, gothic fantasy, SF, basically whatever you want with the right prompts. And I noticed the model is super sensitive to prompts, so you can just name the words with weights and the model will just explode. I've never seen such quality or precision in any model.
You are a genius; keep up the good work!
@epinikion I have accumulated 160k images from Unsplash and 1 million images from 1x.com. I want to train a hyper-realistic model based on SDXL. Can you please collaborate with me? Let's make this big; I have an investor who can help us with the computing power.
Why are my images coming out blurry and fuzzy? I see other people's images and they look amazing and real.
To lessen confusion, I'm going to rename my "EPIC MIX" with realism to something less confusing, LOL <3. Also, your model is amazing; never stop creating <3. Thank you for making amazing stuff.
Why do I get "AttributeError: 'AttentionBlock' object has no attribute 'to_q'" when trying to train a LoRA in the Kohya Colab?
I can't get this Google Colab to DreamBooth-train from this model: https://colab.research.google.com/github/sagiodev/stablediffusion_webui/blob/master/DreamBooth_Stable_Diffusion_SDA.ipynb#scrollTo=XU7NuMAA2drw
On Hugging Face, I don't see any official model from you either, and none of the ones available seem to work. Their file structure is wrong, and they don't have any fp16 version.
Do you have any advice on how I could DreamBooth-train a model based on this? It's my favorite realism model by far!
Just right-click the download button and "copy link", my brother.
I use Easy Diffusion and I'm getting this error:
Error: Could not load the stable-diffusion model! Reason: Error(s) in loading state_dict for LatentDiffusion: size mismatch for model.diffusion_model.input_blocks.0.0.weight: copying a param with shape torch.Size([320, 9, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 4, 3, 3]).
Anyone know how to fix it?
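The size mismatch in the error above is diagnosable from the shape alone: a first convolution with 9 input channels is the signature of an SD 1.x *inpainting* UNet (4 latent + 4 masked-image-latent + 1 mask channels), while the regular text-to-image UNet expects 4. The usual fix is to download the regular (non-inpainting) checkpoint, or to load the file with an inpainting pipeline instead. A small sketch of that diagnosis (the function name is my own, for illustration):

```python
# Sketch: interpreting the shape of the UNet's first convolution
# (model.diffusion_model.input_blocks.0.0.weight) from the error above.
# 9 input channels => SD 1.x inpainting UNet; 4 => standard UNet.
def diagnose_first_conv(shape):
    """Classify an SD 1.x checkpoint by its first conv's input channels."""
    out_ch, in_ch, kh, kw = shape
    if in_ch == 4:
        return "standard text-to-image checkpoint"
    if in_ch == 9:
        return "inpainting checkpoint"
    return f"unexpected input channel count: {in_ch}"

# The shape reported in the error message:
print(diagnose_first_conv((320, 9, 3, 3)))  # -> inpainting checkpoint
```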
I just have to ask: how are you able to train such high-quality photorealistic images off of SD 1.5? Did you use any trigger words, or...
Sometimes loading this version of Epic will crash Automatic1111. It works fine the next time and the next few times, then it crashes again. Anyone else having this? I'm only seeing it with this model.
Did you plan a new pix2pix version?
Not really; you can merge it yourself, like the inpainting model merge. Since ControlNet, nobody really uses pix2pix anymore, I guess.
How do I deal with the color cast on this model ?
Hello, how can i contact you?
Details
Files
epicrealism_pureEvolutionV3.safetensors
Mirrors
epicrealism_pureEvolutionV3.safetensors
epicrealism_v3.safetensors
EpicREV3.safetensors
epiCRealism_pureEvolutionV3.safetensors
epicpureEvolutionV3.safetensors