Natural Sin Final and last of epiCRealism
Since SDXL is right around the corner, let's call this the final version for now; I've put a lot of effort into it and probably can't do much more.
I tried to refine the model's understanding of prompts, hands, and of course realism.
Let's see what you guys can do with it.
Thanks to @drawaline for the in-depth review; here is some advice on using this model.
Advice
Use simple prompts
No need for keywords like "masterpiece, best quality, 8k, intricate, high detail" or "(extremely detailed face), (extremely detailed hands), (extremely detailed hair)", since they don't produce an appreciable change
Use simple negatives or small negative embeddings; this gives the most realistic look (check the samples to get an idea of the negatives I used)
Add "asian, chinese" to the negative if you're looking for ethnicities other than Asian
Light, shadows, and details are excellent without extra keywords
If you're looking for a natural effect, avoid "cinematic"
Avoid "1girl", since it pushes things towards a render/anime style
Too much description of the face will mostly turn out badly
For a more fantasy-like output, use the DPM++ 2M Karras sampler
No extra noise offset needed, but you can add one if you like
How to use?
Prompt: simple explanation of the image (try first without extra keywords)
Negative: "cartoon, painting, illustration, (worst quality, low quality, normal quality:2)"
Steps: >20 (if the image has errors or artefacts, use more steps)
CFG Scale: 5 (a higher CFG scale can lose realism, depending on prompt, sampler and steps)
Sampler: any sampler (SDE/DPM samplers will give more realism)
Size: 512x768 or 768x512
Hires upscaler: 4x_NMKD-Superscale-SP_178000_G (Denoising: 0.35, Upscale: 2x)
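The recommended settings above can be collected into an Automatic1111 txt2img API payload. This is a minimal sketch, assuming a local instance launched with --api; the example prompt is invented, and the field names follow the public A1111 web API:

```python
# Sketch: the model page's recommended settings as an A1111 txt2img payload.
# Assumes a local Automatic1111 instance started with --api. The sampler name
# matches older A1111 builds; newer ones split sampler and scheduler.

def build_payload(prompt: str) -> dict:
    """Build a txt2img request body using the settings recommended above."""
    return {
        "prompt": prompt,  # keep it simple, no extra quality keywords
        "negative_prompt": "cartoon, painting, illustration, "
                           "(worst quality, low quality, normal quality:2)",
        "steps": 25,             # >20; raise further if artefacts appear
        "cfg_scale": 5,          # higher values can lose realism
        "sampler_name": "DPM++ SDE Karras",  # SDE/DPM samplers look most realistic
        "width": 512,
        "height": 768,
    }

if __name__ == "__main__":
    payload = build_payload("photo of an elderly fisherman mending a net")
    print(payload)
    # To actually generate (requires a running A1111 with --api):
    # import requests
    # r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

Hires-fix upscaler settings are configured separately in the UI (or via `enable_hr` fields in the same payload).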
Useful Extensions
!After Detailer | ControlNet | Agent Scheduler | Ultimate SD Upscale
No VAE needed, but it is better to use one for more vibrant colors
Feel free to leave reviews and samples, and always have fun creating ❤
Description
Natural Sin Final Photo Realism refined
Same generation data as RC1 for comparison
FAQ
Comments (185)
I am sad to see it go. Photogasm D&B is my current favorite model, so I am curious to see how this model compares. Thanks for everything!
Sorry, but why does everyone's hands look like Lana Kane's? Oo
What's the difference between NaturalSin from Sept and NaturalSinRC1_VAE? Thank you for the great models!
It's slightly finetuned. I did the samples with the same prompts so you can switch between versions of Natural Sin and choose your favorite. I at least wanted epiCRealism to have one last final model. For now I'm done with model finetunes, I guess. epiCPhotoGasm already has 4 versions now and should be the new go-to model for realism imho. But there's probably a model for every taste.
@epinikion Thanks for the clarity! These are amazing models and I, too, like to use a variety. NaturalSin and Epicrealism and Epicphotogasm are my go to :D
Using Natural Sin, every face (and only the face) looks watercolor-painted... any ideas?
This is the best model imo.
Did you leave a logo/ some text in one of your datasets mountainviews by any chance?
Thank you. What do you mean by that?
@epinikion I found some remnants of a logo in 2 generated pics (both with a mountain view) using your model. Thought that might be from a picture in your dataset.
@Joschek Nah, probably from the base model? I don't have pure mountains in my dataset afaik
Thank you for all your work! I love version 5! However, I can't seem to get Natural Sin or the VAE version to load. I get errors every time. I can load other models but not the two new ones... Any suggestions??
Nope, which GUI do you use?
@epinikion I'm using automatic1111. I keep trying but it just puts me back on version 5. I'm going to install this on another PC and see if I can recreate the problem.
@DiscorDanian Yeah probably something is messed up with installation. It works here with automatic1111
@epinikion thank you for confirming this for me and thank you again for your work! I use ER5 almost exclusively. =)
UPDATE: So, look, this is a problem on my machine; others have had it with other models and other Automatic1111 updates, and I found a solution on Reddit from six months ago. Basically, the poster said that if you use the OpenOutpaint extension and load the model from that tab, it will then work in the txt2img tab. So for your edification @epinikion and for anyone else with this issue:
1. Go to Extensions tab.
2. Click "Available Tab"
3. Click "Load From:" button after ensuring repo is "https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui-extensions/master/index.json".
4. Search "OpenOutPaint" and install.
5. Hit "Apply and restart UI" -- if extensions are loading forever, go into "Settings" and hit "Reload UI".
6. Go to OpenOutpaint (I did not need to restart with --api) and then change the model. It doesn't throw the errors like in text2img and loads.
7. Go to "txt2img" and run. I confirmed working on my install! =)
Thank you for all your work and I hope my solution helps someone. Thank you to the community as well.
Thank you for all your hard work on this model. This is by far the best realistic model on the site and I dare say on stable diffusion. Please keep up the good work!
Great model, is it possible to get is as a .CKPT?
Nah, we don't like pickles. You can convert it if you need to.
How do I install 4x_NMKD-Superscale-SP_178000_G?
Put the file from https://openmodeldb.info/models/4x-NMKD-Superscale into the models/ESRGAN folder
Good model for waifus. But impossible to generate man.
See my posted pics; the trick is not to use certain keywords for details related to women. See the prompts on my image post to get an idea. It's a pain though if you don't know.
Spot on. Uninstalling this model. This is meant to be some bloke's wank engine.
Nope. I never make waifus. Read my other comment. I just did an excellent old man, though I did use I2I, because it was a particular old man and a bad photo reference, and there weren't enough other photos to train a good LoRA, I don't think. You are using 1girl, girl, lady, woman, breasts, boobs, tits, etc. in your negative, right? I'm surprised how many people who don't do anything anime-related don't know "1girl" is a big keyword. And so many of these mixes are mixed with some model that mixed with something that goes back to NAI. Those buggers were rigorous, and used a standard.
Is it just me or the old one (RC1) is better than the new one?
I found the texture is better on the old one, but consistency of generation is slightly better on the new one.
I have used all the epiC models (and many models on this site). Best results for photorealism training (SD 1.5): epiCRealism V4. Incredible. Thank you.
Is there an Inpainting model for this incredible final version?
You do not need inpainting models for inpainting. I exclusively inpaint using non-inpaint models and have never had any trouble with it; just make sure you use ControlNet with a good depth map that clearly shows the form of the subject (breasts, butt, belly, etc.), so the AI can properly understand it.
@SweetHentaiAI No, we actually do need inpainting models. Default models are very bad at inpainting real-life pictures; they are only good for images that were AI-generated themselves.
@SweetHentaiAI Default models are bad at inpainting. It's been proven many times before. Inpainting models are more precise for inpainting tasks.
This model and Photon are my go-to models, epiC for humans and Photon for backgrounds. I'm currently using the Natural Sin RC1 VAE non-inpainting version for my INPAINTING work (DDIM, but currently debating Euler); check SweetAI (dA, Patreon or R34) if you need any 'proof'.
I doubt there will be a superior model honestly as this is as far 1.5 can go, everything else is higher resolution or re-working SD from the ground up.
It's a great model, but I find it extremely difficult to generate anything other than women. Even simple prompts will have a woman in them.
I feel your struggle. With this model and most female-biased ones you can get around it if you use a negative like:
(Long hair, 1girl, Woman, feminine, breasts, girl, lady, female:1.4),
Or, if you haven't seen it yet, VirileReality's new beta is 🔥🔥🔥🔥
Wrong, I can generate anything with it, but it is especially realistic at generating people.
does this not work with controlnet? getting an error when trying to use this with controlnet inside comfyui
Out of memory, I guess; try a smaller resolution perhaps
I tried using this with ComfyUI and I got a slew of Python errors. I don't know if this is because I'm using SDXL or if it's a configuration thing on my side. I'm quite new to Stable Diffusion.
Error occurred when executing KSamplerAdvanced: mat1 and mat2 shapes cannot be multiplied (2464x2048 and 768x320)
File "P:\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
[... traceback trimmed ...]
File "P:\ComfyUI\ComfyUI\comfy\ldm\modules\attention.py", line 420, in forward
k = self.to_k(context)
File "P:\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "P:\ComfyUI\ComfyUI\comfy\ops.py", line 18, in forward
return torch.nn.functional.linear(input, self.weight, self.bias)
That is an SD 1.5 model; you can't use it with SDXL.
@epinikion Thanks, I didn't realize.
What are the differences between the versions of this model? Are certain versions better or worse for various styles and subjects?
When using the epiCRealism positive embedding with version PureEvolution V5 and the NaturalSin checkpoints, prompting for "dress" results always in a bodysuit/one piece. When using version Photogasm X, using epiCRealism positive embedding and prompting "dress" works fine. So the epiCRealism embedding should NOT be used on the earlier versions of the checkpoints?
It's a good model, but it seems to generate images with people looking at the camera less often than other models. Even adding "looking at camera" to the prompt doesn't solve this. By any chance, do you have plans to release a LoRA to solve this?
Nope, try epiCPhotoGasm instead
How do you achieve face diversity? I feel like Natural Sin RC1 VAE model always tries to generate the same girl over and over. First iterations start with diverse faces (while it's all blurry) but later ones converge to the same face. I can alter the major features like hair and eye color with the prompt, but it still results in the same face type.
Doesn't that tell you something? I rarely do girls, but I get awesome results for some things at 17 steps. I find the more steps, the more detail I lose. A detailed, interesting roundish planetoid, with enough steps, becomes a polished steel ball. And I rarely use just SDE. I use DPM++ 3M SDE Karras or Restart mostly. Sometimes exponential for an in-between look. I used DDIM for the longest time because it generates the most details and is most creative. Too much so, in a lot of cases. DPM++ 3M SDE can work for some of my more chaotic LoRA, but it's also the first sampler to turn something else into a stock girl face.
Pardon my confusion, but what is the difference between the VAE and the normal model?
The VAE Version has the VAE baked in
@epinikion Thank you :)
A perfect realistic model :D The only problem is that I always generate the same face. Is there any way to make faces more diverse?
Try epiCPhotoGasm
@epinikion Thanks, but I figured out the problem is with ADetailer =_=
Also, some LoRAs or embeddings tend to reproduce the same faces. Try reducing their strength, or give some additional details. Instead of "a woman standing on a street", add details like "a woman standing on a street, {job} {nationality} {name1 name2 name3} by {name of artist}", or add facial details: "a woman standing on a street, snub nose, etc." This can give different results. You can also look at dynamic prompting with wildcards, which automates the process of adding random stuff to your prompt. If using ComfyUI you can do even more advanced stuff by using different sampling steps and injecting detailed prompts in between.
@OneBullet Thank you for the suggestions! I'll try them.
@OneBullet Any particular wildcard loras/ to try?
@Omikonz In A1111 I have "Dynamic Prompts" installed, and here on Civitai you can search for "wildcards"; I have "billions of characters" downloaded (YAML files that go into the extensions folder). As for LoRAs, I haven't found a good one that gives a decent variety of faces. Imho the best and easiest way to achieve variety in faces is to use an IPAdapter model. I don't know how to use it in A1111, but in ComfyUI it's really easy: between the checkpoint loader and the sampler you insert an IPAdapter node where you can load a face. This "image prompt" then gets mixed with your text prompt and gives you an image where the face has similar features to the IP image. You can also batch multiple images together, and this "mix of faces" gives you a unique new face, or you can play with the strength of the adapter to adjust the influence of the image prompt. There are other ways to change the image too, e.g. using two advanced samplers in ComfyUI: the first renders the base image in, say, 8 steps, then you inject a new prompt that specifies e.g. the face, and the second sampler continues with the injected prompt from step 8 on. Then you can change the injected prompt to generate different faces (add ethnicity, describe facial forms, add names, etc.) while keeping the basic composition of the image. There are maybe other methods, but these are the easiest ones for me.
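The wildcard idea discussed above can be sketched in a few lines of plain Python. This is a minimal stand-in for the Dynamic Prompts extension, not the extension itself; the word lists are invented examples:

```python
import random

# Minimal sketch of wildcard-style prompt randomization, similar in spirit to
# the "dynamic prompts" extension. The word lists here are invented examples;
# the real extension reads them from wildcard files.
WILDCARDS = {
    "nationality": ["Irish", "Nigerian", "Portuguese", "Thai"],
    "job": ["teacher", "mechanic", "violinist"],
}

def expand(template: str, rng: random.Random) -> str:
    """Replace every {key} placeholder with one random entry from WILDCARDS
    (the same choice is used for all occurrences of a given key)."""
    out = template
    for key, options in WILDCARDS.items():
        out = out.replace("{" + key + "}", rng.choice(options))
    return out

rng = random.Random(42)  # seed for reproducible prompts
for _ in range(3):
    print(expand("a {nationality} {job} standing on a street, snub nose", rng))
```

Each call draws fresh choices, so batch generations get a different face description every time without editing the prompt by hand.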
The faces all look similar with this model but the background is very rich. If you're using the Reactor addon to swap in different faces, this would be a good model to use it on.
Why the same faces in every generation?
You can probably get around it by specifying the nationality of your character.
It is actually better than epiCPhotogasm
Love your models; the quality is amazing for working on and improving real low-quality images. However, what I can't figure out is how to finetune an inpaint model. I know about depth maps, but how to prep the data is explained nowhere; at least I have not found a source that goes into those details. I would love it if you could share a simple example of how that works. Thanks.
You just merge a finetuned model with the base inpainting model https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/How-to-make-your-own-Inpainting-model/49d623190acce27d402f56c79a0be06b754f9ce7
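The linked wiki page describes an "add difference" merge: per weight, custom inpaint = base inpaint + (finetuned − base). A toy sketch with plain Python numbers standing in for the checkpoint tensors (a real merge iterates over the safetensors/ckpt state dicts, typically with torch):

```python
# Toy sketch of the "add difference" merge used to build a custom inpainting
# model: take the SD 1.5 inpainting checkpoint and add to it the delta between
# a finetuned model and vanilla SD 1.5. Plain floats stand in for weight
# tensors here; a real merge loops over the full state dicts.

def add_difference(base, finetuned, base_inpaint):
    """Per key: base_inpaint + (finetuned - base), for keys shared by all three."""
    return {
        k: base_inpaint[k] + (finetuned[k] - base[k])
        for k in base_inpaint
        if k in base and k in finetuned
    }

# Tiny fake "state dicts" to illustrate the arithmetic:
base = {"w1": 0.10, "w2": 0.50}            # vanilla SD 1.5
finetuned = {"w1": 0.30, "w2": 0.45}       # the custom model's learned weights
base_inpaint = {"w1": 0.12, "w2": 0.55}    # SD 1.5 inpainting weights

merged = add_difference(base, finetuned, base_inpaint)
print(merged)  # the inpainting model now carries the finetune's delta
```

This is why the merge "just works": the inpainting-specific parts (extra input channels for the mask) come from the base inpaint model, while the style delta comes from the finetune.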
@epinikion Ok, damn, I did not expect it to be that simple. Still wondering how it actually works, but thanks man, gonna try this out.
@epinikion The inpainting models that you have uploaded are merged with SD 1.5 models?
@azirizvi999375 It's not just a normal merge; it adds the differences to the base inpaint model. I assume that by adding the weights to this model it translates also to the depth maps; that is the only explanation I can come up with, and it kind of makes sense.
And it seems all the custom inpaint models are made like this, because in months I could not find a detailed explanation of how this could be done by training. The only reliable info I found was a paper explaining that the model contains depth maps, basically comparable to the ControlNet depth model, which works in a similar way.
@azirizvi999375 Yes, that's why I tend to say everyone can do it themselves
I love your models, but I always have trouble generating 1:1 copies of any image on this site. Would you mind sharing what settings you're using in your SD? What arguments are you launching the web UI with, and are you using ENSD? I want to be able to create 1:1 copies of your images.
Thank you!!
Wow these model-mixers just can't win. Low-effort prompters complain about the same type of face (while actual young girls IRL are ignored because they don't have that same type of face). And other people want to reproduce example images 1:1. I really don't get this, other than in the few cases where someone is posting images with exceptional detail and content and some people are calling foul. I duplicated one, once, to settle a bet. But you do know how easy it is to miss an embedding? You do know sometimes you can't just read the image gen params in Civitai with your eyes? You have to do the copy button and paste it somewhere. Then you often see way more stuff. Also, I know guys who just got a new graphics card and can't duplicate their own images. Also, use xformers? Or anything like that? Most optimizing algorithms are somewhat 'non-deterministic'. Can we make some kind of sticky for this?
@parallelepipedon, you could've written all of this without being so pretentious, condescending, and coming off as high and mighty. I'm very well aware of the elements that change an image, and I had trouble replicating his, hence why I asked the creator if he was launching his UI with different web arguments or was using a specific ENSD; even launch arguments like medvram influence the result greatly. Your comment is unwarranted, but if you wanna comment more to satisfy your ego, you go right ahead.
@azirizvi999375 I've nothing special set up; I posted my config.json here https://rentry.org/a1111_epi_conf and my startup parameters from webui-user.bat are:
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--theme dark --xformers --no-half-vae
call git pull
call webui.bat
@parallelepipedon from an engineer/scientist standpoint this question makes sense. Stable Diffusion absolutely can be deterministic, and if a user has come to this site to explore the computer science behind AI, this is not only a valid approach, but a prudent one.
I can understand aversion to the question from an artistic standpoint, but I think the inquiry was issued by someone looking to validate their installation and results.
@parallelepipedon awful comment
I tend to use Photogasm, but I do like the real skin feel of RC1 VAE. It gets burned if I use ControlNet, so I have to play with settings. Would you consider "Natural Sin" Final to be more complete than Natural Sin RC1 (even with VAE, which I assume is all that was added to the first RC1)?
[back pat] I just did an upscale-off for some landscape gens and the winner in adding appropriate detail was: epic natural sin! I went up to 0.4 denoising with very little checkerboarding.
And I'm often the first to complain about 1girlitis, but I have the usual 1girl, girl, lady, female, etc in my negative. And I'm usually not doing people. But I just did a very old man and admittedly used I2I as it was the older common profile photo of M. C. Escher. I got the BEST results with this model. For those who haven't tried, even vaguely photorealistic old men are THE most difficult subject. Particularly working from a very poor quality old photograph.
Some others were so girl focused they'd make a girl even with ControlNet. I did the younger drawing of Escher too and some models would put a beard on a girl. This one worked great and was one of the few models who listened to my prompt and tried to make a color photo from the B&W ControlNet image.
Out of my collection of over 200 models, this one has now ascended into the Top 5, will be my very top pick for many things. And to think I initially turned my nose up at "sin" in the name, fearing it too girly.
Difference between this model and epiCPhotoGasm?
This one is better with easy prompts
How does this model work for wide-angle images? I found such prompts don't work.
I would also like to make a realistic model, and I'm currently trying through Dreambooth, spending a lot of time on it, but I can't figure out the settings; the results are disgusting. Could you tell me what settings you use, what resolution of images, and what scheme the training follows? On the internet I could not find lessons on training such large-scale models with a set of several thousand images. I trained on 1,900 portrait images of 512x512 for 380,000 steps tonight (learning rate 0.000001), and the result was some kind of crooked nonsense. Save me, brother.
I won't share mine, but you can get a good starting point from the documented trainings of FollowFox AI here: https://followfoxai.substack.com/p/full-fine-tuning-guide-for-sd15-general
This model is fantastic. However, I've noticed it seems to be resistant to LoRAs. Am I doing something wrong, or is this a known thing?
Would using this model as the source model to train the LoRA help?
I can't get any of my LoRAs that work with other models to work here either. Really wish I knew the issue.
Heey! Great work! Bet you have experience my team needs for our project. How about we collaborate?
If you're interested, could you share your email or Telegram to discuss this?
What's the difference between Natural Sin RC1 with the vae baked in and Natural Sin? What does RC1 mean?
RC is "release candidate", so sounds like a sort of work-in-progress but good enough to get out until the next version
@jspapp Ooooooh... Duh...thanks, lol.
epiCRealism is by far the best model for SD 1.5 .
My question is, are you going to make a epiCRealism model for XL?
Someone once referred to EpicRealism as a 'wonderful piece of Technological advancement'. I thought this was a GROSS OVER-EXAGGERATION, until I used it...
Doesn't work for me; I get weird botched faces and bodies, using vae-ft-mse ema pruned. Tried different numbers of steps: https://imgur.com/a/Ds6RQxM
Edit: it works with a different sampler!
Try a different sampler
Use a different sampler or more steps
@epinikion @mailzadev Thank you this works, and beautiful faces now. I'm still pretty new to this and this is the first model for which I have had to change the sampler. I'm happy to do so because the result is very nice but is there a reason why the default doesn't work for me?
@thakakarot Please share which combination of sampler, steps, CFG and scheduler you used. I am facing the same issues.
I had the same issue when trying to remix image 2384151 (NSFW) and making any subtle changes and also adding the Add Detail lora back in. (If you leave things the same, the image generator just shows the cached images.) The images would have artifacts that looked like bright rainbow metal on parts of the body.
I made the following changes:
- increase steps from 20 to 40, which removed most artifacts but left some weird color artifact in a small part of clothes
- change the sampler from DPM++ 2M Karras (what I typically use for most generations) to Euler a, which removed remaining artifact
I had also tested reducing CFG from 6.5 to 5.0 as recommended on the model page. That alone removed artifacts, but not as much as increasing the steps.
After doing some more testing, I found that increasing steps to 50 makes artifacts go away. I can keep the sampling model and CFG of the original image. The model page does recommend CFG 5, so it may be good to use that value anyway.
How to use this model?
How can I use this model in fooocus?
You can't. Fooocus only supports SDXL models, this model is based on SD 1.5.
OMG... I can't deactivate the nipples... nipples everywhere... this model doesn't understand what NSFW means
nipples everywhere 🤣
(((NSFW)))
I've been trying to train LoRAs on this model; it would help a lot if you shared what kind of captioning you used in your training process. Are you using danbooru-type tags, BLIP, GIT, or your own?
BLIP first, some finetunes afterwards
I'm not sure why but any time I try to load NS or NS RC1 VAE, it fails to load the checkpoint and reverts to a different model. Does anyone know why this is?
Hi there, I was hoping to get some clarification on a sort of "issue" I am having. I am running a simple prompt with the model, pretty much a girl standing in a field of flowers, dark sky, night_sky... But the issue is that I am getting the same face every single generation.
With other models I tend to get different faces. I am not sure if I am using the model incorrectly (I did use it with the 840000 pruned VAE and now without a VAE) or if something in the prompt might be pushing it to create the same image?
If needed I can share the prompt ^^, thanks in advance
Try different prompts with ethnicities like Asian, Caucasian or white, and mix in famous people's names, even male ones, to try to get a different face; also use ADetailer
How can I use epiCRealism as a base model with SDXL 1.0, and not as a 1.5 refiner, in Fooocus? And if I can't, what is the best base model to use with epiCRealism? (I am using JuggernautXL)
Go into advanced settings and look through them :D Find the one that says something like "dev mode, force model swap" and set it to 1, so at sample 1 it'll swap models to whatever you have as the refiner
What does the 1 file above do and do I just put it in the same folder as the model? The file above is "Pruned Model fp16 (1.99 GB)"
What is the major difference between Natural Sin and Pure evolution?
When adding the suggested negative prompt (normal quality:2), it always outputs the same girl
I love this checkpoint, but it is very hard not to get that same girl in EVERY shot. I wish someone knew how to stop that.
lora, ipadapter
Don't use textual inversion embeddings, such as EasyNegative. While helpful, these are the #1 cause of the sameface problem. If you still don't get enough variety, try adding "Nigerian/Irish/Portuguese/other country" to the prompt to get a specific type of face, changing the country every few gens.
The only known way to stop that is to merge the block weights of a model that produces different girls, like DreamlikePhotoReal or LiberteRedmond, though that comes with problems of its own, or you'd just be using those models instead.
I use Reactor to produce multiple copies of the same face looking at different angles, and then train a LoRA with the model.
Use a LoRA that makes different people; I can't remember the name, but it's right here on Civitai
@DobleMM RealHumans, you mean?
This is honestly by far the best model I have ever seen for realism, especially for non-portrait framing. I am constantly blown away at the quality and diversity of the images it can produce. Great work!
Awesome. Gives such realistic results, better than almost all the other models I use, specifically on portrait skin. I just love it.
Natural Sin RC1, or epiCRealism v5? Which is better?
Can I use this same RC1 model for inpainting as well?? Don't see a separate inpainting model.
I've been using this for a long time and love it.
Lately, and I'm not sure why, if I use a basic prompt like (25yo female doctor) I always get a nude woman, even with NSFW, NUDE, NAKED in my negative prompt... any ideas? I'm trying to use IPAdapter with a face and a simple prompt to make the same character in different professions. Would love some ideas on how to stop getting nude women.
I was dealing with this all day today, even when using embedding:epiCPhoto-neg. The negative prompt I ended up using that worked most effectively is: (nude:1.5), (NSFW:1.5), naked, breasts out, nipples. That's the only negative prompt I use with this model, and I get good results now.
I've been using some other models, @astr010. I guess epiC is geared towards nudity. Really liking Ican'tbelieve_newyear... I've also started using LCM models and have been getting great results.
@bigmax8999 Bro, thank you so much for the tip. I am taking a look at these now!
@astr010 @bigmax8999 Do you know epiCPhotoGasm? Try version Last Unicorn or experiment with amateurreallife
@epinikion OK, will do. I was experimenting with the same prompt across multiple models, and epiCRealism has consistently left the clothes out even when they are explicitly detailed in the prompt. I will test with epiCPhotoGasm as well.
Great quality. Among realistic models, this one has an overwhelmingly high level of realism. With other realistic models, realism drops off sharply outside the subjects they are good at, and the pictures often end up looking cheap. This model, however, maintains a very high level of realism no matter the subject matter. Very robust.
Photorealistic quality was there but I was somewhat disappointed by the female faces in terms of their attractiveness.
The inpainting is really useful for generating square photos as training data.
One of my favourite models for photorealism! Only downside is the strong face bias imho
Of the hundreds of models I've tried, I can say this is the best realistic model. The image quality for a realistic person is very good without using any LoRAs or embeddings. I hope you continue to update this model.
Good at fiction, not just photos; I wish more models did this. But it struggles with landscapes, like most realism models.
Why isn't Pure Evolution working?
Isn't working for me either.
Tech people just love it when people complain something's not working. Is the developer supposed to guess how it's not working? It could be anything from actually throwing up an error because the file got corrupted during download to it doesn't work with your prompt the way you think it should.
Did you try unplugging your computer and plugging it back in and see if that fixed it?
Why does this generate the same woman's face in every generation? I used a simple prompt (a beautiful 34 year old woman) and got the same face every time. What causes this?
It's that way for all the checkpoints. To change things up, put the name of some celebrity, but inside square brackets so it's not too strong. It should radically change the face type. If it's a face from the old days, put 'monochrome' and 'black and white' in the negative prompt. That ought to get you started.
@fablegenius Thanks for the tip!
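For context on why the square brackets tame the effect: in AUTOMATIC1111's prompt syntax, each pair of round brackets multiplies a token's attention by 1.1 and each pair of square brackets divides it by 1.1. A minimal sketch of the resulting weight (the 1.1 factor is the documented default):

```python
# Sketch of A1111 prompt-attention weighting: (word) boosts a token's
# attention by 1.1x per bracket pair, [word] reduces it by the same factor.
def bracket_weight(round_levels: int = 0, square_levels: int = 0) -> float:
    """Attention multiplier for a token at the given bracket depths."""
    return (1.1 ** round_levels) / (1.1 ** square_levels)

print(bracket_weight(square_levels=1))  # weight for [celebrity name]
print(bracket_weight(square_levels=2))  # weight for [[celebrity name]]
```

So `[celebrity]` keeps roughly 91% of the celebrity's influence, enough to shift the face type without producing a lookalike; `(celebrity:0.5)` achieves a similar effect with an explicit weight.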
The best solution is to use wildcards. First, use ChatGPT or something similar to generate names of different countries and ethnicities, then use those as wildcards to randomize your future SD generations.
example prompt: a beautiful 34 year old __ethnicity__ woman
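The wildcard substitution above can be sketched in plain Python. In practice an extension such as Dynamic Prompts does this for you from wildcard text files; the `ethnicity` list here is just a placeholder:

```python
import random

# Hypothetical wildcard table mimicking the __name__ syntax used by
# wildcard extensions: each __name__ token is replaced with a random
# entry from the matching list before the prompt is sent to SD.
WILDCARDS = {
    "ethnicity": ["Swedish", "Nigerian", "Japanese", "Brazilian", "Indian"],
}

def expand_wildcards(prompt: str, rng: random.Random = None) -> str:
    rng = rng or random.Random()
    for name, options in WILDCARDS.items():
        token = f"__{name}__"
        while token in prompt:
            # Each occurrence gets its own independent random draw.
            prompt = prompt.replace(token, rng.choice(options), 1)
    return prompt

print(expand_wildcards("a beautiful 34 year old __ethnicity__ woman"))
```

Run with a fixed batch count, each generation draws a fresh ethnicity, which spreads the face distribution far more than a single static prompt.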
@ImAbbieKitten I've been using European, Swedish, Danish, etc, but even then I think it will use the same face for each country/region. It's a shame there isn't a keyword in SD that randomizes the face specifically. We all know for a fact a lot of these models have learned hundreds of different faces, it's just very strange it prefers to only use 1 while everything else gets randomized. I guess it's just another fluke like the bad hand/feet anatomy.
@EROS1CA Another method you can use is different names, such as Emma Muller, Amber Grey, Priyanka Singh, Xia Lee.
Names like these will surely impact the generation. Try it...
@EROS1CA I don't know why, but it's true that most faces tend to be the same, almost regardless of the checkpoint. If it's face types you want, though, I stand by my recommendation over alternatives such as race or country. It can be a long process to find what you're looking for that way, but you can run some tests with the S/R function in the XYZ grid if you're using A1111, for example. Load up a bunch of celebrity names and tame them with the square brackets so the standard faces get blended with the specific faces. Sometimes SD might not "know" the celebrity you're naming, in which case it won't react. Trial and error.
@fablegenius I did unknowingly combine 2 celebrities before I started noticing this issue. You're right it seems to do a great job at creating unique faces.
Check your negative prompt for negative embeddings. Negative embeddings (e.g. EasyNegative) are a fast way to start using checkpoints, but they likely introduce their own biases to features like faces.
@EROS1CA That's good to hear. It seems to me that adding a little essence of various celebrities is easier than messing around with LoRAs in most cases. Don't forget ADetailer as well; it's a must-use feature.
Using a celebrity's name works the best for me to get a different face. You could do different names too which should change it as well.
epiCRealism stays the best checkpoint for me, so thank you very much (I use Deforum). I will try the SDXL version. But I've got a question: I've had bad artefacts on the faces for the last 10 or 15 days, without changing my parameters. Everything is homogeneous in my image except those faces. Thanks if someone has advice.
I get a lot of missing arms and such; otherwise it looks very good. Any ideas?
One of the best models, I can say. I used and tested it with many prompts and the results are damn good.
It produces the same face (all the same face in the official and user examples) with very slight variations. It doesn't matter if you put redhead, blonde, black hair, etc. Even with img2img and other tools like ControlNet, it still tries to force that same face. Almost like a horror movie. I assume some authors like certain faces, so they make them overwrite other faces more.
Is there any prompt to get this to make a somewhat realistic penis? It's amazing at drawing guys, but boy oh boy does it love to make everyone a trans man lol
No, these types of models have not been trained for this; you need to use a LoRA.
@epicPhoto it sure knows how to make a vagina tho.
Will there be an inpainting version of Natural Sin released?
Since this is an easy merge, I could do it tomorrow, but everyone can merge their own inpainting model: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/How-to-make-your-own-Inpainting-model
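The merge described on that wiki page is an "Add Difference" merge. Sketched on toy values (the model choices in the comment reflect the wiki's recipe; the numbers are made up), the per-tensor arithmetic looks like this:

```python
# Toy sketch of the "Add Difference" checkpoint merge. In the A1111
# checkpoint merger you would set (per the linked wiki recipe):
#   A = sd-v1-5-inpainting, B = your model (e.g. Natural Sin),
#   C = v1-5-pruned, Multiplier M = 1, Interpolation = Add difference.
# Per weight tensor the result is A + M * (B - C): the custom model's
# learned delta over vanilla SD 1.5 is grafted onto the official
# inpainting weights.
def add_difference(a, b, c, m=1.0):
    return [ai + m * (bi - ci) for ai, bi, ci in zip(a, b, c)]

# Made-up values standing in for one weight tensor:
a = [0.5, -0.2]   # inpainting base (A)
b = [0.7, 0.1]    # custom model (B)
c = [0.4, -0.1]   # vanilla SD 1.5 (C)
print(add_difference(a, b, c))
```

With M = 1 the result keeps the inpainting model's extra input channels and conditioning while inheriting the custom model's style, which is why the merged checkpoint inpaints in the custom model's look.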
@epinikion and for the lazy folks? :))))))
Hi! I really like the model, but it creates a nose almost twice as large as the person in the photo reference when using an IP-Adapter. Does anyone have advice on how to fix this? Sorry for my English.
With those couple of sentences, your English is clearly (and sadly) better than most native English speakers'.
Try Pinocchio as a negative prompt.
@epinikion Unfortunately, the noses are getting wider, not longer. But in case it's useful to anyone, I found that the TDLX_Nosesize LoRA helps control this parameter.
I've been looking around for a long time for the best base model, and I've got to say this is the best one I've found so far; it's so real. These look like actually real women. The clothes look realistic too; I've struggled with other models making them look fake or blurry. I don't inpaint, though, because it adds blur. The only thing I'm struggling with is getting prompts to change the colors of the clothes. I'll have to figure that out.
Does it work on ComfyUI?
Yes. It's just a checkpoint, so it will work with any SD 1.5 workflow you already have working in Comfy.
One of the best
Is there an SDXL version of exactly this model?
Thanks for this!
There are thousands of models on Civitai, but this one surpasses them all. It's great: no errors, no deformed images, and it responds perfectly to prompts. With this one, you don't need another universal model.
This is the best stable diffusion model I have found for generating realistic images. Also works great with different ethnicities of people.
Best of the best :D love u dude!
Will there ever be a new version of this?
How many pictures in your training set? and are you captioning them by hand?
Thanks for this, it helped me a lot.
Why is it the only really good one?
Hi, this is fantastic; with this model I've gotten the best images. I have a question: does anyone know if there is a checkpoint that can generate images with the characters from Fallout 4?
Will you please make an epiCRealism Flux model? :)
Great job bro!
Of course, this is a very powerful model, one of the best of its kind, with excellent responsiveness, flexibility, and a wonderful photorealistic look. The one thing I'd mention is some female NSFW focus (it sometimes bares women despite the prompt, and also turns men into women). A great model!
Biased, cannot draw men.
But this does not mean that the model is bad.
What prompts do I add if I want to control the lens angle?
Any Natural Sin inpainting version coming?
How do I get it to produce a different face?
Details
Files
epicrealism_naturalSin.safetensors