Check my exclusive models on Mage: ParagonXL / NovaXL / NovaXL Lightning / NovaXL V2 / NovaXL Pony / NovaXL Pony Lightning / RealDreamXL / RealDreamXL Lightning
Recommendations for using the Hyper model:
Sampler = DPM++ SDE Karras (or another) / 4-6+ steps
CFG Scale = 1.5-2.0 (the lower the value, the more mutations, but the less contrast)
I also recommend using ADetailer for generation (some examples were generated with ADetailer, this will be noted in the image comments).
This model is available on Mage.Space (main sponsor).
You can also support me directly on Boosty.
Realistic Vision V6.0 (B2 - Full Re-train) Status (Updated: Apr. 4, 2024):
- Training Images: +3400 (B1: 3000)
- Training Steps: +724k (B1: 664k)
- Approximate percentage of completion: ~30%
All models, including Realistic Vision (VAE / noVAE), are also on Hugging Face
Please read this! How to remove strong contrast.
To make the image less contrasty, you can apply the [Detail Tweaker LoRA] at a negative weight.
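In the AUTOMATIC1111 WebUI, a LoRA weight is set directly in the prompt with the `<lora:name:weight>` syntax, and a negative weight inverts its effect. The filename `add_detail` here is an assumption; use whatever your downloaded Detail Tweaker LoRA file is called:

```
<lora:add_detail:-1.0>
```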
I use this template to get good generation results:
Prompt:
RAW photo, subject, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
Negative Prompt:
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, UnrealisticDream
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation, UnrealisticDream
Euler A or DPM++ SDE Karras
CFG Scale 3.5-7
Hires. fix with 4x-UltraSharp upscaler
Denoising strength 0.25-0.45
Upscale by 1.1-2.0
Clip Skip 1-2
ENSD 31337
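For reference, Hires. fix re-renders the first result at an upscaled size via img2img. A small illustrative helper (not part of any WebUI) for computing the target resolution from the 1.1-2.0 upscale factor above:

```python
def hires_fix_size(width, height, upscale=1.5):
    """Target size for a Hires. fix pass: scale up, then round down to a
    multiple of 8, since Stable Diffusion latents need sizes divisible by 8."""
    return (int(width * upscale) // 8 * 8, int(height * upscale) // 8 * 8)

print(hires_fix_size(512, 512, 2.0))   # (1024, 1024)
print(hires_fix_size(512, 768, 1.1))   # (560, 840)
```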
Thanks to the creators of these models for their work. Without them it would not have been possible to create this model.
HassanBlend 1.5.1.2 by sdhassan
Uber Realistic Porn Merge (URPM) by saftle
Protogen x3.4 (Photorealism) + Protogen x5.3 (Photorealism) by darkstorm2150
Art & Eros (aEros) + RealEldenApocalypse by aine_captain
Dreamlike Photoreal 2.0 by sviasem
HASDX by bestjammer
Analog Diffusion by wavymulder
Life Like Diffusion by lutherjonna409
Analog Madness by CornmeisterNL
ICBINP - "I Can't Believe It's Not Photography" by residentchiefnz
Description
vae-ft-mse-840000-ema-pruned.ckpt included
Comments (61)
Can you add diffusers weights? Conversion doesn't seem to work.
How do I use this in SD? I notice most models have the .ckpt extension; why is this one .safetensors, and how should I use it?
Hi. A model with the .safetensors extension is used the same way as a model with the .ckpt extension.
Safe-and-Stable-Ckpt2Safetensors-GUI.v0.1.0
.ckpt can contain malicious code, whereas .safetensors can't. It's more secure.
Is ANALOG STYLE the best trigger word for architecture, or does it not matter?
Hi. It's individual. You may like the result using Analog Style, or you may like it without using Analog Style. I didn't notice a particularly big difference.
Be careful: this is a VERY highly NSFW model; almost anything will come out as nude people, even with "nsfw, nude" as the first words of the negative prompt...
A warning for my straight homies out there: If you get hard at AI generated nudes, you're turning into a robot. Turn off your pc, drink some water to prevent the change
Hello, is it working well on InvokeAI? I'm getting images with nothing in common with my prompt (like a landscape for a prompt from the examples)
(I'm new, but my other models work well)
Same on Stable Diffusion GUI, I don't know what happened.
Hi. I'm sorry for the inconvenience. It was a problem in the model, I have almost finished fixing it and will upload the corrected model soon.
I'm having the same issues here
Would be great if you could include a note about what you think is different/better in 1.3, or reasons for updating? Thanks friend
Hi, this is going to sound silly, but the new version has improved photorealism. I have a little comparison here
@SG_161222 Thank you for adding those direct compares. They show 1.2 feels less gritty, and some of them give a more pleasing look, especially in the folds of the t-shirt and head covering, but 1.3 maybe has better skin details... makes it a hard choice :)
I'm still in favor of 1.2 — 1.3 feels off to me
It is not working with NMKD GUI. The .ckpt gives an error, and the safetensors conversion generates random images.
Hi, what version of the model are you using?
@SG_161222 I use NMKD GUI 1.8.1
@SG_161222 This version was giving error:
realisticVisionV13_v13VAEIncluded.ckpt
I have also tried converting the safetensors version into ckpt, but it only generated random images.
@korner83 I have NMKD GUI 1.9.1 installed, now I will download the model and check it a second time. As soon as I check, I will answer you.
@SG_161222 Thanks for your prompt assistance, I really appreciate it.
@korner83 I checked the model in NMKD GUI 1.9.1 and it works. I also read this line in the changelogs:
Improved: Model compatibility, any Safetensors file should now work fine after conversion.
You can try to upgrade to version 1.9.1 and write if it solved your problem :)
@SG_161222 Thanks for letting me know, I will install it and give you a feedback.
@SG_161222 Thanks for your help, with the new version it's working very well!
Hi everybody. Please respond if you are using the new version in the NMKD GUI or in InvokeAI. Are you getting any errors?
First try with the pickle didn't work; it says the model appears to be incomplete. I tried the safetensors and converting it, but no luck :( it gives random pictures from book pages.
Thank you so much for your effort!!!
I used NMKD 1.9 , this is the log if it helps:
[00012747] [02-01-2023 16:49:28] Canceling. Reason: Process has errored: Failed to load model.
The model appears to be incompatible. - Implementation: InvokeAi - Force Kill: False
[00012752] [02-01-2023 16:49:29] Killing current task's processes.
[00012757] [02-01-2023 16:49:29] Canceled: Process has errored: Failed to load model. The model appears to be incompatible.
[00012776] [02-01-2023 16:49:29] ExportLoop END
[00012777] [02-01-2023 16:49:29] No images generated.
[00012778] [02-01-2023 16:49:29] SetState(Standby)
@Davilion I have NMKD GUI 1.9.1 installed, now I will download the model and check it a second time. As soon as I check, I will answer you.
@Davilion I checked the model in NMKD GUI 1.9.1 and it works. I also read this line in the changelogs:
Improved: Model compatibility, any Safetensors file should now work fine after conversion.
You can try to upgrade to version 1.9.1 and write if it solved your problem :)
@SG_161222 Thank you a lot, I can confirm the latest update fixed the issue!!
Thanks for your effort !!
@SG_161222 Hey man! The model looks interesting, but I have some issues with the installation of the inpainting version. I am using InvokeAI 2.3.0 (and someone else I talked to had a similar issue on 2.2.5). I add the model and specify the path to the custom config you provide, but when I try to activate the model I get an error, shown at the end of my comment.
I have substituted your config with the basic "v1-inpainting-inference.yaml", and it kinda works. But I can imagine it's not operating as intended, right?
Please see the error message below. Thank you very much in advance!
** model realisticVisionV13_v13-inpainting could not be loaded: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
Traceback (most recent call last):
  File "H:\Games\SD\invokeai\.venv\lib\site-packages\ldm\generate.py", line 889, in set_model
    model_data = cache.get_model(model_name)
  File "H:\Games\SD\invokeai\.venv\lib\site-packages\ldm\invoke\model_manager.py", line 106, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "H:\Games\SD\invokeai\.venv\lib\site-packages\ldm\invoke\model_manager.py", line 335, in _load_model
    model, width, height, model_hash = self._load_ckpt_model(
  File "H:\Games\SD\invokeai\.venv\lib\site-packages\ldm\invoke\model_manager.py", line 428, in _load_ckpt_model
    model = instantiate_from_config(omega_config.model)
  File "H:\Games\SD\invokeai\.venv\lib\site-packages\ldm\util.py", line 92, in instantiate_from_config
    return get_obj_from_str(config['target'])(
  File "H:\Games\SD\invokeai\.venv\lib\site-packages\ldm\models\diffusion\ddpm.py", line 2227, in __init__
    super().__init__(*args, **kwargs)
TypeError: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
I can confirm it generates random images. That confused me. I went over to Hugging Face and used that version, which solved the problem for me.
I checked the model in NMKD GUI 1.9.1 and it works. I also read this line in the changelogs:
Improved: Model compatibility, any Safetensors file should now work fine after conversion.
You can try to upgrade to version 1.9.1 and write if it solved your problem :)
This is of course if you use the NMKD GUI
PSA: Be careful with the provided suggested negative prompts; they were hindering me on some of my more creative stuff.
My fav model though
You don't need to say what you were generating, but I would like to know which negative prompts you avoided using. Thank you.
@dexxx I just generate people. I would recommend sometimes NOT using tags that affect the body, such as those in the second half of the suggested negatives, as they may interfere with getting unique looks.
I'm just saying don't always take Positive or Negative prompts as law as they may get in the way of creativity.
@TheCAL Oh, I understand. I've gone through this a few times with other models but this one had very few. Thank you for the tip.
@dexxx Just wanted to clarify for people, I meant body related negative prompts, the rest are truly good, even on any model. And the CGI/art/drawing etc. ones are preference.
Yeah, I spent the longest time trying to get something that looked like my friend. Then I remembered he has 12 fingers and a second tongue growing out of his ear. 😂
Is there an inpainting version of this?
Hi, there will be an inpainting model soon.
🔥 This is an extremely good model. I wonder how people fine-tune these models (like orig SD v1.5).
🤔 Do you use DreamBooth for fine-tuning? If so, all this new mini-dataset is learned with just a single new prompt?
Hi, I just used model merging :D
@SG_161222 Okay. But otherwise, how it's done? dream booth over a whole dataset?
@rahulbhalley There are many videos on YouTube explaining how Dreambooth works. I used it to train a model on my face; 20 photos were enough to get good results. But to train a model on a variety of objects, you need a text description for each photo. It's better to watch the videos, since I don't really understand this topic.
@SG_161222 Ahh, I get it. Thanks.
@SG_161222 can you tell what models were used?
@fbmac Hello. I used these models for mixing: HassanBlend 1.5.1.2, Protogen x3.4, Analog Diffusion, and perhaps HASDX and RealEldenApocalypse. That's all I remember. The model was originally for individual use.
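For the curious: a "model merge" of this kind is usually just a weighted sum of the checkpoints' weights, tensor by tensor. A toy sketch of the idea (not the author's exact recipe):

```python
import torch

def weighted_merge(sd_a, sd_b, alpha=0.5):
    # out = (1 - alpha) * A + alpha * B for every tensor both models share;
    # alpha controls how much of model B ends up in the merge.
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k]
            for k in sd_a.keys() & sd_b.keys()}
```

Tools like the A1111 "Checkpoint Merger" tab apply this same arithmetic across a full state dict.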
@SG_161222 Testing the recommended negative prompt and some from the samples, in automatic1111 I get "Warning: too many input tokens; some (56) have been truncated". Did you generate them with a different repo? Is there a way around this?
Hi, I'm using the following settings.
Settings -> Interrogate Options
Interrogate: minimum description length (excluding artists, etc..) = 5
Interrogate: maximum description length = 256
Maybe that's the point.
@SG_161222 THANK YOU! Mine was 24 and 48. I usually just delete the ones with the least impact; now I can try this and leave them in. I also added the upscaler you use to try it out, as I usually use remacri or universal.
@SG_161222 I tried to reproduce some of the images just to be sure and then compare the results between Realistic and deliberate for interest. I am using 1.2 but in theory 1.3 images should work right? Only a VAE added (which one?). This image I could not make the same even when adding same upscaler etc. - https://civitai.com/gallery/64096?modelId=4201&modelVersionId=6987&infinite=false&returnUrl=%2Fmodels%2F4201%2Frealistic-vision-v13
I might have to grab 1.3 and test whether it makes images a bit differently; it should be the same, right?
p.s. The setting change fixed the input token warning.
@creeduk Hi. vae-ft-mse-840000-ema-pruned.ckpt was included in the model.
If you use version 1.2 and try to recreate images from version 1.3 with it, you will get a very different result.
@SG_161222 OK, good to know, so there are more changes than just the VAE then. I think I might have confused it with another model where the next version ONLY added a VAE. You did post some differences, so 1.3 has different/more training :)
Still a useful exercise as the prompting and general image look remains so good for testing.
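Background on the "too many input tokens" warning discussed above: CLIP, the text encoder in SD 1.x models, has a hard context of 77 tokens (75 usable plus begin/end markers); AUTOMATIC1111 works around this by encoding long prompts in 75-token chunks, but other frontends may silently truncate. A small helper to check a prompt against the limit, given any Hugging Face-style tokenizer (loading the real CLIP tokenizer is left commented out, as it downloads files):

```python
def clip_token_report(prompt, tokenizer, limit=77):
    # Count the token ids the tokenizer would feed to CLIP and report
    # how many fall past the hard context window.
    ids = tokenizer(prompt)["input_ids"]
    return {"tokens": len(ids), "truncated": max(0, len(ids) - limit)}

# With the real tokenizer (downloads from the Hugging Face Hub):
# from transformers import CLIPTokenizer
# tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
# print(clip_token_report("deformed iris, deformed pupils, ...", tok))
```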
Great model!
Does anyone have any suggestions on getting it to render a man and a woman together? Even when weighting heavily towards "man" in the prompt, I get 2 women about 60-70% of the time. Otherwise, incredible model!
Hi! You can try this template: "amateur photo of 1man and 1woman"
https://postimg.cc/gallery/k5VL9rx
Great model! Anyone has any tips on how to use it with dreambooth as a base model?
Hello, sorry for the question. I am new to Automatic1111. How can I install the Hires. fix with the R-ESRGAN General WDN 4xV3 upscaler so it shows up in the setup? I was using another upscaler and noticed the picture needs more clarity.
Details
Files
realisticVisionV60B1_v13.ckpt
realisticVisionV60B1_v13.safetensors