    Check my exclusive models on Mage: ParagonXL / NovaXL / NovaXL Lightning / NovaXL V2 / NovaXL Pony / NovaXL Pony Lightning / RealDreamXL / RealDreamXL Lightning

    Recommendations for using the Hyper model:
    Sampler = DPM++ SDE Karras or another / 4-6+ steps
    CFG Scale = 1.5-2.0 (the lower the value, the more mutations, but the less contrast)

    I also recommend using ADetailer for generation (some examples were generated with ADetailer, this will be noted in the image comments).

    This model is available on Mage.Space (main sponsor).
    You can also support me directly on Boosty.

    Realistic Vision V6.0 (B2 - Full Re-train) Status (Updated: Apr. 4, 2024):
    - Training Images: +3400 (B1: 3000)
    - Training Steps: +724k (B1: 664k)
    - Approximate percentage of completion: ~30%

    All models, including Realistic Vision (VAE / noVAE) are also on Hugging Face

    Please read this! How to remove strong contrast.

    To make the image less contrasty, you can use the [Detail Tweaker LoRA] with a negative weight (e.g. <lora:add_detail:-1>; the exact filename depends on your download).


    I use this template to get good generation results:

    Prompt:

    RAW photo, subject, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3

    Negative Prompt:

    • (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, UnrealisticDream

    • (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation, UnrealisticDream

    Euler A or DPM++ SDE Karras

    CFG Scale 3.5-7

    Hires. fix with 4x-UltraSharp upscaler

    Denoising strength 0.25-0.45

    Upscale by 1.1-2.0

    Clip Skip 1-2

    ENSD 31337
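    The template and settings above can also be assembled programmatically; the sketch below builds a request body for Automatic1111's txt2img web API (this assumes the webui was launched with --api; field names follow its /sdapi/v1/txt2img endpoint, and the subject placeholder plus the chosen mid-range values are illustrative):

```python
# Sketch: assemble the recommended prompt template and settings into a
# payload for Automatic1111's /sdapi/v1/txt2img endpoint (requires --api).

def build_payload(subject: str) -> dict:
    prompt = (f"RAW photo, {subject}, 8k uhd, dslr, soft lighting, "
              "high quality, film grain, Fujifilm XT3")
    negative = ("(deformed iris, deformed pupils, semi-realistic, cgi, 3d, "
                "render, sketch, cartoon, drawing, anime), text, cropped, "
                "out of frame, worst quality, low quality, jpeg artifacts")
    return {
        "prompt": prompt,
        "negative_prompt": negative,
        "sampler_name": "DPM++ SDE Karras",  # or "Euler a"
        "cfg_scale": 5.0,                    # recommended range: 3.5-7
        "enable_hr": True,                   # Hires. fix
        "hr_upscaler": "4x-UltraSharp",
        "denoising_strength": 0.35,          # recommended range: 0.25-0.45
        "hr_scale": 1.5,                     # recommended range: 1.1-2.0
    }

# Example (not executed here):
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
#               json=build_payload("portrait of a woman"))
```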

    Thanks to the creators of these models for their work. Without them it would not have been possible to create this model.

    HassanBlend 1.5.1.2 by sdhassan

    Uber Realistic Porn Merge (URPM) by saftle

    Protogen x3.4 (Photorealism) + Protogen x5.3 (Photorealism) by darkstorm2150

    Art & Eros (aEros) + RealEldenApocalypse by aine_captain

    Dreamlike Photoreal 2.0 by sviasem

    HASDX by bestjammer

    Analog Diffusion by wavymulder

    WoopWoop-Photo by zoidbb

    Life Like Diffusion by lutherjonna409

    AbsoluteReality by Lykon

    CyberRealistic by Cyberdelia

    Analog Madness by CornmeisterNL

    A-Zovya Photoreal by Zovya

    ICBINP - "I Can't Believe It's Not Photography" by residentchiefnz

    epiCRealism by epinikion

    Juggernaut by KandooAI

    Description

    Recommended for use with VAE (to improve the quality of the generation and get rid of blue artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original

    Comments (216)

    dilletantishMar 26, 2023· 2 reactions
    CivitAI

    You recommend using the VAE (https://huggingface.co/stabilityai/sd-vae-ft-mse) with version 2. How do we convert that file "diffusion_pytorch_model.bin" to something Automatic1111 will read/load? I have the standard vae-ft-mse-840000 file that I use with most models. Is this the same as the one you recommend, or different? Thank you! :)

    SG_161222
    Author
    Mar 26, 2023· 3 reactions

    Oops, I put the wrong link. Yes, it should be vae-ft-mse-840000.

    dilletantishMar 26, 2023· 1 reaction

    @SG_161222 Thanks for the quick reply. Love your model and appreciate your work. :)

    leiyangcl781126Apr 1, 2023

    @SG_161222 So what about diffusion_pytorch_model.bin? Should we load it in Automatic1111? It's mentioned on huggingface.co.

    SG_161222
    Author
    Apr 1, 2023

    @leiyangcl781126 Hi! I updated the link (https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main), you need to download a file called "vae-ft-mse-840000-ema-pruned" (ckpt or safetensors). Then you need to put this file in the following path: Automatic1111/models/VAE. Then you run Automatic1111, go to Settings -> Stable Diffusion and from the "SD VAE" dropdown list, choose that VAE file and then "Apply Settings". That's all you need to do.

    leiyangcl781126Apr 2, 2023

    @SG_161222 Thank you very much, I am going to have a try. So the diffusion_pytorch_model.bin is NOT required? Because I searched my Automatic1111 folders, I suspected I made some mistake during installation.

    dilletantishApr 2, 2023· 1 reaction

    @leiyangcl781126 He was saying that the vae-ft-mse-840000 (the updated link) is used INSTEAD of the diffusion_pytorch_model.bin. :)

    SG_161222
    Author
    Apr 2, 2023

    @leiyangcl781126 Only "vae-ft-mse-840000-ema-pruned" is needed.
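    The install steps described in this thread boil down to copying the downloaded VAE into the webui's models/VAE folder and selecting it in Settings. A minimal sketch (the paths are illustrative, not part of the original instructions):

```python
import shutil
from pathlib import Path

def install_vae(downloaded: Path, webui_root: Path) -> Path:
    """Copy a downloaded VAE file into <webui>/models/VAE so that it
    shows up in the Settings -> Stable Diffusion -> 'SD VAE' dropdown."""
    vae_dir = webui_root / "models" / "VAE"
    vae_dir.mkdir(parents=True, exist_ok=True)
    target = vae_dir / downloaded.name
    shutil.copy2(downloaded, target)
    return target
```

    After restarting the webui (or refreshing the VAE list), choose the file from the "SD VAE" dropdown and hit "Apply Settings".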

    carolinavolodinaMar 27, 2023· 3 reactions

    What's the optimal training process for this model using dreambooth for custom face? I did 2000 steps with 17 images. But the result wasn't as good as in version 1.3 where I used the same settings and pictures.

    jonesaidMar 27, 2023· 1 reaction

    What's new in v2.0?

    SG_161222
    Author
    Mar 27, 2023· 5 reactions

    Slightly improved anatomy, improved skin texture, improved shadows.

    44071Mar 27, 2023· 1 reaction

    Is this the same as 1.4 Beta? If we downloaded that, should we switch to 2.0?

    44071Mar 27, 2023

    @SG_161222


    SG_161222
    Author
    Mar 28, 2023· 2 reactions

    @anonfaker Hi, sorry for the late reply. I couldn't use CivitAI properly yesterday to reply. Comparing 1.4 and 2.0: they are not the same, and 2.0 is better. It's up to you to choose 1.4 or 2.0. In 2.0 the anatomy, skin texture, and shadows have been improved.

    44071Mar 28, 2023

    @SG_161222 thanks for the clarification!

    FansMar 28, 2023· 1 reaction

    According to your recommendation (https://huggingface.co/stabilityai/sd-vae-ft-mse-original), the file I downloaded is a ckpt file. How can I use it in conjunction with the Realistic Vision V2.0 model?


    SG_161222
    Author
    Mar 28, 2023

    Hi, what GUI do you use? In my case, I use Automatic1111.

    nickypine251Apr 4, 2023

    @SG_161222 what if i use webUI?

    SG_161222
    Author
    Apr 4, 2023· 4 reactions

    @nickypine251 I think that webUI is Automatic1111. You need to download a file called "vae-ft-mse-840000-ema-pruned" (ckpt or safetensors). Then you need to put this file in the following path: Automatic1111 (webui)/models/VAE. Then you run Automatic1111 (webui), go to Settings -> Stable Diffusion and from the "SD VAE" dropdown list, choose that VAE file and then "Apply Settings". That's all you need to do.

    nickypine251Apr 7, 2023· 1 reaction

    @SG_161222 Appreciated it!

    fd_yMar 28, 2023· 1 reaction

    Really liking your work there with the V2 and what others have done with it so far in the comments.

    Curious though, while I've done some rather easy "baking in" of vae.pt files before in Automatic, how does the process work for the V2 model with the recommended .ckpt/.safetensor files from the huggingface link?

    I don't suppose it's....taking model A/B, A being the model and B the vae.ckpt, no interpolation and copying config from B, or?

    SG_161222
    Author
    Mar 29, 2023· 1 reaction

    Thanks for the review of the model :)

    The process of baking VAE into the model is simple (I used Automatic1111): https://postimg.cc/mtCRnpz4
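    For anyone curious what that merger step is doing under the hood: "baking" a VAE replaces the checkpoint's first_stage_model.* tensors with the standalone VAE's weights. A rough sketch using plain dicts in place of real state dicts (the key prefix assumes an SD1.x ldm-style checkpoint; this is an illustration, not the exact Automatic1111 code):

```python
def bake_vae(model_sd: dict, vae_sd: dict) -> dict:
    """Return a copy of model_sd whose VAE weights (keys prefixed with
    'first_stage_model.') are replaced by the standalone VAE's tensors."""
    baked = {k: v for k, v in model_sd.items()
             if not k.startswith("first_stage_model.")}
    for key, tensor in vae_sd.items():
        baked[f"first_stage_model.{key}"] = tensor
    return baked
```

    With real checkpoints you would load both files (torch.load or safetensors), run this over their "state_dict" contents, and save the result.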

    cbr1Mar 29, 2023· 2 reactions

    noob question:

    why does 1.3-inpainting need a .yaml config file but 2.0 doesn't?

    SG_161222
    Author
    Mar 29, 2023· 1 reaction

    Hi! As it turns out, the .yaml file is not needed at all for SD1.5 based models. Therefore, this file is not needed for both 1.3 and 2.0.

    Alex225Mar 29, 2023· 2 reactions

    hands train please!!

    SG_161222
    Author
    Mar 29, 2023· 5 reactions

    Hi! I will try to improve the quality of hands generation in the future.

    Alex225Mar 29, 2023· 2 reactions

    @SG_161222 thanks friend!

    Alex225Mar 29, 2023· 2 reactions
    Why, when I use your model to train my face, is the file 1.99 GB? I use fast-DreamBooth.
    SG_161222
    Author
    Mar 29, 2023· 1 reaction

    Perhaps the model is saved in fp16 and not in fp32.

    Alex225Mar 29, 2023· 1 reaction

    @SG_161222 thanks!
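    (For reference, the ~1.99 GB figure is exactly what fp16 storage predicts. The parameter count below is an assumption: roughly 1.07 billion weights for a full SD1.5-class checkpoint.)

```python
# Approximate checkpoint sizes for an SD1.5-class model:
# fp16 stores each weight in 2 bytes, fp32 in 4 bytes.
params = 1.07e9                  # approx. total weights (assumption)
fp16_gib = params * 2 / 2**30    # -> about 1.99 GiB
fp32_gib = params * 4 / 2**30    # -> about 3.99 GiB
print(round(fp16_gib, 2), round(fp32_gib, 2))
```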

    xhorxhiMar 29, 2023· 19 reactions

    For some reason only this ckpt is returning

    "NansException: A tensor with all NaNs was produced in Unet."


    I tried adding --no-half or --no-half-vae or --disable-nan-check

    but nothing helps

    How can I make this work? All other ckpt I tried from civitai work fine

    LostIdeasMar 29, 2023

    Yeah I've gotten the same error on all the versions, not sure why.

    IVANINAVIMar 29, 2023

    same error

    MegachillMar 29, 2023· 2 reactions

    same issue 4090rtx

    warwolf1951485Mar 29, 2023

    Same here, also got this on Aly's mix as well; mine won't even generate an image.

    djlostsamuraiMar 29, 2023

    same error

    anonguy28Mar 30, 2023

    Same error, version 1.3 works fine.

    frameshifterMar 30, 2023

    Same here

    Swamii7Mar 30, 2023

    I too am unable to generate any images with this model. I'm using the Draw Things app on Mac and it starts but then almost immediately stops generating.

    175737Mar 30, 2023

    Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion

    Apply Settings and Reload UI

    frameshifterMar 30, 2023

    @ovalshrimp No joy.

    warwolf1951485Mar 30, 2023

    @ovalshrimp did not work

    Swamii7Mar 30, 2023· 3 reactions

    Cannot get any generations at all; tried all three versions. Similar to xhorxhi and a bunch of others that replied to them. I'm using the Draw Things app on Mac, plenty of juice under the hood (M1 Ultra). Each version just stops generating after about a second.

    Tried on multiple versions of Draw Things too.

    Version 1.20230318.0 (1.20230318.0)

    Version 1.20230305.0 (1.20230305.0)

    Have never had a model that wouldn't generate. Tried to redownload your models again and reinstalled, got the same issue. 😭 Bummer. Hope somebody can figure it out.

    Swamii7Mar 30, 2023· 1 reaction

    Seems it's just a download issue from CivitAI; will redownload and try again.

    From the mod at Civitai on discord

    "We just resolved an issue affecting files downloaded in the last 4-5 hours. If you're experiencing issues with things you've downloaded, please download them again now. To the individuals that jumped in to bring this to our attention, thank you!"

    AstreaPixieMar 30, 2023· 2 reactions

    I'm getting a big error in the console with 2.0, can't generate anything with it.

    SG_161222
    Author
    Mar 30, 2023· 1 reaction

    Hi all!
    Anyone who has the error "NansException: A tensor with all NaNs was produced in Unet.", please try downloading the model from Hugging Face (https://huggingface.co/SG161222). I'm downloading 2.0 right now to check.
    What could be the problem is also discussed here:
    https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6923#issuecomment-1489520816

    UPD1: very strange, but I downloaded 2.0 and 2.0-pruned and it works for me, maybe the problem is solved.

    ShashinSugoiMar 30, 2023

    I have that exact error with v2.0-inpainting

    SG_161222
    Author
    Mar 30, 2023

    @ShashinSugoi Try downloading from Hugging Face. That should work.

    719289Mar 30, 2023

    Thank you for providing this info. Was confused by the sudden failures.

    huggingface works for me, but not civitai, even after they said the issue with the corrupted downloads was resolved.

    SG_161222
    Author
    Mar 31, 2023

    @rqv6hs+9f1h8owhwa4rk719 I re-uploaded version 2.0 to the site, so it should work correctly now. I will take care of version 1.3 after my work.

    DoctorTrollApr 5, 2023

    Hey! What does "ENSD 31337" mean?

    Swamii7Mar 30, 2023· 1 reaction

    Seems to be a problem on civitai end according to their discord.

    "We just resolved an issue affecting files downloaded in the last 4-5 hours. If you're experiencing issues with things you've downloaded, please download them again now. To the individuals that jumped in to bring this to our attention, thank you!"

    rasterfixateMar 30, 2023· 1 reaction

    How do you find a particular version of the checkpoint? Most of these reference model hash: c35782bad8 but after downloading ALL of them, none of the hashes match.

    HexusMar 30, 2023· 2 reactions

    That hash matches realisticVisionV13_v13VAEIncluded.safetensors, which I have a copy of, but it appears to have been removed from the available downloads. I'm not sure why.

    For a while there were a series of checkpoints that included the VAE. With 2.0's release, now I'm just using it with the VAE configured in Automatic1111 again.

    I found the embedded VAE convenient, because I didn't have to enable/disable the VAE in settings for different checkpoints, remember which ones needed it, etc.

    Either way, you'll be able to get the same results as this checkpoint if you use Realistic Vision 1.3 with the VAE configured.

    SG_161222
    Author
    Mar 31, 2023· 1 reaction

    Versions 1.3 / 1.4 / 2.0 are also on Hugging Face (https://huggingface.co/SG161222).
    I will re-upload the model on CivitAI.

    rasterfixateMar 31, 2023

    @SG_161222 Thanks. I'll keep an eye out for the upload. The files at huggingface don't have the matching hash.

    rasterfixateMar 31, 2023· 1 reaction

    @Hexus I've been using the  vae-ft-mse-840000-ema-pruned.ckpt VAE and they're still way way off. Hm.
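    One way to check a download locally: the 10-character model hashes shown in generation info are the first 10 hex digits of the file's SHA-256 (the newer Automatic1111 scheme; the legacy 8-digit hash used a different partial-read method). A small sketch:

```python
import hashlib

def short_hash(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 the whole file in chunks and return the first 10 hex
    digits, matching the hashes shown in webui generation info."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()[:10]
```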

    roseyyyai2022Mar 31, 2023· 2 reactions

    Decent... But it's still a little too messed up with animals, especially when I try to generate a human holding some kind of pet of theirs, like a hamster, rabbit, or cat... Do I have to use the trigger keywords, btw?

    SG_161222
    Author
    Mar 31, 2023

    Thank you for your comment! Keywords are optional.

    I will try to fix this problem in the future.

    SepantmanApr 1, 2023· 1 reaction

    my images are black, all of them, please help

    SG_161222
    Author
    Apr 1, 2023

    Hi! Please tell me which version of the model you are using? You can also try to use the model from here: https://huggingface.co/SG161222

    SepantmanApr 2, 2023

    @SG_161222 hi! i'm using realisticVisionV20_v20.safetensors

    SG_161222
    Author
    Apr 2, 2023· 1 reaction

    @Sepantman There was a failure on CivitAI which corrupted the model, I have already re-uploaded the model to the site. Have you had the same error with other models?

    SepantmanApr 2, 2023· 1 reaction

    @SG_161222 No sir, I downloaded the same model from the link that you provided and it's working like a champ. Thanks a lot for the support and the model.

    whoknows1Apr 5, 2023

    What problem do you have with black models

    2988Apr 1, 2023· 1 reaction

    Great realistic model, better than most, the main thing I think that could be improved in future versions is the skin textures, this model tends to make the skin too smooth looking for my taste.

    I have examples of using the same prompt on basil mix vs this model and the basil mix has more skin detail such as freckles etc.

    Other than that it does great images!

    SG_161222
    Author
    Apr 1, 2023· 2 reactions

    Hi! Thanks for your comment! I will consider your wish in a future update :)

    pillory04Apr 6, 2023· 1 reaction

    Yes! This is my go-to checkpoint as I get the best results using it, but the skin is too perfect; it's like there's an Instagram filter applied.

    OnvisiApr 9, 2023

    I've found that to be true of every model I've used, do you have recommendations for models that don't airbrush everything?

    FreewillApr 3, 2023· 5 reactions

    Hi,

    Is it legal to use the generated human portraits/faces commercially if I credit you and the model name?

    SG_161222
    Author
    Apr 3, 2023· 8 reactions

    Hi, you can use the generated images for commercial purposes. You do not have to specify the author of the model and the name of the model.

    FreewillApr 3, 2023· 1 reaction

    @SG_161222 Thanks!

    megamakerApr 7, 2023

    you don't have to credit anyone

    cvision1777460Apr 4, 2023

    NansException: A tensor with all NaNs was produced in VAE. Use --disable-nan-check commandline argument to disable this check.

    anhelmlife439Apr 10, 2023

    re-download the model or switch to another model and then switch back

    NektarineApr 5, 2023· 1 reaction

    Is it possible to train with 768p images?


    Hi, I wanted to know if this model was trained in 768p or 512p. Because I have a dataset in 768p and I didn't know if I should train it in that resolution or downgrade it to 512p. Thanks!

    SG_161222
    Author
    Apr 8, 2023

    Sorry for the very long delay in replying.
    Yes, you can train the model in 768p.

    kaiching0108Apr 5, 2023· 4 reactions

    Is there any prompt that can keep both the portrait and the background in focus? I have tried (blurry background) in the negative prompt but it doesn't work.

    BillBleikApr 7, 2023

    don't really know about this ckpt exactly, but try putting a big lens like 120mm or above, sharp focus, stuff like that in the prompt. Putting blurry ,DOF, depth of field, unsharp(?) in negative will probably help.

    grafikzeugApr 7, 2023

    In photography, a large aperture number (= small aperture) usually creates a deeper focus. Try adding "f22, f16, focus stacking, small aperture" to positive prompts and "bokeh, depth of field, dof, narrow focus, blurry, out of focus" to negative prompt.

    92056Apr 12, 2023

    It seems the model has been trained on portraits with blurry backgrounds. I tried all the prompts, but no success so far.

    kaiching0108Apr 13, 2023

    Me too, no success; it seems portraits always have a blurry background.

    ertrt499Apr 8, 2023· 2 reactions

    Model is fantastic for a lot of shit.

    Utter trash for sex acts, artifacts a lot.

    SG_161222
    Author
    Apr 8, 2023

    Thanks for the comment! I'll be fixing that in future updates!

    OnvisiApr 9, 2023· 1 reaction

    Ignorant question: I understand what trigger words do with Loras, but what do trigger words do with a ckpt? Does your listing "nsfw" as a trigger word mean if I put "nsfw" it will be particularly likely to make an nsfw image? Or something other than that?

    15604Apr 14, 2023

    The words you put in are represented in a "latent space" where similar things are next to each other. It's a bit complicated to understand, but luckily it's very intuitive to use. Just ask for what you want. Big ol' tiddies? "big breasts". In a field? "in a field". The best thing to do is read others' prompts.

    OnvisiApr 18, 2023

    @auserofthisname I think you're answering the question "how do SD models work" but what I asked was what role trigger words have for checkpoints as opposed to loras. Four trigger words are listed for Realistic Vision above, and I'm curious about whether that means those words do something special over and above normal prompting, or not.

    newpollutionApr 20, 2023

    @Onvisi trigger words are tokens the model has been specifically trained on. So this model might not respond to 'Nikon retro' as well as 'analog style' because the training class/instance captions specifically included 'analog style'. Being part of the model, they're less specific than LoRAs, but you can still weight the trigger words with parentheses or decimal values.

    Picky_FlickyApr 10, 2023· 1 reaction

    Can anyone explain what exactly a Lora is and how they are used? Other than the explanation of a quickly trained model as I don't really get that answer. I see some really cool images that have Lora-stuff in them, but I'm not sure what their purposes are...thanks!

    gwooflexApr 10, 2023

    I'm not the most knowledgeable on this, but LoRAs are essentially smaller trained models that you can add on top of a prompt. They cover topics ranging from specific people to art styles that the checkpoint you're using isn't trained on. You download them and add them to your webui model folder.

    Gamemaster2022Apr 10, 2023

    LoRAs are simply mini-models that are trained on specific data. If you are interested in their training or simply AI stuff, I made tiny discord server focused on this kind of stuff. Let me know if you are interested

    Syn_tzuApr 12, 2023

    @IAmSteve what's your discord?

    certifiedDocApr 13, 2023· 2 reactions

    Loras are small networks that are trained on specific subsets or aspects of images. They work more or less like a normal stable diffusion model, but they're far more specific. They go in the models/lora folder in stable diffusion. You use them by adding their tag to your prompt.

    Lora tags look like this: <lora:exact_name:1.0>, where exact_name is the name of the file for your Lora (excluding the file extension), and the number is how much you want it to affect the results. You can find plenty of examples of such tags on images here.

    Automatic's webui also lets you click on Loras instead of typing them in. They show up when you click the Show/Hide Extra Networks button above the Styles drop-down.
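    The <lora:exact_name:weight> tags described above are regular enough to pick out mechanically; a small illustrative sketch (the LoRA names in the example are made up):

```python
import re

# Matches <lora:name:weight> tags as used in Automatic1111 prompts.
LORA_TAG = re.compile(r"<lora:([^:>]+):(-?\d*\.?\d+)>")

def extract_loras(prompt: str) -> list[tuple[str, float]]:
    """Return (name, weight) pairs for every LoRA tag in the prompt."""
    return [(name, float(w)) for name, w in LORA_TAG.findall(prompt)]

print(extract_loras(
    "photo of a castle <lora:add_detail:-0.5> <lora:epi_noiseoffset:1.0>"))
# [('add_detail', -0.5), ('epi_noiseoffset', 1.0)]
```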

    Gamemaster2022Apr 23, 2023

    @Syn_tzu Gamemaster2022#2506

    orochimaruApr 10, 2023· 1 reaction

    The link for the Full Model fp16 (3.59 GB) is incorrect; it returns only the safetensors file.

    darko987darko568Apr 12, 2023· 2 reactions
    Does anyone know where to find the .yaml config file? It works better for me.
    fox23vang226Apr 13, 2023· 2 reactions

    This is my favorite ckpt model. It generates the most beautiful and realistic women I've found so far of the 7 other models I've downloaded.

    ElmundoApr 14, 2023

    Hi, I've got a hair problem...
    Let me explain: every time I blur somewhere near the hair, the AI then proceeds to put way too much hair in the result... I tried some prompts but got no good result. Any ideas, please?

    RomangeloApr 14, 2023· 1 reaction

    This thing seems to love making half body portraits. How can I stop it from doing that? Like 80% of generated images are half body portraits.

    AlephGatesApr 15, 2023

    Are you doing txt2img (vs img2img)? What dimensions are your images set at (512x512 or 512x768 etc)

    RomangeloApr 15, 2023

    @alephgates103 txt2img 768x1024.
    I added (full body:1.6) or 1.8 and it seems to help a little.

    PandoraChaosApr 14, 2023· 2 reactions

    This model is by far the best for realistic women. Not sure if it is a stable diffusion thing but it seems to avoid creating feet & heels

    godracoApr 15, 2023· 1 reaction

    Thank you very much for the good model. That's cool. Can you tell me how to make a checkpoint file? I would appreciate it if there is a related link address.

    SG_161222
    Author
    Apr 19, 2023

    Hi! To create this model, I used the merge method with other models in Automatic1111.

    godracoApr 20, 2023

    I see. Thank you for your reply.

    rwr2qsApr 15, 2023· 2 reactions

    How do we get the config file?

    SG_161222
    Author
    Apr 19, 2023

    Hi! This model is based on SD1.5 which does not need a config file.

    torealiseApr 15, 2023

    How can I make my own model like this? What tools should I use?

    joao131607910Apr 17, 2023· 3 reactions

    Stable Diffusion

    praxis22Apr 16, 2023· 1 reaction

    At this point it appears you need to be a photographer, just as early versions required you to be an artist. More stuff to learn. Great model btw.

    zoooz77Apr 18, 2023

    Which one gives more realistic images, the inpainting version or non-inpainting?

    And why doesn't Realistic Vision v2.0 give crisp and high quality images like before?

    SG_161222
    Author
    Apr 19, 2023· 1 reaction

    Hi! I recommend that you use the "non-inpainting" version for generating images.

    Maybe you need to use the VAE which I specified in the description of version V2.0. VAE gives better contrast and sharpness of the generated image.

    zoooz77Apr 20, 2023

    @SG_161222 Which one (VAE), safetensors or ckpt?

    And what is the inpainting version used for?

    SG_161222
    Author
    Apr 20, 2023

    @zoooz77 I use VAE in .safetensors format.

    Inpainting models are used for inpaint and outpaint operations.

    zoooz77Apr 20, 2023

    @SG_161222 Why can't I generate high quality pictures? They look low quality & a bit blurred. Any advice?

    liujianzuo888888947Apr 19, 2023

    It would be even more perfect if it could do Asian people. It would be even more impressive.

    KJohnnyApr 19, 2023

    Try adding or changing keywords.

    zhx119911Apr 19, 2023

    Can this be used for img2img? With inpainting, the skin tone always differs from the original; URPM doesn't have this problem. What's going on?

    zhx119911Apr 19, 2023

    I don't know why, but the skin color always differs from the original character in img2img mode. URPM can do it, but I want to try different checkpoints. Inpainted skin tone always comes out off-color; does anyone have a fix? URPM doesn't have this problem.

    loafkeeperApr 20, 2023· 2 reactions

    This model says to use camera brands and lens sizes, but in my experience all that does is add that text or camera lenses to the photo. This would explain why most of my first gens had text even with plenty of negative text filters, including the words "Nikkon", "RAW", and "Fuji" on shirts or signs in the generated image. This model seems to handle very brief prompts better than adding all of the usual (best quality) mumbo jumbo.

    parallelepipedonApr 25, 2023

    I just had this happen for the first time with v1.4 of this model! I got a telephoto lens or a camera body in some of my gens. I used "Nikkor" and "Nikon" in my prompt. I used to always get Hiper, Hipper, etc as text when I used a friend's prompts with "Hyperrealistic". I always wondered if that meant it didn't understand it. But I've seen some weird typos like "3555mm". We never had telephotos THAT long in my day. 🤣 Nor negatives that large. I guess it's 35mm and 55mm squished together. I never see that on a shirt though. 🧐

    ElectrovertedApr 25, 2023· 4 reactions

    I did extensive testing on detailed photography gens using a comic book character lora and as many "realistic" prompts as I had to make the character look real. Here's the ones that worked very well: high quality photograph, high skin detail.

    I tested over a dozen HD prompts, and yep, just those two made the biggest difference. So aim "high"

    parallelepipedonApr 26, 2023

    @Electroverted Excellent! "high" makes much more sense than "hyper". Did your extensive testing happen to involve placement within the prompt? Using it up front vs. middle or at the end?

    vlkApr 29, 2023

    I've been experimenting with lens names in the prompt. "Petzval", "Helios 44-2" and "Holga" have a noticeable effect.

    ElectrovertedApr 29, 2023

    @parallelepipedon I'm sure placement makes a difference, but that would be a lot of testing! From what I hear, start and end prompts have more strength. My detail prompts are at the end.

    blu3bugApr 21, 2023· 3 reactions

    A Korean company called "sporki" appears to be using your model for profit.

    nickhguurApr 22, 2023· 2 reactions

    Well, his model is the work of many different models.

    Plus, some of the checkpoints have been trained with copyrighted material

    wyeelApr 22, 2023

    Wait how do you know that?

    Scooter69Apr 27, 2023

    the license information and what is allowed and what is not allowed is located in the bottom right under that download box area.

    pervyguyailolApr 21, 2023

    Hi, is it possible to get a full body view with this model? I never get a full body with head, body, legs and feet; the feet are always missing.

    DoNotReplyApr 22, 2023

    same issue and question. i am also hoping for someone to shed some "light" into this question! :)

    fancypixartApr 22, 2023

    Write in the prompt, for example: feet shod in strappy heels, etc.

    certifiedDocApr 24, 2023· 1 reaction

    More detail in your prompt can usually help include what you want. Adding "feet" to the prompt will improve the chances that they're included.

    If part of the image is still cut off, you can use Outpainting to add it back in. With Automatic, you can set one of the img2img Outpainting scripts to extend in the direction that's missing.

    OpenPose + ControlNet works every time, guaranteed. You can also use revAnimated to generate a full-body pose, then run it through img2img + this model.

    893900May 4, 2023· 2 reactions

    Set your size to 512x768 and you will be more likely to get a full body image.

    AiShawArtMay 7, 2023

    add: ((full_body)) to your positive prompt.

    add: ((out of frame)) to your negative prompt.

    loafkeeperMay 8, 2023· 1 reaction

    Change the resolution of the generated image. When I switched from 512x512 to 512x768 I started getting more full body framing. Also, include at the beginning "full body photo of xyz". I do batches of 8 and probably 80% will be full body.

    DoNotReplyMay 11, 2023

    @loafkeeper 512x768 is a good tip; thanks, it definitely helped generate images that are in portrait orientation!

    lulaman569Apr 23, 2023· 11 reactions

    This thing is hard to prompt. Initially it only made Asian people; it took me hours to figure out that I had to include "asian" in the negative. I'm almost giving up on this model.

    UnionJackedApr 24, 2023· 10 reactions

    And you never thought of adding the nationality you want as a positive? Example "An English Male" or "An African woman" .... I cannot believe you're moaning about this when it's pretty simple.

    lulaman569May 6, 2023· 4 reactions

    Forget it, I was too hasty, I'm using it and it's great. I apologize for the negative comment.

    limblessgirl4May 7, 2023

    I don't ever prompt for nationality, and I am getting mostly caucasian.

    1331711Apr 24, 2023· 8 reactions

    I really liked v 1.3 and v 1.4 -- my most used and favorite models so far.

    v 2.0 is giving me women with less feminine and more androgynous/manly features (thick necks, broad torso/shoulders, squarer jaws) and also exaggerating/adding childish features on adults (de-aging adults faces) to appear pre-pubescent.

    I do not like these changes in v 2.0

    EZorgApr 25, 2023· 3 reactions

    This is pretty good at landscapes as well. Done some great ones for my Cthulhu campaign!

    ElectrovertedApr 25, 2023· 3 reactions

    This is my primary photorealism model. But I feel like the female body types need more variety. The thick categories aren't very thick. Unless I'm doing something wrong?

    Try using ControlNet with it using the appropriate input image. That should help.

    stableeMay 1, 2023

    I can't get this to work. Any prompt I use just gives me a pixelated picture with nothing on it. I have vae, but it doesn't do anything. Any ideas?

    KiefstormMay 3, 2023

    Do you have any error message in your console? For instance this just happened to me when controlnet was enabled but no pic was present, there was an error pointing me to the problem in the console

    JS_NODOSMay 4, 2023· 1 reaction

    Did you enable the recommended VAE?

    coolnessityMay 8, 2023

    Because you are using an Intel HD 4000 GPU, bro

    phillabaule359May 13, 2023

    You probably have an embedding weight at 15 instead of 1.5. Then no matter what you do, you're gonna get pixels or... spaghetti ^_^

    90SMay 6, 2023· 13 reactions
    CivitAI

    When I type 1girl, the resulting images are all little girls around 6 years old, which is terrible

    AiShawArtMay 7, 2023· 3 reactions

    So, modify your prompts to remove this issue.

    You are responsible for what is inputted into the model for creation. You need to be more specific.

    timgasper0804307May 7, 2023· 1 reaction

    "girl" means female child; say "woman" or "teen, 18+"

    cactusmanMay 7, 2023· 2 reactions

    "1girl" is a booru tag; it's only relevant for anime models or models that imitate booru tags. In this one, use "woman" if you want to generate an adult; typing in an age can help too if you want to be more specific. Check other images posted on this page to get an idea of how to prompt for a model like this.

    90SMay 7, 2023

    @AiShawArt I'm not a newbie, I'm just testing this model. In my experience with other realistic models, the images produced by "1girl" are almost always pictures of girls over the age of 16.

    90SMay 7, 2023

    @cactusman I know that typing other words or adding some negatives can generate mature women, but I haven't encountered this in other models: six-year-old girls generated by the word "1girl".

    AntaresDahaMay 10, 2023

    Well, you expect that outcome because the other models are biased. A GIRL is a child; a WOMAN is an adult. Those words aren't even ambiguous. So a prompt for a girl creating an actual girl is more accurate. Don't blame this model for creating a girl; blame the other models for being so incredibly (porn) biased that asking to generate a girl doesn't generate a child.

    kusogeMay 10, 2023

    @90S and those realistic models mostly suck at generating different ages.

    DoomenatorMay 10, 2023· 2 reactions

    Try using "child", "childish", "underage", "loli", etc. in the negative prompt. Do it literally all the time, no matter what. It's pretty jarring when you forget and you get weird stuff in the outputs, especially if you are doing NSFW stuff and you really don't want that to happen.

    Also, I've noticed "girl" sometimes gives young-looking people even if I specify something like 25 years old. So try to use the word "woman" instead. Not all models have this issue, but these tips should help.

    abadoxxaMay 12, 2023· 1 reaction
    CivitAI

    wow I thought chillout was the best model and then I found this

    MrUreganchoMay 14, 2023
    CivitAI

    I keep getting multiple people partially fused together. Saying "1 girl" or "1 man" in the prompt appears to do nothing.

    InoSimMay 16, 2023

    Add "fused bodies" to the negative prompt in parentheses with a good weight like 1.3 or 1.4, e.g. (fused bodies:1.4).
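    For reference, in Automatic1111 a parenthesized weight like (fused bodies:1.4) scales the attention paid to that phrase by 1.4x. A toy sketch of how such a prompt decomposes into (text, weight) pairs; this is a simplification, as the real WebUI parser also handles nesting, escapes, and [de-emphasis]:

    ```python
    import re

    # Toy sketch of A1111-style "(text:weight)" emphasis syntax (simplified).
    def parse_weighted(prompt):
        """Split a prompt into (text, weight) pairs; plain text gets weight 1.0."""
        pairs = []
        for chunk in re.split(r"(\([^()]*:[0-9.]+\))", prompt):
            m = re.fullmatch(r"\(([^()]*):([0-9.]+)\)", chunk)
            if m:
                pairs.append((m.group(1), float(m.group(2))))
            elif chunk.strip(" ,"):
                pairs.append((chunk.strip(" ,"), 1.0))
        return pairs

    print(parse_weighted("photo of a woman, (fused bodies:1.4)"))
    # [('photo of a woman', 1.0), ('fused bodies', 1.4)]
    ```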

    WeirdDreamerMay 24, 2023

    Model based on BooruDatasetTagManager, so no need space

    vakhariaraj27280May 15, 2023
    CivitAI

    I always get cloned figures, even when I write it in the negative prompt. Please help.

    InoSimMay 16, 2023

    Can you show an example? I don't have any problems with this model.

    hihhihMay 22, 2023

    randomize the seeds

    Doris_umaMay 17, 2023
    CivitAI

    Does anyone know how much data was used to train this model? Such as how many images and how many steps were used, and the size of the images used to train it? (translated)

    WeirdDreamerMay 24, 2023· 1 reaction

    Tested with 2.4K images at 1024x1024, default kohya_ss settings and batch size 2.
    I would say I managed to improve quality, but failed with persons (too many similar faces).
    So I will replace some images for sure...

    Doris_umaMay 30, 2023

    @pastuh Thank you for the data you provided. I used 512x512 size, and my other training settings are the commonly used ones, but the pictures generated by the trained model are very blurry. Would a 1024x1024 size give better results? But my computer doesn't seem to support validating my idea, so... (translated)

    Doris_umaMay 18, 2023· 1 reaction
    CivitAI

    Hi, SG_161222! I hope that you, who made these exquisite models, will see my question. Can I ask how many images and how many steps were used to train the model, and the size of the images used to train it? Please! (translated) (´▽`ʃ♡ƪ)

    SG_161222
    Author
    May 18, 2023

    Hi! This model is the result of merging a large number of models. So I can't tell you about the number of steps and the size of the images used for the training.

    Doris_umaMay 19, 2023

    @SG_161222 Thank you very much for your honest answer~ I'm currently trying to learn Stable Diffusion and to train beautiful models, but the results are not satisfactory. I wondered if the problem was the number of images I used for training, because I only used a hundred images and the outputs were very scary. I don't know who can answer my question; maybe a thousand pictures would solve it? I will continue to try. Thank you again for your answer! o(〃^▽^〃)o

    i_was_hereMay 19, 2023

    @SG_161222 I am curious about how you merged the models... I am just a newbie here, so it would be great if you could also explain what exactly merging models means 😅

    SG_161222
    Author
    May 25, 2023· 1 reaction

    @i_was_here For the model merge, I used the "Checkpoint Merger" tab in Automatic1111.

    I started the merge with the Interpolation Method "Add Difference". Example: Model_A + (Model_B - Model_A) * M. I took the Multiplier value starting from 0.5 and ending around 0.2.

    I finished with the "Weighted Sum" method. Everything is easy here: Model_A * (1 - M) + Model_B * M. In this method I took the Multiplier value at 0.2 and below.

    Write down in a table which merges you made and with what multiplier. I hope this information helps you, and sorry for the delay in replying :)
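    The two interpolation formulas the author describes can be sketched over toy weight tensors; the Checkpoint Merger applies the same arithmetic per-tensor across the whole checkpoint state dict, with numpy arrays standing in here for the real weights:

    ```python
    import numpy as np

    def add_difference(a, b, c, m):
        """Add Difference: A + (B - C) * M; with C = A this is a plain lerp toward B."""
        return a + (b - c) * m

    def weighted_sum(a, b, m):
        """Weighted Sum: A * (1 - M) + B * M."""
        return a * (1 - m) + b * m

    # Toy per-tensor example in place of real checkpoint weights.
    model_a = np.array([1.0, 2.0, 3.0])
    model_b = np.array([3.0, 2.0, 1.0])

    step1 = add_difference(model_a, model_b, model_a, 0.5)  # halfway to B: [2. 2. 2.]
    final = weighted_sum(model_a, step1, 0.2)               # gentle 20% blend
    print(final)  # [1.2 2.  2.8]
    ```

    Keeping a log of each merge and its multiplier, as the author suggests, matters because the result depends on the order in which these blends are chained.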

    AlanLeeBRMay 18, 2023
    CivitAI

    Love your models! Can you help me create my own realistic model?!

    i_was_hereMay 19, 2023

    Same here... It would be very helpful if you could give some more details about the dataset of images you used and other parameters during training...

    grfragazi91867May 25, 2023
    CivitAI

    Does this model not work well with LoRAs?

    KJ17May 25, 2023· 1 reaction
    CivitAI

    In which folder should I put the downloaded file?

    SG_161222
    Author
    May 25, 2023· 3 reactions

    Hi! If you use Automatic1111, then: ...\Automatic1111\models\Stable-diffusion\

    hamidnuriMay 25, 2023
    CivitAI

    Hey! 🛑 Does Realistic Vision 2.0 work with Stable Diffusion v2.1, or is it exclusively for SD v1.5??

    Also, what is the difference between V2.0 and V2.0 inpainting??

    V_VeeveeMay 26, 2023· 2 reactions

    While I can't answer your first question, the inpainting model is the one you'll want to use when inpainting (and maybe for img2img? I do). So generate the image with the regular checkpoint, but if you are making any changes to an image, switch to the inpainting checkpoint. Using a model made for inpainting is the difference between "Inpainting doesn't work very well" and "Wow, this does exactly what I need it to... usually."

    hamidnuriMay 28, 2023

    @Toto_in_Kansas thank you very much

    fghxxxMay 26, 2023· 1 reaction
    CivitAI

    I might be a bit hazy, but I cannot find the YAML config file for the V1.3-inpainting model. Can anyone show me where it is?

    lightray999627May 27, 2023
    CivitAI

    Is anyone getting subtle green marks from time to time? Like a very faint green mark on their images. It's uncommon, but it noticeably happens.

    SG_161222
    Author
    May 28, 2023

    Hi! If you are using the version 2.0 model, you need to use the VAE which I put a link to in the model description (https://huggingface.co/stabilityai/sd-vae-ft-mse-original).

    hamidnuriMay 28, 2023· 2 reactions
    CivitAI

    I've used Realistic Vision v2.0 in Hugging Face for a day and then it keeps showing this error:

    __call__() got an unexpected keyword argument 'init_image'

    What can I do?
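    For what it's worth, this error usually means the keyword was removed or renamed; if I recall correctly, newer diffusers versions renamed the img2img pipeline's `init_image` argument to `image` (worth checking against your installed version). The error itself is plain Python behavior, reproduced here with a hypothetical stand-in for the pipeline's `__call__`:

    ```python
    # Hypothetical stand-in for a pipeline __call__ that accepts `image`, not `init_image`.
    def pipeline(prompt, image=None):
        return {"prompt": prompt, "image": image}

    try:
        pipeline("RAW photo of a woman", init_image="input.png")  # old keyword
    except TypeError as err:
        print(err)  # ...got an unexpected keyword argument 'init_image'

    result = pipeline("RAW photo of a woman", image="input.png")  # renamed keyword works
    ```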

    AIrty314159May 29, 2023· 4 reactions
    CivitAI

    Thank you for your contribution to the AI community

    ginlucks962May 31, 2023· 2 reactions
    CivitAI

    What exactly does this mean:

    "For version 2.0 it is recommended to use with VAE"

    Once I've downloaded stabilityai/sd-vae-ft-mse-original, where do I have to put the file, and how can I use them both (I mean Realistic Vision and the VAE)? I am a little bit confused...

    SG_161222
    Author
    May 31, 2023· 3 reactions

    Hi!

    If you use Automatic1111, then you need to put this file in the following path: ...\Automatic1111\models\VAE\

    Next, you need to run Automatic1111 and go to "Settings" -> "Stable Diffusion" and select the VAE file from the "SD VAE" dropdown list.

    I hope this helps you! I'm also going to bake a VAE into this model soon and upload it here.

    ginlucks962May 31, 2023

    @SG_161222 Thanks! 

    emiraydin84489Jun 2, 2023

    I am kind of new here... so I'll ask a very naive question: what is a VAE? Please, this is a real question.

    wideloadJun 4, 2023

    @emiraydin84489 A VAE is a file that is used along with a model/checkpoint. It can add more colour, contrast, vibrancy and so on to the final image. For example, if your image looks washed out, then a VAE can fix it. It's definitely worth having a couple for when they're needed.

    alximistJun 12, 2023

    @SG_161222 please bake it in!!! ;)

    RevengeUAJun 1, 2023· 4 reactions
    CivitAI

    Best photo model!

    SG_161222
    Author
    Jun 1, 2023

    Thank you! :)

    openfireJun 4, 2023

    Though compare it vs Dreamlike.

    But yeah, both are excellently tuned reconstruction models, with different paths.

    Both are pretty excellent for everything POV related.

    Dreamlike is overall a tad more unique, maybe from its genetic base, but it's an integral part here, maybe even its driving core.

    Would be very interesting to see what happens if Dreamlike is left out of the merge.

    Would it fall apart entirely?

    emiraydin84489Jun 2, 2023· 7 reactions
    CivitAI

    I am kind of new here... so I'll ask a very naive question: what is a VAE? Please, this is a real question.

    foqatitu381Jun 18, 2023· 2 reactions

    The VAE is the neural network that encodes the image into the 'latent space' and decodes latents back into pixels. The UNet does the denoising inside that latent space, guided by the CLIP text encoding of your prompt; at the end, the VAE decodes the denoised latent back into the final image. Remember a model has 3 components: VAE, CLIP (text encoder) & UNet.
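    As a concrete footnote on that pipeline: for SD 1.5 checkpoints like this one, the VAE downsamples the image 8x spatially and the latent has 4 channels, so a 512x512 generation is denoised as a 4x64x64 tensor. A minimal shape sketch, no model weights involved:

    ```python
    # SD 1.5 VAE geometry: 8x spatial downsampling, 4 latent channels.
    DOWNSAMPLE, LATENT_CHANNELS = 8, 4

    def latent_shape(height, width):
        """Shape of the latent tensor the UNet denoises for a given image size."""
        return (LATENT_CHANNELS, height // DOWNSAMPLE, width // DOWNSAMPLE)

    print(latent_shape(512, 512))  # (4, 64, 64)
    ```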

    Wobbling7259Jun 9, 2023· 3 reactions
    CivitAI

    What does "noVAE" mean for v2?

    SG_161222
    Author
    Jun 9, 2023· 1 reaction

    Hi! "No VAE" means that the VAE file I attached a link to in the model description was not baked into the model, that is, it must be used with the model separately.

    Wobbling7259Jun 10, 2023

    thank you

    dre1906401Jun 12, 2023· 1 reaction

    How does that work with an app like Draw Things?

    Lyahn_hereJun 10, 2023· 25 reactions
    CivitAI

    Works well with "draw things" on M1 Mac, created some fabulous gay porn scenes.

    elloko83Jun 10, 2023· 4 reactions
    CivitAI

    Facing an issue with image generation using Realistic Vision 2.0: there is little to no variation in the look of faces for e.g. "photo of a woman". All faces look very similar, with just minor variation due to age or nationality. Anyone had similar problems?

    lutherjonna409Jun 18, 2023· 2 reactions

    keep changing the ethnicity and specify ages, like 24, 28, or 32 years old

    DrakeniJun 29, 2023· 1 reaction

    That's how SD models work. If you don't change your prompt, you'll get very similar looking people.

    alvinkalacakra505Jun 29, 2023

    @redmagechris2916 Not always; with a LoRA the face will look different, but LoRAs like DollLikeness use the same dataset of photos of people, so not using a LoRA may help too.

    PlasmaMonkeyJun 20, 2023
    CivitAI

    What is this trained on?

    Checkpoint
    SD 1.5

    Details

    Downloads
    223,792
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/26/2023
    Updated
    5/12/2026
    Deleted
    -
    Trigger Words:
    analog style
    modelshoot style
    nsfw
    nudity

    Files

    realisticVisionV60B1_v20Novae.safetensors

    Mirrors

    HuggingFace (50 mirrors)

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.