    RealEldenApocalypse_AnalogSexKnoll_4CandyPureSimp+FEET
    NSFW

    PLEASE READ DESCRIPTION

    *update*: I do love REA, but I consider it an outdated version of this new baby: https://civarchive.com/models/3950/art-and-eros-aeros-a-tribute-to-beauty

    As such, it will receive no new development. I will keep it here in case anybody needs it.

    For any info or questions, check our private Discord here: https://discord.gg/z88HpDwbGq

    It needs the vae-ft-mse-840000-ema-pruned VAE (or another one, if you want to get experimental) or it will output broken images.

    Hello everybody!! I proudly present my merged NSFW model, which I have been working on for weeks to try to get the most emergent properties out of the models it contains. Most of the contained models are based on SD 1.5, with some exceptions. The name is a meme that references everything inside it.

    So let me thank everybody who made it possible, because if there's anything good in it, it is because the source material was great too. My thanks, in no particular order, to Hassan, AloeVera, the CivitAI Team, Izuek, Someone88, wavymulder, the UnstableDiffusion Team, and any other creator I might not have been able to cite.

    ABOUT:

    First, it may not be the best beginner checkpoint out there. I consider myself experienced at prompting, so I haven't tried many basic prompts; however, I doubt that a plain "pretty naked woman, big boobs" is going to take you very far with it. Although I prompt a bit differently than that, I have verified that it understands the vocabulary of PhotoReal v0.5 (which is contained in the merge), so if you are having trouble getting good outputs you can begin from there. In general, though, try to use more natural language than an array of commas: https://docs.google.com/document/d/1-DDIHVbsYfynTp_rsKLu4b2tSQgxtO5F6pNsNla12k0/edit
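    The "natural language, few commas" advice above can be sketched as a request to a local Automatic1111 instance through its built-in API (started with the --api flag). The prompt text, negative prompt, and base URL below are illustrative assumptions, not settings published by the author; only the steps and resolution follow the description.

    ```python
    # Sketch: building (not sending) a txt2img request for a local
    # Automatic1111 install. Prompt wording is a made-up example of the
    # natural-language style the description recommends.
    import json
    import urllib.request

    payload = {
        "prompt": ("analog style bf photograph of a pretty young woman "
                   "standing in a ruined city at dusk wearing a worn "
                   "leather jacket under soft natural light"),
        "negative_prompt": "deformed, blurry, low quality",
        "steps": 128,     # the author uses 128 (20-130 tested)
        "width": 512,
        "height": 768,    # one of the tested resolutions
    }

    def build_request(base_url="http://127.0.0.1:7860"):
        """Build the POST request for Automatic1111's /sdapi/v1/txt2img."""
        data = json.dumps(payload).encode("utf-8")
        return urllib.request.Request(
            base_url + "/sdapi/v1/txt2img",
            data=data,
            headers={"Content-Type": "application/json"},
        )

    req = build_request()
    ```

    Sending the request with urllib.request.urlopen would return a JSON body with base64-encoded images, assuming the webui is running with the API enabled.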

    There's also a screenshot with a very basic example prompt to give you the idea.

    According to my testing it is a very powerful model for its purpose, but it is a project I made for myself, which means that depending on what you want, it may or may not be lacking in certain areas. It has been intensively tested to generate photorealistic(ish) images of different types of girls in different poses, in different places, wearing different things, with different artistic moods. Nothing more, nothing less. Hardcore, group scenes, and drawings are out-of-scope uses that may or may not work.

    HOW TO USE IT:

    • You MUST USE vae-ft-mse-840000-ema-pruned (other VAEs are experimental). Otherwise the output breaks.

    • Some users report problems using anything other than the Automatic1111 webui. I cannot troubleshoot that myself.

    • This model DOES NOT REQUIRE trigger words.

    • Works across a wide range of steps; 20 to 130 tested. I use 128.

    • It works on its own for making high-quality general or NSFW images of women.

    • Resolutions tested are 512x512, 384x704, 512x768 and 768x768 (the last is a bit buggier but decent enough).

    • Trigger words are general style modifiers. They can be used alone or in combination and will give a special mood to the composition.

    • Trigger words have only been tested at the beginning of the prompt.

    • If used together in any subset, they work better when they appear in this relative order: "elden ring style postapocalypse knollingcase analog style bf" or "postapocalypse elden ring style knollingcase analog style bf".
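    The ordering rule above (any subset works, but keep the words in the same relative order) can be sketched as a tiny helper. This is an illustrative sketch, not a tool the author ships; it uses the first of the two orders given in the description.

    ```python
    # Sketch: assemble a subset of the trigger words in the recommended
    # relative order ("postapocalypse" first is also valid per the
    # description; this uses the first listed order).
    CANONICAL = ["elden ring style", "postapocalypse", "knollingcase",
                 "analog style", "bf"]

    def order_triggers(subset):
        """Return the chosen trigger words joined in canonical order."""
        chosen = set(subset)
        return " ".join(t for t in CANONICAL if t in chosen)

    print(order_triggers({"bf", "knollingcase"}))
    # -> knollingcase bf
    print(order_triggers({"analog style", "elden ring style"}))
    # -> elden ring style analog style
    ```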

    MERGED:

    I haven't kept track of all the steps in the merge (I used a weird methodology), but this is what's inside, in varying proportions:

    • PhotoReal v0.5: https://docs.google.com/document/d/1-DDIHVbsYfynTp_rsKLu4b2tSQgxtO5F6pNsNla12k0/edit

    • Elden Ring Style: https://civarchive.com/models/5/elden-ring-style

    • Postapocalypse: https://civarchive.com/models/1136/postapocalypse

    • Analog Diffusion: https://civarchive.com/models/1265/analog-diffusion

    • SXD: https://civarchive.com/models/1169/sxd

    • Knollingcase: https://civarchive.com/models/1092/knollingcase

    • Hassan's 1.4 and CandyBerry: https://civarchive.com/models/1173/hassanblend-all-versions

    • PurePornPlus: https://civarchive.com/models/1235/purepornplus-merge

    • SimpMaker 3K1: https://civarchive.com/models/1258/aloeveras-simpmaker-3k-series

    • One ancient feet model that can be grabbed from some repositories (the only one I know of is too dubious to link).

    • It also works GREAT as an extra style modifier with this hypernetwork for extra artistic outputs: https://civarchive.com/models/1141/mjv4-hypernetwork

    TROUBLESHOOT:

    • Images are a chaotic trip of LSD colors:

    You are not using the VAE.

    • Images seem like real images but are a mess of body horror and whatnot:

    You need to keep working on your prompt. This is NOT an easy-to-use model (not rocket science either).

    • The model gives an error when trying to load:

    I honestly have no idea what the problem could be; I am not a software developer, just a somewhat experienced user. However, @Technerd has shared this info for a specific problem:

    Technerd - "Python Error: Key Error: 'state_dict' pops up no images generated. Cannot get it to work with NMKD 1.8 GUI".

    Technerd - "Just to let you know I've downloaded your new "safetensors" version and converted it via NMKD GUI to a "ckpt" file and now it works".

    FUTURE:

    This project is considered finished. From now on it will be my base model. I may start to train it as the big kids do, or create a new merge using it in the future. The sky is the limit. Want to stay updated?

    https://linktr.ee/ainecaptain

    COMMENTS:

    photogbill - Dec 20, 2022
    Getting poor results; nothing is coherent or clear. Is there a .yaml or something I'm missing?
    aine_captain (Author) - Dec 20, 2022
    It has just been discovered that it needs the SD 1.5 VAE to be active (or the AnythingV3/NAI one), otherwise it gives broken colors. I have just updated the description.
    neurosexual - Dec 20, 2022
    @Aine, please upload at least one picture as a PNG so we can see and understand the prompt design; otherwise it's not very convenient. I used vae-ft-mse-840000-ema-pruned for the images in my review and it didn't work.
    aine_captain (Author) - Dec 21, 2022
    I have added a screenshot with a basic prompt for you to try out.
    NowhereManGo - Jan 2, 2023

    If you are using Automatic1111, go to Settings and change the "SD VAE" option.

    ohthehuemanatee - Dec 21, 2022
    All I get is "AttributeError: 'NoneType' object has no attribute 'pop'"
    phobiastrike - Dec 21, 2022
    Same here
    aine_captain (Author) - Dec 21, 2022
    I honestly don't know what that could be related to... I have updated it to a safetensors file instead; can you try that? If it doesn't work, I am no programmer, I am just a user of this tech too :S
    ohthehuemanatee - Dec 21, 2022
    Aine Captain, I tried the file. It loads, but the pictures it creates are basically red splotches sort of in the shape of people. Any suggestions?
    sweety123 - Dec 21, 2022
    Instructions?
    phobiastrike - Dec 21, 2022
    Can you give us better instructions? It's not working even with the VAE and model you wrote about.
    aine_captain (Author) - Dec 21, 2022
    There is a screenshot with my config and the VAE. If those don't work, I don't know what else to do, because it is working for others and for me; sadly I am not a programmer, and I don't know what could be happening in your setup.
    phobiastrike - Dec 21, 2022
    Please tell me where I can get the Stability VAE and how to set it up correctly; maybe I did something wrong.
    phobiastrike - Dec 21, 2022
    Oh man, I understand, great model ty!
    technerd - Dec 21, 2022
    Python error: KeyError: 'state_dict' pops up and no images are generated. Cannot get it to work with the NMKD 1.8 GUI.
    aine_captain (Author) - Dec 21, 2022
    I use Automatic1111; I don't know anything about anything else.
    technerd - Dec 21, 2022
    Ah, okay. Just to let you know, I downloaded your new safetensors version and converted it via the NMKD GUI to a ckpt file, and now it works.
    aine_captain (Author) - Dec 21, 2022
    Since you have spoken a language from another realm of reality, if you don't mind I will copy-paste your info into the description :) Thank you very much!!

    How do you convert it???

    technerd - Jan 2, 2023

    @lambear92, use the NMKD GUI to convert; it has built-in tools for that: https://nmkd.itch.io/t2i-gui

    ohthehuemanatee - Dec 21, 2022
    I got it to load, but all it does is create red distorted pictures. Does anyone have any suggestions? https://imgur.com/a/VBopXhh
    aine_captain (Author) - Dec 21, 2022
    The VAE. It's the first thing highlighted.
    ohthehuemanatee - Dec 21, 2022
    I have VAE. How do I use it with this?
    aine_captain (Author) - Dec 21, 2022
    If you are using Automatic1111 you just need to load it in the settings. If not, I think you need to rename the VAE to the model's name and make its name end in .vae.pt or something like that... I just select it in the VAE setting in Automatic1111 after putting it in the proper folder.
    gar_yl - Jan 1, 2023

    I was getting those even with the default VAE. I switched to the AnythingV3.0 VAE and that worked great.

    NowhereManGo - Jan 2, 2023

    If you are using Automatic1111, go to Settings and change the "SD VAE" option.

    bestjammer - Jan 3, 2023

    Where do you download the VAE? (I'm clearly dumb with this.)

    meritrash6350 - Dec 21, 2022
    New to VAEs here; I've never used one before. Following the linked instructions, I made a folder /realelden in my /models/stable-diffusion. In that folder I've got: RealEldenMix.safetensors and RealEldenMix.pt (the renamed vae-ft-mse-840000-ema-pruned.ckpt). Without any extra config, I'm getting the trippy LSD images when using your sample prompt. What am I missing?
    meritrash6350 - Dec 21, 2022
    OK, it looks like I worked it out. For those having problems: what I had wrong is that I should have named my VAE file RealEldenMix.vae.pt. It looks like the .vae part matters. Once I did that, I started making normal images.
    aine_captain (Author) - Dec 21, 2022
    Automatic1111 now has a folder where you can drop VAE files and select them from the Settings tab (if you are using Automatic1111, it is just a convenience option).
    meritrash6350 - Dec 21, 2022
    I actually do use Automatic1111, but the fact of the matter is that if this VAE always needs to line up with this ckpt (and the ckpt is useless without that specific VAE), I may as well just put the two things together under the same name. That way they're always associated and I can't miss a step in the GUI.
    gx_ground136 - Dec 22, 2022
    Just download the VAE and add it in the Settings window; many people don't know this. Otherwise you will get red images.
    cooperdk - Dec 22, 2022
    You write that "it understands the language of PhotoReal v0.5". But the language of PhotoReal is exactly NOT long sentences; it is short, comma-separated specifiers sorted into 20 categories, so you missed something there. You refer to the manual yourself. I think your way increases the number of errors, resulting in a model that is harder to work with. Try doing what the Unstable Tagging whitepaper suggests: "Professional photo, low angle shot, full body, topless, young woman, latina, squatting, on grass, legs spread wide, petite body, large breasts, puffy nipples, genital piercing, nipple piercing, smiling, shaved pubic hair, small labia, visible genitals, medium ass". I haven't tested it yet, but I am going to shortly.
    aine_captain (Author) - Dec 22, 2022
    It understands the words from PhotoReal: you can specify angle, type of body, position, place, etc., and it totally works with comma-separated prompts according to many users, but that's not what I tested it for. Also, PhotoReal understands my long-sentence prompts, so I'd say we speak a close enough language... As for other recommendations, sure, the best thing is for people to experiment. As for my way of prompting, I am sure it works for me across models.
    tralet - Dec 24, 2022

    Thank you for your work and for sharing; this model is really good. Could you tell me the prompts for your title images (red hair)?

    aine_captain (Author) - Dec 25, 2022

    I am sorry, but I don't usually share my prompts. They are done in natural language, that's for sure; I barely use any commas.

    gigsa - Dec 27, 2022

    Could you please provide more info on how to find the feet model you used in this? I noticed your model does feet way better than others, and I can only imagine it's due to that part.

    aine_captain (Author) - Jan 7, 2023

    Check aEros; it does feet as well as this model or better, and it no longer uses that model (that model had big problems when merging and is responsible for some of the abominations REA can make).

    If it is a big need for you, reach me on Discord, DeviantArt or Patreon and I'll send you a private link to the directory where I grabbed it. Sorry for the late answer; I didn't see your comment.

    mrbear - Dec 29, 2022

    Is nudity the limit or can it produce full hardcore images?

    hoblin - Jan 5, 2023

    Yes, it can, but it's not easy. Just experiment with prompting.

    aladriehl - Jan 1, 2023

    can you make a .ckpt version?

    stokesmatthew1112562 - Jan 2, 2023

    Great work. I'm just getting into AI image generation, and the only problem I'm having is that sometimes I get a random, completely black image; otherwise everything is working so far. I just had to download the file and put it in my models folder, and that was it. I used your screenshot for the settings and it turned out the same. Doing batch sizes of 4 and a batch count of 11, out of all the images it produced, it gave me one black image with nothing on it at all. I have seen a few pop up, but nothing major. Overall, great work.

    phranq - Jan 2, 2023

    I would need prompt examples to get results as good as yours, OP.

    brickcity479 - Jan 4, 2023

    FIXED - read my follow-up comment.

    Hey there! I use the 840000 VAE with other checkpoints, but I'm not having any luck getting it to work with your checkpoint. I tried both the ckpt and the safetensors VAE versions, yet I'm still getting red images. What information can I provide to help diagnose the issue?

    Intel i9-12000

    3080ti

    brickcity479 - Jan 4, 2023

    OK, I found that I had to go into Settings and select the 1.5 VAE manually. I saved the settings and got a prompt to work.

    The strange thing is that it only lists the VAE ckpt, and not the safetensors version. But that's probably just user error on my part.

    I hope this helps others!

    Lokitsar - Jan 8, 2023

    Thank you! I couldn't figure this out.

    vtr - Jan 16, 2023

    Why is it not working for me? I see only red and pink distortion.

    aine_captain (Author) - Jan 16, 2023

    It is the VAE problem: you need to install the SD VAE and make sure Automatic1111 is not configured in auto mode for the VAE.

    joparebr - Jan 16, 2023

    Download vae-ft-mse-840000-ema-pruned.ckpt.

    Go to your Automatic1111 installation folder and put it in the \models\VAE folder.

    In Automatic1111 go to Settings, click Stable Diffusion in the left menu, choose vae-ft-mse-840000-ema-pruned in the SD VAE drop-down menu, and apply the settings.
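    The file-placement part of those steps can be sketched as simple path handling; the install root below is a placeholder, not a fixed location.

    ```python
    # Sketch: where the VAE file belongs inside an Automatic1111 install,
    # and the name that then appears in the SD VAE dropdown.
    from pathlib import Path

    def vae_destination(webui_root: str, vae_file: str) -> Path:
        """Return the target path <root>/models/VAE/<vae filename>."""
        return Path(webui_root) / "models" / "VAE" / Path(vae_file).name

    dest = vae_destination("stable-diffusion-webui",
                           "vae-ft-mse-840000-ema-pruned.ckpt")
    print(dest.as_posix())
    # -> stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt
    print(dest.stem)   # the name to pick in the SD VAE dropdown
    ```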

    vtr - Jan 17, 2023

    Thanks, I will try it.

    Zygany - Feb 9, 2023

    @joparebr I downloaded the file you linked and added it to the a1111\stable-diffusion-webui\models\VAE folder, but when I refresh the SD VAE drop-down list, the file doesn't appear. The only choices are Automatic and None. Any idea how to fix this?

    reneleyvaia - Jan 16, 2023

    Hello. As much as I try to make it work following the instructions, it gives me this error:

    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)

    aine_captain (Author) - Jan 16, 2023

    I have never heard of that.

    Are you using Automatic1111?

    Do other models work in the same session where this one doesn't?

    Does restarting the session not fix it?

    Can you redownload it, please? It could have been a corrupted download...

    I am not a software dev, but I'd say it seems like something other than the model... All I know is pressing buttons in Automatic1111, sorry.

    kaxf - Jan 18, 2023

    I'm getting this error when loading the model. Help, please~

    Loading weights [7c7dfbd636] from D:\Games\Mods\Resources\Diffusion\models\Stable-diffusion\realeldenapocalypse_realeldenapocalypse.safetensors

    changing setting sd_model_checkpoint to realeldenapocalypse_realeldenapocalypse.safetensors: RuntimeError

    Traceback (most recent call last):

    File "D:\Games\Mods\Resources\Diffusion\modules\shared.py", line 533, in set

    self.data_labels[key].onchange()

    File "D:\Games\Mods\Resources\Diffusion\modules\call_queue.py", line 15, in f

    res = func(*args, **kwargs)

    File "D:\Games\Mods\Resources\Diffusion\webui.py", line 84, in <lambda>

    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))

    File "D:\Games\Mods\Resources\Diffusion\modules\sd_models.py", line 428, in reload_model_weights

    load_model(checkpoint_info)

    File "D:\Games\Mods\Resources\Diffusion\modules\sd_models.py", line 385, in load_model

    load_model_weights(sd_model, checkpoint_info)

    File "D:\Games\Mods\Resources\Diffusion\modules\sd_models.py", line 241, in load_model_weights

    model.load_state_dict(sd, strict=False)

    File "D:\Games\Mods\Resources\Diffusion\venv\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict

    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

    RuntimeError: Error(s) in loading state_dict for LatentDiffusion:

    size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 320]).

    size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

    size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).

    [... roughly 60 further "size mismatch" lines trimmed: every proj_in/proj_out weight mismatches as [C, C, 1, 1] vs [C, C], and every attn2.to_k/to_v weight as [C, 768] vs [C, 1024], repeated across all input, middle and output blocks ...]

    aine_captain (Author) - Jan 25, 2023

    I honestly don't know what's going on, sorry. The model has nothing special that shouldn't work the same way as any other model, besides the VAE thing.
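    A hedged note, not from the author: the repeated 768-vs-1024 mismatches in the attn2.to_k/to_v weights above are the classic signature of loading an SD 1.x checkpoint (CLIP context dim 768) with an SD 2.x config (context dim 1024), for example because of a stray .yaml file next to the checkpoint. A toy check over a dict of tensor shapes:

    ```python
    # Sketch: guess the SD branch from the second dimension of a
    # cross-attention key weight, as seen in the error log above.
    def guess_sd_branch(state_dict_shapes):
        """Return 'SD 1.x' or 'SD 2.x' based on the CLIP context dim."""
        key = ("model.diffusion_model.input_blocks.1.1."
               "transformer_blocks.0.attn2.to_k.weight")
        dim = state_dict_shapes[key][1]
        return {768: "SD 1.x", 1024: "SD 2.x"}.get(dim, "unknown")

    # Shape taken from the checkpoint side of the error log:
    shapes = {("model.diffusion_model.input_blocks.1.1."
               "transformer_blocks.0.attn2.to_k.weight"): (320, 768)}
    print(guess_sd_branch(shapes))
    # -> SD 1.x
    ```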

    ritcher1 - Jan 22, 2023

    Could you share at least one prompt, for the first original render? It would be a good starting point for experimenting.

    Holdz666 - Feb 9, 2023

    Can anyone make a Google Colab link with this model next to the VAE file? I have no idea how to do this.

    h_uh - Feb 13, 2023

    Create a folder on your Google Drive and name it "models", and put your models there. For the VAE: after the installation of SD you will find the SD folder, then "sd webui", and in there the models folder; inside it you can find the VAE folder. Put the VAE files there.

    Hornet - Feb 19, 2023

    People, the author said in the description that one user got it working by converting the model! I did something similar. In SD itself, in the Merge tab, select the downloaded model twice (A and B). In the drop-down window below you just need to select the VAE, which you should also have downloaded. Click Merge and everything works!

    calvin - Mar 1, 2023

    This model has the same hash and results as RealEldenApocalypseA (7c7dfbd636); was it supposed to be different?

    aine_captain (Author) - Mar 1, 2023

    As far as I know there's only one version of REA...

    calvin - Mar 1, 2023

    @aine_captain OK, I must have just shortened the extended name when I originally downloaded it.