    Eris - Eris v1
    NSFW

    Eris, the Greek goddess of strife and discord.

    This is my first pure 3D render/cartoon model. When I started working on this model I had almost zero knowledge of writing prompts for 3D renders/CGI. Even so, my main goal for this model was quickly realized: make a model that accepts photorealistic prompts and returns beautiful 3D-rendered outputs. Normal 3D render/CGI prompts also work with this model.

    Important Settings

    The following settings assume you're using webui by Automatic1111

    • USE CFG VALUES BELOW 8

      • This model does not respond well to CFG values above 8, with some caveats explained below.

      • Do not go over 6 CFG with Euler a unless you are using Hires. Fix.

      • I found that you can go as high as 10 CFG with hires. fix enabled. This applies to all samplers.

      • DPM++ SDE Karras starts to degrade around 7 CFG with clip skip = 1 and no hires fix. However, with clip skip = 2, CFG values can go up to at least 10 without significant (if any) degradation.

      • General Advice:

        • Do not exceed 6 CFG with Euler a, unless using hires fix.

        • Do not exceed 7 CFG with DPM++ SDE Karras with clip skip set to 1, unless using hires fix.

        • You can go up to at least 10 CFG with DPM++ SDE Karras with clip skip set to 2, without using hires fix.

        • I could not test all samplers. If you notice broken compositions or distortions, lower your CFG value and/or enable hires fix. My personal hires fix settings can be found below.

      • Thanks to Duo V2 on the Unstable Diffusion discord for helping me get to the bottom of this issue.

    • VAE: vae-ft-mse-840000-ema-pruned.safetensors

    • Clip skip : 2

      • This varies. Most of the time clip skip = 2 works best with this model. If you're not getting the results you want, try clip skip = 1 first.

    • ETA Noise Seed Delta: 31337

      • Found in - Settings -> Sampler parameters -> Eta noise seed delta

    • Sigma Noise: 1

    • Sigma Churn: 0 or 1

      • Optional - when using the Euler, Heun, or DPM2 samplers, try setting Sigma Churn = 1

      • Found in - Settings -> Sampler parameters -> sigma churn

    • Optional Negative Textual Inversion (TI): bad_prompt_version2

    • Optional Recommended Samplers:

      • DPM++ SDE Karras // 18-35 steps

      • Euler // 40 - 80 steps // Sigma Churn : 1

      • DPM++ 2M Karras // 40 - 70 steps

      • DPM++ 2S a Karras // 30 - 75 steps

      • Heun // 20 - 30 steps // Sigma Churn : 1

      • DPM2 // 30 - 75 steps // Sigma Churn : 1

      • These are just recommendations; feel free to experiment with different values and samplers.

    • Optional Hires. Fix settings (these are my go-to parameters):

      • Upscaler : R-ESRGAN 4x+

      • Denoising strength: 0.3 - 0.35

      • The rest of the settings are up to you. They don't really impact the quality of the output.

      • Note on Hires Steps: if you're using a high step count for sampling, you can set the hires step value lower than your sampling step value.

        • For example, say you're using the Euler sampler @ 60 steps. You can then set Hires Steps to ~40, as opposed to the default of 0. Hires Steps = 0 means hires fix follows the same step count as the sampler (60 in this case). Lower hires step values reduce generation time.

    • Optional Face Restoration:

      • I use CODEFORMER @ 0.8 strength. Whether or not you use face restoration is up to you.

    • Optional Quick Settings:

      • For easier access to some of the settings in webui, you can move some settings sliders/dropdowns to the main interface (beside model selection).

      • Quick settings location -

        • Settings -> User interface -> Quicksettings list

      • My Quick settings -

        • sd_model_checkpoint, CLIP_stop_at_last_layers, s_churn

        • Where:

          • CLIP_stop_at_last_layers is Clip Skip

          • s_churn is sigma churn
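    The CFG guidance above can be distilled into a small helper function. This is only a sketch of the rules as stated on this page; the sampler names match the webui dropdown labels:

```python
def max_recommended_cfg(sampler: str, clip_skip: int = 2, hires_fix: bool = False) -> int:
    """Rough CFG ceiling for this model, distilled from the notes above."""
    if hires_fix:
        return 10  # hires. fix tolerates CFG up to ~10 on all samplers
    if sampler == "Euler a":
        return 6
    if sampler == "DPM++ SDE Karras":
        # clip skip = 2 is much more tolerant than clip skip = 1
        return 10 if clip_skip == 2 else 7
    return 8  # general ceiling stated above for untested samplers

print(max_recommended_cfg("Euler a"))                        # 6
print(max_recommended_cfg("DPM++ SDE Karras", clip_skip=1))  # 7
print(max_recommended_cfg("Heun", hires_fix=True))           # 10
```

    If your generations come out broken at a given CFG, drop below the value this returns or enable hires fix.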

    PDF documentation is in the works.
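    For anyone driving the webui programmatically, the settings above map roughly onto a txt2img request against the Automatic1111 local API (`/sdapi/v1/txt2img`, available when the webui is started with `--api`). The field names below come from that API; the prompt is a placeholder, and you should verify the keys against your webui version before relying on them:

```python
import json

# Recommended defaults from this page, expressed as an Automatic1111
# txt2img API payload. Treat this as a sketch, not official guidance.
payload = {
    "prompt": "portrait of a woman, 3d render",  # placeholder prompt
    "negative_prompt": "bad_prompt_version2",    # optional negative TI
    "sampler_name": "DPM++ SDE Karras",
    "steps": 25,                 # recommended 18-35 for this sampler
    "cfg_scale": 7,
    "enable_hr": True,           # hires. fix
    "hr_upscaler": "R-ESRGAN 4x+",
    "denoising_strength": 0.3,   # recommended 0.3-0.35
    "override_settings": {
        "CLIP_stop_at_last_layers": 2,  # clip skip
        "eta_noise_seed_delta": 31337,  # ENSD
        "s_churn": 0,                   # sigma churn
    },
}

# POST this to http://127.0.0.1:7860/sdapi/v1/txt2img, e.g. with
# requests.post(url, json=payload), once the webui is running with --api.
print(json.dumps(payload, indent=2))
```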


    Check out my other models

    SDXL

    SD1.5

    LoRA

    Questions or Feedback?

    Visit my thread on the Unstable Diffusion Discord Server

    Description

    Initial Release

    What's included? [Original Name | What Civitai will rename them]

    • Eris_v1.safetensors | eris_v1.safetensors

    • Eris_v1.ckpt | eris_v1c.ckpt

    • Eris_v1-fp16.safetensors | eris_v1(1).safetensors

    This model is mainly a trained model. The initial model used as the training base is made up of the following merge:

    FAQ

    Comments (16)

    Mirabilis - Mar 20, 2023

    Nice detailed write up going on there. GJ. Looking forward to digging into this later on.

    ritcher1 - Mar 20, 2023

    Great model, like all the previous ones. Can you also add the safer 2 GB safetensor format?

    ndimensional (Author) - Mar 20, 2023

    The 3.95GB version in the dropdown (fp16 version) was as low as I could get the safetensor version without pruning. I'm not entirely sure why the ckpt version halved to 2GB and the safetensor didn't. I'll look into it 👍
    Thanks for bringing this to my attention.

    DreamExplorer - Mar 20, 2023

    Interesting.. can this do males?

    ndimensional (Author) - Mar 20, 2023

    You might need attention/emphasis; (male:1.2), etc. But it should be capable of generating male images.

    Agl - Mar 20, 2023

    This generates corrupted images. I assume it will work well after changing to the specified settings, but I won't make those changes because I don't want to disrupt my settings for the other checkpoints.

    ndimensional (Author) - Mar 21, 2023

    Discovered the issue. It has to do with CFG values. Updated the model's description to reflect these discoveries. No settings should need to be changed - just lower CFG values or choose a different sampler.

    Balthazar99 - Mar 21, 2023

    Hi ndimensional, was reading the description for this, about the goal being for it to generate beautiful CGI / Render style images. If you had to ask me to pick a model that does that, I would go to one of my favorite models ever.... Experience... made by you. :-) If someone were to ask you to differentiate or break down how you'd compare Experience to Eris, what would you say?

    Really gotta make a second paragraph to emphasize.... dude I love Experience (normal & Realistic!), it's so good!

    ndimensional (Author) - Mar 21, 2023

    Experience is pure Americana.
    Eris is a cute & creative Japanese girl that moved to America and plays western RPGs.

    I'm not sure if you wanted a more technical comparison (which I can do if needed), but I find it hard to quantify the subtle nuances with technicality.

    A few technical notes:

    1.) Eris was trained with Noise Offset, Experience was not. Meaning overall, Eris is going to have greater perceived fidelity.

    2.) Experience was trained on batch-processed HDR tone-mapped images, Eris was not. This is what gives Experience its HDR-esque shine. The caveat being, Eris somewhat makes up for this with the previously mentioned Noise Offset.

    3.) Eris is a bit more complex, Experience is not. You can read through the description of this model to see all the oddities it brings lol.

    4.) Experience (as of v7.5) has some autoencoder issues and doesn't always respond well to user prompts. Eris doesn't seem to have this issue.

    5.) Eris generates better hands, simple as.

    Now, that might sound like Eris is objectively better than Experience, but that leads me back to not being able to quantify the subtle nuances with technicality. It really comes down to what kind of images you're trying to generate and which model responds best to your prompts. General rule of thumb (not solid advice at all): if you're going for CGI/3D renders, Eris is probably the better model, as that's what it was trained for. If you try it and think Experience did better, there's nothing wrong with switching back to Experience.

    Thanks for the kind words. Since you like Experience... I plan on updating it in the near future. It's at the top of my list. I just want to make sure the upgrade is an actual improvement because, truth be told, Experience is one of my favorite models as well.

    Hope this helped!

    Balthazar99 - Mar 21, 2023

    @ndimensional Thanks so much for the reply, yes this is exactly what I was looking for in terms of an explanation. I appreciate the short form and the more technical details as well. Thanks so much, keep up the amazing work! And I certainly appreciate your attention to detail with the logic behind updating Experience and your mindset with doing so.

    Balthazar99 - Mar 21, 2023

    @ndimensional I also just read all the details on the model here. Man, you really know your shit, I appreciate the technical details and can tell you've put the hours (and hours and hours) in. I appreciate you and your effort. Thanks so much. I'm hoping I'll be able to get such a sound grasp on this like that eventually.

    ndimensional (Author) - Mar 21, 2023

    @Balthazar99 No problem! Glad to help.

    ndimensional (Author) - Mar 21, 2023

    @Balthazar99 Thanks, I'm a deep learning engineer by trade, so I have the benefit of having worked with the underlying methods/code of these models long before Stable Diffusion was created. If you're interested in the technical side of Stable Diffusion, there are some great resources online that go over the basics of Latent Diffusion Models (LDMs), text encoders (CLIP), and UNet blocks (the type of Convolutional Neural Network (CNN) used in Stable Diffusion). The basic principles behind all of this really boil down to mathematical theorems/algorithms (see Markov Chains for an example of what "Steps" refers to).
    If you're looking to better understand Stable Diffusion but are less interested in the underlying principles, the above isn't really needed. In both cases, the best way to learn is to experiment and gain experience; eventually, with enough time, things start to come naturally and the skills you acquire can be transferred into other fields or programs. Good luck, and keep at it!

    sugarcookie69 - May 1, 2023

    Great model !

    Can we have a more colorful version?

    faketoyota - May 5, 2023

    Noise Offset makes this way too dark on average. Please leave Noise Offset out of checkpoints - we can use a LoRA instead, and if a better way of handling overall brightness/darkness is introduced in the future, baked-in noise offset will have been a bad idea.

    sugarcookie69 - Jul 1, 2023

    My fav model so far! Do you have a plan to update this model?

    Checkpoint
    SD 1.5

    Details

    Downloads
    4,027
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/20/2023
    Updated
    5/11/2026
    Deleted
    -

    Files

    eris_V1.ckpt

    Mirrors

    CivitAI (1 mirror)

    eris_V1.safetensors

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)