CivArchive

    DreamShaper - V∞!

    Please check out my other base models, including SDXL ones!

    Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
    Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee
    🎟️ Commissions on Ko-Fi

    Join my Discord Server

    For LCM read the version description.

    Available on the following websites with GPU acceleration:


    Live demo available on HuggingFace (CPU is slow but free).

    New Negative Embedding for this: Bad Dream.

    Message from the author

    Hello hello, my fellow AI Art lovers. Version 8 has just been released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.

    DreamShaper started as a model meant to be an open-source alternative to MidJourney. I didn't like how MJ was handled back when I started, how closed it was (and still is), or the lack of freedom it gives users compared to SD. Look at all the tools we have now: from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
    With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "swiss knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish that pretty easily. But what about all the resources built on top of SD1.5? Or all the users who don't have 10GB of VRAM? It might just be a bit too early to let go of DreamShaper.

    Not before one. Last. Push.

    And here it is. I hope you enjoy it. And thank you for all the support you've given me in recent months.

    PS: the primary focus is still art and illustration. Being good at everything comes second.


    Suggested settings:
    - Some of the example pics use CLIP skip 2; the model works with that too.
    - I have ENSD set to 31337 in case you need to reproduce some results, but that alone doesn't guarantee it.
    - All of them used highres.fix or img2img at a higher resolution. Some even use ADetailer. Be careful with that, though, as it tends to make all faces look the same.
    - I don't use "restore faces".
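    As a rough illustration of what the ENSD setting does (this is a simplified sketch, NOT Auto1111's actual noise code): the delta offsets the seed used for the sampler's extra "eta" noise, so the same image seed with a different ENSD produces different noise and therefore different results.

    ```python
    import random

    def eta_noise(seed: int, ensd: int = 0, n: int = 4) -> list[float]:
        # Simplified sketch (not Auto1111's actual implementation):
        # ENSD offsets the seed used when generating the sampler's
        # extra "eta" noise, so two runs with the same image seed but
        # different ENSD values diverge.
        rng = random.Random(seed + ensd)
        return [rng.random() for _ in range(n)]

    # Same seed, same ENSD -> reproducible noise
    assert eta_noise(42, ensd=31337) == eta_noise(42, ensd=31337)
    # Same seed, different ENSD -> different noise, different image
    assert eta_noise(42, ensd=0) != eta_noise(42, ensd=31337)
    ```

    This is why setting ENSD to 31337 matters for reproducing the examples: it changes the outcome deterministically, but any value works as long as it matches.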

    For old versions:
    - Versions >4 require no LoRA for anime style. For version 3 I suggest using one of these LoRA networks at 0.35 weight:
    -- https://civarchive.com/models/4219 (the girls with glasses or if it says wanostyle)
    -- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
    -- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example but works great).

    LCM

    Being a distilled model, it has lower quality compared to the base one. However, it's MUCH faster and perfect for video and real-time applications.

    Use it with 5-15 steps and ~2 CFG. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external plugin for it).

    Comparison with V7 LCM: https://civarchive.com/posts/951513
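    The suggested LCM settings above could look something like this with the Hugging Face diffusers library. This is only a sketch: the model id below is an assumption, so check the official repository (https://huggingface.co/Lykon/DreamShaper) for the real one.

    ```python
    # Sketch of running a DreamShaper LCM checkpoint with diffusers.
    # The model id is an assumption; verify it against the official repo.

    def build_lcm_pipeline(model_id: str = "Lykon/dreamshaper-8-lcm"):
        # Imports are local so the file can be inspected without
        # torch/diffusers installed.
        import torch
        from diffusers import AutoPipelineForText2Image, LCMScheduler

        pipe = AutoPipelineForText2Image.from_pretrained(
            model_id, torch_dtype=torch.float16
        ).to("cuda")
        # The LCM sampler is mandatory: swap in LCMScheduler for the
        # default scheduler.
        pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
        return pipe

    # Suggested settings from the description: 5-15 steps, ~2 CFG.
    LCM_SETTINGS = {"num_inference_steps": 8, "guidance_scale": 2.0}

    # Usage (requires a GPU and the model weights):
    # pipe = build_lcm_pipeline()
    # image = pipe("a dreamy portrait, intricate details", **LCM_SETTINGS).images[0]
    ```

    The scheduler swap is the critical step; with the default sampler the distilled model produces garbage at low step counts.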

    NOTES

    • Version 8 focuses on improving what V7 started. It might be harder to do photorealism than with realism-focused models, just as it might be harder to do anime than with anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!

    • Version 7 improves LoRA support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.

    • Version 6 adds more LoRA support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). The 6.x releases are all improvements.

    • Version 5 is the best at photorealism and has noise offset.

    • Version 4 is much better with anime (it can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption-style prompts, so you might still want to use version 3.31.

    • V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.

    • Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).

    • I get no money from any generative service, but you can buy me a coffee.

    • You should use 3.32 for mixing, so the CLIP error doesn't spread.

    • Inpainting models are only for inpaint and outpaint, not txt2img or mixing.

    Original v1 description:
    After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.

    I hope you'll enjoy it as much as I do.

    Official HF repository: https://huggingface.co/Lykon/DreamShaper


    Comments (18)

    ktiseos_nyx · May 25, 2023 · 5 reactions

    I have to say this plainly: Lykon's models inspire me to keep going. DreamShaper inspires me to get THAT good at making stuff, and AnyLora and Anime Pastel Dream inspired me to make Kilkenny Mix.

    Lykon (Author) · May 25, 2023

    Love you man

    aiwayfarer · May 25, 2023

    Really great model! A bit curious though: I tried pretty much copying the prompt for the first preview image but changed some parts to another character. Now I notice it's really inconsistent at achieving a 'red' eye colour. When the eyes do turn red, the hires.fix process usually turns them a kind of orange, or even something else entirely (a lot of the time they turn blue, much like the eyes in your preview image).

    I'd just like to ask if you knew about something like this, or if you could help me understand how the hires.fix process usually works. Is it influenced by the prompt as well, or does it only take what's already in the generated image and enhance it?

    Still an amazing model and really appreciate what you do for the community <3

    Lykon (Author) · May 25, 2023

    I've never had any problem making red-eyed characters. It might be some other keyword interfering, or the negative embedding being too strong.

    kurd · May 25, 2023 · 1 reaction

    What is that ENSD? I can't recreate the first image; it only does closeups, so I think it must be that, or something to do with the sampler. Though removing "portraits of" makes it much more similar.

    Lykon (Author) · May 25, 2023

    It's a setting that alters the random noise generation. It's probably not the only one you're missing; I suggest you check the comments on the first image.

    kurd · May 28, 2023

    It was the deterministic setting. But how do I change that ENSD? Or is it just random and useless to set a specific one?

    Lykon (Author) · May 28, 2023

    @kurd also another setting.

    mariodian · May 31, 2023

    @kurd look for Eta Noise Seed Delta in Automatic1111. Different ENSD values give different results.

    fulforget85886 · May 31, 2023

    Also, increase your batch count. If you only generate one image with that seed, the one in the post could be the 10th one generated.

    Lykon (Author) · May 25, 2023 · 8 reactions

    In case you're having trouble reproducing some of the results, you might have to change these two settings:
    - "Do not make DPM++ SDE deterministic across different batch sizes."
    - Set the ETA Noise Seed Delta (ENSD) to 31337

    3DAscension · May 28, 2023

    Where can we download the model DreamShaper_6_beta2_poc2?

    Lykon (Author) · May 28, 2023

    @3DAscension It's just the name it had before release. It's the same model and hash as the v6 bakedvae you see here (fp16 not pruned, but the pruned one will give you the same results).

    3DAscension · May 29, 2023

    @Lykon Oh okay, thanks!!

    tmack3 · Jun 15, 2023

    What does changing the ETA Noise Seed Delta actually do? From what I've read it just changes the random number used to generate the noise, but if it's already random, does changing it make a difference?

    Lykon (Author) · Jun 15, 2023

    @tmack3 It's just for reproducibility; you don't need it.

    DominoPrincip · May 25, 2023

    I love your models and am toying with the idea of maybe using one in a later version of some of my own. You mentioned that "You should use 3.32 for mixing, so the clip error doesn't spread". I'm not as knowledgeable as I could be; could you explain what the clip error is, so I can keep an eye out for it?

    And thanks for all your work, you guys doing all the training keep everything here fresh, hopefully I can do my part soon too, cheers.

    Lykon (Author) · May 25, 2023

    This is based on the clip-fixed version.