
    DreamShaper - V∞!

    Please check out my other base models, including SDXL ones!

    Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
    Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee
    🎟️ Commissions on Ko-Fi

    Join my Discord Server

    For LCM read the version description.

    Available on the following websites with GPU acceleration:


    Live demo available on HuggingFace (CPU is slow but free).

    New Negative Embedding for this: Bad Dream.

    Message from the author

    Hello hello, my fellow AI Art lovers. Version 8 just released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.

    DreamShaper started as a model to have an alternative to MidJourney in the open source world. I didn't like how MJ was handled back when I started and how closed it was and still is, as well as the lack of freedom it gives to users compared to SD. Look at all the tools we have now from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
    With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "swiss army knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish that pretty easily. But what about all the resources built on top of SD1.5? Or all the users that don't have 10GB of VRAM? It might just be a bit too early to let go of DreamShaper.

    Not before one. Last. Push.

    And here it is, I hope you enjoy. And thank you for all the support you've given me in the recent months.

    PS: the primary goal is still art and illustrations. Being good at everything comes second.


    Suggested settings:
    - I had CLIP skip 2 on some pics; the model works with that too.
    - I have ENSD set to 31337, in case you need to reproduce some results, but it doesn't guarantee it.
    - All of them had highres.fix or img2img at higher resolution. Some even use ADetailer. Be careful with that though, as it tends to make all faces look the same.
    - I don't use "restore faces".

    For old versions:
    - Versions 4 and later require no LoRA for anime style. For version 3 I suggest using one of these LoRA networks at 0.35 weight:
    -- https://civarchive.com/models/4219 (the girls with glasses or if it says wanostyle)
    -- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
    -- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example but works great).
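    If you drive these older versions through the diffusers library instead of a WebUI, the suggested 0.35 weight corresponds to the LoRA scale. A minimal sketch, assuming a locally downloaded LoRA file (the file path and prompt below are placeholders, not files shipped with the model):

```python
# Sketch: applying an anime-style LoRA at the suggested 0.35 weight
# with the diffusers library. Nothing heavy runs at import time.
LORA_SCALE = 0.35  # weight suggested above for the version-3 LoRAs

def generate_with_lora(lora_path: str, prompt: str):
    """Needs `pip install diffusers torch` and a GPU; call explicitly."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "Lykon/DreamShaper",  # official HF repo linked on this page
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_lora_weights(lora_path)
    # Scale down the LoRA's influence to the suggested weight:
    return pipe(prompt,
                cross_attention_kwargs={"scale": LORA_SCALE}).images[0]
```

    For version 4 and later the anime style is baked in, so the LoRA (and the scale) can simply be dropped.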

    LCM

    Being a distilled model, it has lower quality than the base one. However, it's MUCH faster and perfect for video and real-time applications.

    Use it with 5-15 steps and ~2 CFG. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external plugin for it).

    Comparison with V7 LCM https://civarchive.com/posts/951513
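    For reference, the same constraints expressed with the diffusers library, where the scheduler swap is what "LCM sampler" means. The model id below is the repo linked on this page and is an assumption; the actual LCM file may live elsewhere:

```python
# Sketch: running an LCM-distilled checkpoint with diffusers using the
# settings recommended above. Nothing heavy runs at import time.
LCM_STEPS = 8    # suggested range: 5-15
LCM_CFG = 2.0    # suggested CFG: ~2

def lcm_settings_ok(steps: int, cfg: float) -> bool:
    """Validate settings against the ranges recommended above."""
    return 5 <= steps <= 15 and 1.0 <= cfg <= 2.5

def generate_lcm(prompt: str):
    """Needs `pip install diffusers torch` and a GPU; call explicitly."""
    import torch
    from diffusers import DiffusionPipeline, LCMScheduler

    pipe = DiffusionPipeline.from_pretrained(
        "Lykon/DreamShaper", torch_dtype=torch.float16).to("cuda")
    # The distilled model works ONLY with the LCM sampler:
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    assert lcm_settings_ok(LCM_STEPS, LCM_CFG)
    return pipe(prompt, num_inference_steps=LCM_STEPS,
                guidance_scale=LCM_CFG).images[0]
```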

    NOTES

    • Version 8 focuses on improving what V7 started. It might be harder to do photorealism than with realism-focused models, just as it might be harder to do anime than with anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!

    • Version 7 improves lora support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.

    • Version 6 adds more lora support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). 6.x are all improvements.

    • Version 5 is the best at photorealism and has noise offset.

    • Version 4 is much better with anime (can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption style, so you might still want to use version 3.31.

    • V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.

    • Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).

    • I get no money from any generative service, but you can buy me a coffee.

    • You should use 3.32 for mixing, so the clip error doesn't spread.

    • Inpainting models are only for inpaint and outpaint, not txt2img or mixing.

    Original v1 description:
    After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.

    I hope you'll enjoy it as much as I do.

    Official HF repository: https://huggingface.co/Lykon/DreamShaper


    Comments (37)

    VonChenPlus · Jun 16, 2023

    Very nice work, really like the model.

    Can you tell me how many images used for training?

    Lykon
    Author
    Jun 16, 2023

    of which version? The last one is basically cumulative, so I might have lost count.

    VonChenPlus · Jun 16, 2023

    @Lykon Thank you for your reply. The last traceable version? What is the approximate order of magnitude?

    Lykon
    Author
    Jun 16, 2023· 1 reaction

    @VonChenPlus around 5k for the last version

    94655 · Jun 16, 2023

    Is the diffusers version the same as a "regular" non-vae version? I've been waiting for a version of 6.31 that doesn't have the VAE baked in (and isn't an inpainting model) and I was wondering if this is it.

    Lykon
    Author
    Jun 16, 2023

    So first of all, every ckpt has a VAE; it must have one. If you remove it you get a grey picture, since you can't encode/decode noise or images. "BakedVae" means the VAE is selected manually by me instead of being the one from the original DreamShaper mix that I finetune. I keep the label now, but you may notice I stopped adding the "no vae" versions. Point is, I might still upload those, because the "shitty vae" versions are sometimes better for training, as the good VAE is very vibrant.

    Anyway, diffusers always use the good VAE, i.e. the one from the baked-VAE version. In this case it's still the SD VAE, but I might switch to a better one in future versions.
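    In diffusers terms, a "baked" VAE just means which autoencoder ships inside the checkpoint, and it can always be overridden at load time. A hedged sketch (the VAE repo id below is the standard SD one, not necessarily the one baked in here):

```python
# Sketch: overriding a pipeline's VAE with diffusers. Repo ids are
# assumptions; swap in whichever VAE you actually want.
def latent_shape(height: int, width: int) -> tuple:
    """The SD1.5 VAE compresses images 8x spatially into 4 latent
    channels, which is why a checkpoint can't work without one."""
    return (4, height // 8, width // 8)

def load_with_custom_vae():
    """Needs `pip install diffusers torch`; call explicitly."""
    import torch
    from diffusers import AutoencoderKL, StableDiffusionPipeline

    vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
    return StableDiffusionPipeline.from_pretrained(
        "Lykon/DreamShaper", torch_dtype=torch.float16, vae=vae)
```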

    94655 · Jun 17, 2023

    @Lykon Thanks for the detailed explanation. I tend to use the default vae that comes with SD (vae-ft-mse-84000-ema-pruned) for most of my renders.

    I've found that a lot of other custom VAEs have mixed results (also I'm using Easy Diffusion so that makes things slightly different than WebUI). That's why I usually lean towards models that don't have a custom VAE baked in (or don't explicitly say they have a new VAE baked in)

    94655 · Jun 17, 2023

    @Lykon Also, I prefer non-VAE versions for merging models myself, as things get too chaotic when there's a custom VAE built into a model.

    fansay · Jun 16, 2023 · 1 reaction
    Lykon
    Author
    Jun 16, 2023

    second link says 404 :(
    @JustMaier bug?

    fansay · Jun 16, 2023

    @Lykon Flagged for review

    This image won't be visible to other users until it's reviewed by our moderators.

    Lykon
    Author
    Jun 16, 2023

    @fansay I see

    1889458 · Jun 16, 2023

    One of the best models out there!

    Dantegonist · Jun 18, 2023

    Hey. Your Discord invite isn't working :(

    Lykon
    Author
    Jun 18, 2023· 1 reaction

    updated

    MrOhyao · Jun 19, 2023

    Hi! I like your models. Can anyone tell me what the Diffusers model is for?

    Lykon
    Author
    Jun 19, 2023· 3 reactions

    it's the file format for InvokeAI and the diffusers library in Python

    Snow_T · Jun 23, 2023 · 2 reactions

    Lykon, thank you very much for this wonderful model. I can create anything that comes to my mind and thanks to your work, I found a great hobby!

    xtra · Jun 23, 2023

    Hey, DreamShaper is air to me. Can I help you at all? Like, is there some set of training data images you need someone to work on retrieving? Or anything like that?

    OneViolentGentleman · Jun 23, 2023 · 3 reactions

    It is an amazing model. I just wish it wouldn't put cowboy hats on people when I prompt for "cowboy shot". That isn't what I mean, dammit! 😂

    DonMclame · Jun 24, 2023 · 1 reaction

    try adding cowboy hat to the negative prompt

    @dwellerdom Why didn't I think of that, lol. Thanks!

    Lykon
    Author
    Jun 26, 2023

    cowboy shot is a booru tag.

    @Lykon Hmm, I'm not quite sure what you mean. Should I avoid certain tags or something? Isn't pretty much everything a booru tag?

    Lykon
    Author
    Jun 27, 2023

    @StatOpre some booru tags are not 100% compatible with caption-trained models.

    ah, thanks. :)

    johnnys_ai · Jun 25, 2023

    I'm unable to load this model, as it gives me this error when I try. This error only comes up with this specific model, weirdly. Any idea how to fix it?

    SafetensorError: Error while deserializing: MetadataIncompleteBuffer
    ERROR: Failed to load checkpoint(...path for dreamshaper631..)

    Lykon
    Author
    Jun 26, 2023

    this is likely a problem on your end.

    settima_ai · Jun 28, 2023 · 1 reaction

    Partial download, probably.

    johnnys_ai · Jun 28, 2023 · 1 reaction

    @settima_ai @Lykon You're right. After redownloading 3 times I got it to work correctly. Thank you!
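    The `MetadataIncompleteBuffer` error in this thread means the file ends before the header (or the tensor data it promises) is complete, which is exactly what a partial download looks like. A quick plain-Python sanity check, based on the published safetensors layout (8-byte little-endian header length, JSON header, then raw tensor data):

```python
# Checks whether a safetensors file is plausibly complete, without
# needing the safetensors library or loading any weights.
import json
import struct

def safetensors_ok(path: str) -> bool:
    """Return True if the header parses and every tensor's byte range
    fits inside the file; False for a truncated/partial download."""
    try:
        with open(path, "rb") as f:
            raw = f.read(8)
            if len(raw) < 8:
                return False
            (n,) = struct.unpack("<Q", raw)  # header length
            header = f.read(n)
            if len(header) < n:
                return False  # the JSON header itself is cut off
            meta = json.loads(header)
            f.seek(0, 2)
            data_len = f.tell() - 8 - n  # bytes of actual tensor data
            end = max((v["data_offsets"][1]
                       for k, v in meta.items()
                       if k != "__metadata__"), default=0)
            return end <= data_len
    except (OSError, ValueError, KeyError):
        return False
```

    If this returns False, redownloading the checkpoint (as above) is the fix.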

    Ketsuban · Jun 26, 2023

    Thanks for this checkpoint. I notice it doesn't seem to do well when asked for both horns and a hat: mostly it ignores the fact you asked for horns, and trying to force it by adding "horns through headwear" to the positive prompt tends to produce blended results. Is this something that could be improved by a LoRA?

    Lykon
    Author
    Jun 27, 2023

    yeah you can probably make a lora or embedding for that.

    xperia256 · Jun 30, 2023

    @Lykon One question, if I may ask: do you currently use Dreambooth from the kohya_ss UI to create/train your models and LoRAs, or do you use something else entirely? I wanted to create my own model but I don't know the best way to do it :P Thanks in advance.

    Semesis · Jul 2, 2023 · 1 reaction

    I use Kohya_ss on Google Colab and it works very well and fast 👍🏼 I find Dreambooth takes too long and the results come out narrow.

    Lykon
    Author
    Jul 2, 2023· 1 reaction

    I use scripts based on kohya

    2mykent6 · Jul 2, 2023

    I have seen some artists' style collections made with DreamShaper version 3.2. Why is that version not listed here? Where can I download it?

    Lykon
    Author
    Jul 2, 2023

    3.2 never existed. It was probably 3.32.
    Anyway, there is no reason to ever use 3.32. Just use 4, since it's a fix over 3.32 that gives basically the same results. But I'd always go with the latest version available (6.31 at the moment).