CivArchive

    DreamShaper - V∞!

    Please check out my other base models, including SDXL ones!

    Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
    Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee
    🎟️ Commissions on Ko-Fi

    Join my Discord Server

    For LCM read the version description.

    Available on the following websites with GPU acceleration:


    Live demo available on HuggingFace (CPU is slow but free).

    New Negative Embedding for this: Bad Dream.

    Message from the author

    Hello hello, my fellow AI Art lovers. Version 8 just released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.

    DreamShaper started as a model to have an alternative to MidJourney in the open source world. I didn't like how MJ was handled back when I started and how closed it was and still is, as well as the lack of freedom it gives to users compared to SD. Look at all the tools we have now from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
    With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "swiss army knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish that pretty easily. But what about all the resources built on top of SD1.5? Or all the users that don't have 10GB of VRAM? It might just be a bit too early to let go of DreamShaper.

    Not before one. Last. Push.

    And here it is, I hope you enjoy. And thank you for all the support you've given me in the recent months.

    PS: the primary goal is still art and illustration. Being good at everything comes second.


    Suggested settings:
    - I had CLIP skip 2 on some pics; the model works with that too.
    - I have ENSD set to 31337, in case you need to reproduce some results, but it doesn't guarantee it.
    - All of them used highres.fix or img2img at higher resolution. Some even use ADetailer. Careful with that though, as it tends to make all faces look the same.
    - I don't use "restore faces".

    For old versions:
    - Versions 4 and later require no LoRA for anime style. For version 3, I suggest using one of these LoRA networks at 0.35 weight:
    -- https://civarchive.com/models/4219 (the girls with glasses or if it says wanostyle)
    -- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
    -- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example but works great).
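In the A1111 WebUI, a LoRA at 0.35 weight goes directly in the prompt (the network name below is illustrative; use the actual filename of the downloaded LoRA):

```
masterpiece, 1girl, anime style, <lora:wanostyle:0.35>
```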

    LCM

    Being a distilled model, the LCM version has lower quality compared to the base one. However, it's MUCH faster and perfect for video and real-time applications.

    Use it with 5-15 steps and ~2 CFG. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external plugin for it).

    Comparison with V7 LCM https://civarchive.com/posts/951513

    NOTES

    • Version 8 focuses on improving what V7 started. It might be harder to do photorealism compared to realism-focused models, just as it might be harder to do anime compared to anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!

    • Version 7 improves lora support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.

    • Version 6 adds more lora support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). 6.x are all improvements.

    • Version 5 is the best at photorealism and has noise offset.

    • Version 4 is much better with anime (can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption style, so you might still want to use version 3.31.

    • V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.

    • Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).

    • I get no money from any generative service, but you can buy me a coffee.

    • You should use 3.32 for mixing, so the CLIP error doesn't spread.

    • Inpainting models are only for inpaint and outpaint, not txt2img or mixing.

    Original v1 description:
    After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.

    I hope you'll enjoy it as much as I do.

    Official HF repository: https://huggingface.co/Lykon/DreamShaper

    Description

    NOT for training and NOT for txt2img. Use only for inpainting and outpainting if the base model is not powerful enough or good at removing stuff.

    FAQ

    Comments (41)

    Eoworfindir · Jul 4, 2023 · 2 reactions
    CivitAI

    Just wanted to say thanks for all the work you put into making everything! Cheers!

    poleax · Jul 4, 2023
    CivitAI

    I love your work! Quick question: should we use a VAE with 7? If so, which one?

    EvoK07 · Jul 4, 2023 · 1 reaction

    If you don't have one already, I think you can't go wrong with vae-ft-mse-840000-ema-pruned.safetensors (found on HuggingFace, from StabilityAI) for checkpoints that don't include one. If you use Automatic1111, put it in \models\VAE and then, in Settings -> Stable Diffusion, make sure to check "Ignore selected VAE for stable diffusion checkpoints that have their own..."

    poleax · Jul 4, 2023

    @EvoK07 Thanks!

    Lykon
    Author
    Jul 5, 2023

    @poleax  you can use Auto

    jimmy842022970 · Jul 4, 2023 · 1 reaction
    CivitAI

    How do I block this piece of trash xiodf3130? He keeps posting disgusting images to gross people out, and he's everywhere.

    bullseyetroll · Jul 4, 2023 · 1 reaction

    Open one of his pics and the option is in the top-right menu.

    richpizor323 · Jul 4, 2023 · 1 reaction
    CivitAI

    Does V7 require VAE? If so is a baked VAE version coming?

    Lykon
    Author
    Jul 5, 2023

    you can use Auto.

    bryantnsfw652 · Jul 4, 2023 · 1 reaction
    CivitAI

    I noticed a lot of art styles in your example images. What artists/styles is this model trained on?

    Lykon
    Author
    Jul 5, 2023· 1 reaction

    many, but most keywords are from SD1.5 base.

    TheLoraCollective · Jul 4, 2023
    CivitAI

    Can anyone please explain the point of inpainting models? I have had no issues inpainting with the regular models.

    xperia256 · Jul 5, 2023 · 4 reactions

    Inpainting models are much better and more accurate for inpainting: they detect the mask more accurately, and the generated image won't have visible mask edges like with regular models; you can try it yourself. They give much better results, and they're especially good at outpainting compared to regular models as well.

    On a side note, I personally prefer to inpaint with inpainting models rather than using the controlnet inpaint model.

    Lykon
    Author
    Jul 5, 2023· 3 reactions

    Adding to what xperia said, inpainting models are generally better at inpainting at higher resolutions and higher denoising strength.

    CaL9Mi7Y · Jul 5, 2023

    @xperia256 On that last note: Really? I've heard nothing but glowing praise for CN Inpaint, so you saying that "standalone" inpainting models are actually better than it (significantly, I presume?) has made me AT LEAST re-think my workflow. Especially considering CN Inpaint's other unsung advantages, like its ability to supplement fellow extensions like ADetailer, and, of course, its freedom to work with other checkpoint models. Unfortunately, because of IRL, I simply don't have time to do extensive comparisons, so I tend to take people's advice at face value, IF it makes sense to me of course. Hmm, IDK... In any case, thanks for the tidbit of info, mate. 👍

    xperia256 · Jul 5, 2023 · 2 reactions

    @CaL9Mi7Y Yeah, for what I do I prefer inpainting models over CN. With inpainting models you have more control over what you want to add and better fine-tuning. CN inpainting can be better if you want the exact same pose, or if you're inpainting a design and don't want the objects to change places, since it only replaces them with the same shape. For outpainting, however, inpainting models are better than ControlNet inpaint+lama in my experience. I inpaint/outpaint a lot and CN never gave me the real look inpainting models can. I recommend using some negative embeddings with inpainting models, like BadDream, FastNegativeEmbedding, or UnrealisticDream (depending on what you're inpainting); they can improve the quality and outcome significantly.

    CaL9Mi7Y · Jul 5, 2023 · 1 reaction

    @xperia256 I do like outpainting my stuff, but for whatever reason, I always end up using it minimally. But okay. I'll keep your tips in mind, just in case. Thanks.

    nnq2603 · Jul 6, 2023

    @Lykon Were the inpainting versions of your models (or of other popular models here) created with this method? reddit.com/r/StableDiffusion/comments/zyi24j/how_to_turn_any_model_into_an_inpainting_model/

    Lykon
    Author
    Jul 6, 2023

    @nnq2603 that's the only method as far as I know

    jerf · Jul 5, 2023
    CivitAI

    What is meant by "More LORA support" in the description? More compatibility with LORAs trained on different checkpoints?

    Lykon
    Author
    Jul 5, 2023

    mostly character loras trained on anylora or nai.

    S____ · Jul 5, 2023
    CivitAI

    @Lykon Can you help me? How do you train your inpaint model? I want to create an inpaint model but can't find any script to do it.

    Lykon
    Author
    Jul 5, 2023· 1 reaction

    Such a script doesn't exist. It's made by subtracting SD1.5 from your model and adding the difference to SD1.5-inpainting. This also makes your model and the inpainting model visually compatible, using the same text encoder.
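As a toy illustration of that add-difference merge (plain floats stand in for state-dict tensors here; with real checkpoints you'd do the same key-by-key arithmetic on torch tensors, which is what A1111's checkpoint merger "Add difference" mode does):

```python
def add_difference(base, custom, base_inpaint):
    """inpaint_custom = base_inpaint + (custom - base), key by key."""
    return {k: base_inpaint[k] + (custom[k] - base[k]) for k in base_inpaint}

# Toy weights: the custom model differs from SD1.5 by +0.2 on one key.
sd15 = {"w": 1.0}
dreamshaper = {"w": 1.2}
sd15_inpaint = {"w": 0.9}

merged = add_difference(sd15, dreamshaper, sd15_inpaint)
# merged["w"] == 0.9 + (1.2 - 1.0) == 1.1
```

Note that the real SD1.5-inpainting UNet has extra input channels for the mask, so in practice the merge tooling handles those mismatched keys by keeping the inpainting model's versions.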

    elRivx · Jul 5, 2023
    CivitAI

    Hi!

    Is V7 model VAE required?

    Greetings :8)

    Lykon
    Author
    Jul 5, 2023· 2 reactions

    you can use Auto.

    elRivx · Jul 6, 2023 · 1 reaction

    @Lykon Thanks a lot :8)

    Taloji · Jul 5, 2023
    CivitAI

    Is V7 only available as pruned? Are you planning to release a full model?

    Lykon
    Author
    Jul 5, 2023· 1 reaction

    I'll have to see if I can find it.

    zx96 · Jul 11, 2023

    @Lykon Would be great to have the unpruned one, yes, for archival reasons and to see if it makes any difference.

    amanjain1397212 · Jul 5, 2023
    CivitAI

    I was trying your model using the diffusers repository here at https://huggingface.co/Lykon/DreamShaper/tree/main

    When I load your model via Diffusers, I get this message that "unet/diffusion_pytorch_model.safetensors not found".

    Does this affect the model performance in any way?

    Thanks

    Lykon
    Author
    Jul 5, 2023

    well yeah, unet is the main part of the model.

    Lykon
    Author
    Jul 5, 2023

    odd, it should look for unet/diffusion_pytorch_model.bin, not sure why it's expecting safetensors. unet/diffusion_pytorch_model.bin is there however.

    amanjain1397212 · Jul 8, 2023

    I mean, I'm able to generate the outputs correctly. Not sure why this message keeps popping up.

    Lykon
    Author
    Jul 8, 2023

    @amanjain1397212 maybe it's just a warning? If you're still able to generate images correctly you can probably ignore it.

    amanjain1397212 · Jul 9, 2023

    Yeah seems like a warning to me too. Anyway thanks for the help! Keep up the good work!

    rentarrent940 · Jul 5, 2023
    CivitAI

    Are higher versions improvements or simply different flavors of the model?

    Lykon
    Author
    Jul 5, 2023· 1 reaction

    they're all improvements to me. Different flavors get posted as separate models (see Absolute Reality)

    nnq2603 · Jul 6, 2023
    CivitAI

    It seems to struggle to generate a decent shark image without spending too much time tweaking the prompt. Can you provide some tips for getting consistently decent shark renders? For example, an image of Tom Cruise riding a motorbike chased by a giant shark on the beach. Increasing the weight on the shark and/or adding negatives like BadDream, UnrealisticDream, or other popular negatives doesn't seem to work either (nor does going without any negative at all).

    Even just prompting for a shark as the main object of the image, it still looks horribly mutated with a normal/standard prompt.

    Lykon
    Author
    Jul 6, 2023

    I've tested a bit and sharks mostly look like sharks, while not perfectly anatomically correct (that's normal for a diffusion model not trained specifically on sharks). It's the "chasing" part of your prompt that doesn't seem to work, but that might be an issue with the text encoder. I suggest resorting to ControlNet for composition and to embeddings/LoRAs for shark anatomy.

    Lykon
    Author
    Jul 6, 2023

    that's what I mean: https://imgur.com/4GMfM8T

    MissyLuxii · Jul 6, 2023 · 1 reaction
    CivitAI

    Gorgeous