
    DreamShaper - V∞!

    Please check out my other base models, including SDXL ones!

    Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
    Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee
    🎟️ Commissions on Ko-Fi

    Join my Discord Server

    For LCM read the version description.

    Live demo available on HuggingFace (CPU is slow but free).

    New Negative Embedding for this: Bad Dream.

    Message from the author

    Hello hello, my fellow AI Art lovers. Version 8 just released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.

    DreamShaper started as a model meant to be an open-source alternative to MidJourney. I didn't like how MJ was handled back when I started, how closed it was (and still is), and the lack of freedom it gives users compared to SD. Look at all the tools we have now, from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
    With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "Swiss knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish it pretty easily. But what about all the resources built on top of SD1.5? Or all the users who don't have 10 GB of VRAM? It might just be a bit too early to let go of DreamShaper.

    Not before one. Last. Push.

    And here it is, I hope you enjoy. And thank you for all the support you've given me in recent months.

    PS: the primary goal is still art and illustrations. Being good at everything comes second.


    Suggested settings:
    - I had CLIP skip 2 on some pics; the model works with that too.
    - I have ENSD set to 31337 in case you need to reproduce some results, but it doesn't guarantee it.
    - All of them used highres.fix or img2img at a higher resolution. Some even used ADetailer. Careful with that though, as it tends to make all faces look the same.
    - I don't use "restore faces".

    For old versions:
    - Versions 4 and later require no LoRA for anime style. For version 3 I suggest using one of these LoRA networks at 0.35 weight:
    -- https://civarchive.com/models/4219 (the girls with glasses, or if it says wanostyle)
    -- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
    -- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example, but works great).

    LCM

    Being a distilled model, it has lower quality than the base one. However, it's MUCH faster and perfect for video and real-time applications.

    Use it with 5-15 steps and CFG around 2. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external plugin for it).

    Comparison with V7 LCM https://civarchive.com/posts/951513

    NOTES

    • Version 8 focuses on improving what V7 started. It may be harder to get photorealism than with realism-focused models, just as it may be harder to get anime than with anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!

    • Version 7 improves LoRA support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.

    • Version 6 adds more LoRA support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). The 6.x releases are all improvements.

    • Version 5 is the best at photorealism and has noise offset.

    • Version 4 is much better with anime (it can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption-style prompts, so you might still want to use version 3.31.

    • V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.

    • Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).

    • I get no money from any generative service, but you can buy me a coffee.

    • You should use 3.32 for mixing, so the clip error doesn't spread.

    • Inpainting models are only for inpaint and outpaint, not txt2img or mixing.

    Original v1 description:
    After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.

    I hope you'll enjoy it as much as I do.

    Official HF repository: https://huggingface.co/Lykon/DreamShaper

    Comments (27)

    doszhu · May 30, 2023 · 1 reaction

    Can someone tell me what is the difference between v6 vae baked and v6 diffuser please? Which one is better?

    Lykon (Author) · May 30, 2023 · 5 reactions

    They're exactly the same, just a different format. Diffusers files are meant to be used in Python and some clients, while safetensors are for Auto1111 and most other clients.

    doszhu · May 31, 2023

    @Lykon thanks

    xtra · Jun 4, 2023 · 1 reaction

    diffusers are currently used in InvokeAI and perhaps others

    SugarBot · May 31, 2023

    Thanks for the amazing work.
    I noticed there is an inpainting model; what's the difference between the inpainting one and the normal ones?
    And can people use v6-inpainting on v5-based pics seamlessly?

    Lykon (Author) · May 31, 2023

    It's possible. However, you'd better use the dedicated v5 inpainting.

    Inpainting models are based on SD-inpainting and work much better for inpainting large areas at 100% denoising strength and at any resolution.

    SugarBot · May 31, 2023

    @Lykon Since I started playing with SD, I've hardly noticed any difference between using an inpainting model or not.
    My steps are: masking, checking latent noise, changing 1 or 2 target words or the seed, then generating another. Is there anything wrong with that workflow?

    Lykon (Author) · May 31, 2023

    @asuka1zero you're probably using a low denoising strength. If you need to do 100% strength inpainting at any resolution, a normal model won't work.

    limblessgirl4 · Jun 1, 2023

    What is the preferred upscaler? Could really use a recommendation for both photo and anime styles.

    Lykon (Author) · Jun 1, 2023 · 2 reactions

    depends on the results you're looking for.
    anime -> anime video or foolhardy
    drawings -> none or latent (with high denoise)
    photos -> NMKD-Superscale or latent (with high denoise)

    Diffussy · Jun 1, 2023

    4x-UltraSharp is my go-to; there's an anime version as well

    Lykon (Author) · Jun 1, 2023

    @Diffussy yeah, most GANs are very similar anyway. I like foolhardy for anime because it makes well-defined sharp lines and flat colors; other than that, they're mostly the same. And then there's the Latent family, which is not really upscaling but more a way to force SD to make more stuff and go crazy.

    limblessgirl4 · Jun 2, 2023

    @Diffussy I don't seem to have 4x ultrasharp. Where can I find that?

    limblessgirl4 · Jun 2, 2023

    @Lykon hmm... have Anime video and Anime6b, but not the rest of those. Is there a repo or a GIT I need to add? 

    Lykon (Author) · Jun 2, 2023

    @limblessgirl4172 they're quite easy to find on Google. I'm not home right now; if you can't find them, just reply here and I'll help you once I'm back

    Ant6431 · Jun 2, 2023

    Is the full model also fp16, not fp32?

    Lykon (Author) · Jun 2, 2023

    The full model can be fp16. Half precision is different from pruning.

    Ant6431 · Jun 2, 2023

    @Lykon Thank you very much

    xerox604266 · Jun 2, 2023

    @Lykon can you upload fp32 too? I found it slightly reduces errors, especially in rendering eyes at higher resolutions. thx

    Lykon (Author) · Jun 2, 2023 · 1 reaction

    @xerox604266 I made this at fp16, so it wouldn't change anything; it would just add zeros. Also, you can simulate it with the --no-half option.
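The point about fp32 "just adding zeros" can be checked directly: casting fp16 values up to fp32 and back is lossless, so an fp32 export of a model trained at fp16 carries no extra information, only extra bytes. A quick numpy check (the values are illustrative):

```python
import numpy as np

# Pretend these are model weights stored in half precision.
w16 = np.array([0.1, -1.5, 3.14, 1e-4], dtype=np.float16)

# "Upcasting" to fp32 pads the mantissa with zeros...
w32 = w16.astype(np.float32)

# ...so casting back down recovers exactly the same values.
roundtrip = w32.astype(np.float16)
assert np.array_equal(w16, roundtrip)  # nothing gained, nothing lost
```

The fp32 copy does double the file size, which is why fp16 checkpoints are the practical default.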

    1620741 · Jun 8, 2023

    @Lykon @ant_asmr151 but isn't it important that the earlier inpainting model is fp32, so it could still be used?

    Lykon (Author) · Jun 8, 2023

    @DuziaLipa inpainting is still calculated as a diff. No, it doesn't matter that this was made at fp16; it just makes it lighter and makes it use less VRAM. Also, most models I see now are around 2 GB, which means they're fp16 and pruned. I usually make both pruned and unpruned versions available.
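The "calculated as a diff" remark matches the common add-difference recipe for building a custom inpainting checkpoint: start from the official SD inpainting model and add the difference between the custom model and the SD base. A toy numpy sketch (real checkpoints apply this per weight tensor; the values here are made up):

```python
import numpy as np

# Toy stand-ins for one weight tensor from each checkpoint.
sd_base = np.array([1.0, 2.0, 3.0], dtype=np.float32)     # SD 1.5 base
sd_inpaint = np.array([1.1, 2.2, 3.3], dtype=np.float32)  # SD 1.5 inpainting
custom = np.array([1.5, 1.8, 3.5], dtype=np.float32)      # e.g. a custom mix

# Add-difference merge: keep the inpainting machinery, then swap in
# the custom model's learned deviation from the base.
custom_inpaint = sd_inpaint + (custom - sd_base)
```

Because only the difference is carried over, the precision of the source checkpoints matters far less than the inpainting conditioning the SD-inpainting base provides.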

    maxhitman · Jun 3, 2023 · 4 reactions

    ALERT --- ALERT
    Hey Lykon, someone is using your posted images, and perhaps other stuff... they just posted this

    -- its now deleted ---

    Lykon (Author) · Jun 3, 2023 · 4 reactions

    Thanks. Reported it.

    Veira · Jun 3, 2023 · 3 reactions

    And it's gone ^^ good job