DreamShaper - V∞!
Please check out my other base models, including SDXL ones!
Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee ☕
🎟️ Commissions on Ko-Fi
Join my Discord Server
For LCM read the version description.
Available on the following websites with GPU acceleration:
Live demo available on HuggingFace (CPU is slow but free).
New Negative Embedding for this: Bad Dream.
Message from the author
Hello hello, my fellow AI art lovers. Version 8 has just been released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.
DreamShaper started as a model to have an alternative to MidJourney in the open source world. I didn't like how MJ was handled back when I started and how closed it was and still is, as well as the lack of freedom it gives to users compared to SD. Look at all the tools we have now from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "Swiss Army knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish that pretty easily. But what about all the resources built on top of SD1.5? Or all the users who don't have 10 GB of VRAM? It might just be a bit too early to let go of DreamShaper.
Not before one. Last. Push.
And here it is, I hope you enjoy. And thank you for all the support you've given me in the recent months.
PS: the primary goal is still art and illustration. Being good at everything comes second.
Suggested settings:
- I had CLIP skip 2 on some pics, the model works with that too.
- I have ENSD set to 31337, in case you need to reproduce some results, but it doesn't guarantee it.
- All of them used highres.fix or img2img at a higher resolution. Some even use ADetailer. Careful with that though, as it tends to make all faces look the same.
- I don't use "restore faces".
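Taken together, the suggestions above map onto A1111-style generation parameters roughly like this (the prompt, sampler, step count, CFG, and seed here are purely illustrative assumptions; only Clip skip 2, ENSD 31337, and the hires pass come from the notes above):

```
masterpiece, portrait of a woman, intricate details
Negative prompt: BadDream
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 12345, Size: 512x768, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Denoising strength: 0.5
```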
For old versions:
- Versions 4 and later require no LoRA for anime style. For version 3, I suggest using one of these LoRA networks at 0.35 weight:
-- https://civarchive.com/models/4219 (the girls with glasses or if it says wanostyle)
-- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
-- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example but works great).
LCM
Being a distilled model, it has lower quality than the base one. However, it's MUCH faster and perfect for video and real-time applications.
Use it with 5-15 steps and ~2 CFG. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external plugin for it).
Comparison with V7 LCM https://civarchive.com/posts/951513
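As a rough sketch, the recommended settings above translate to the diffusers library like this (the `Lykon/dreamshaper-8-lcm` repo id, the pipeline class, and the `RUN_DEMO` toggle are my assumptions for illustration; check the official HF repository for the real usage):

```python
# Recommended LCM settings from the notes above: 5-15 steps, CFG ~2,
# and the LCM sampler only.
LCM_SETTINGS = {"num_inference_steps": 8, "guidance_scale": 2.0}

RUN_DEMO = False  # flip to True on a machine with a GPU and diffusers installed

if RUN_DEMO:
    # Requires `pip install diffusers transformers accelerate torch`.
    import torch
    from diffusers import AutoPipelineForText2Image, LCMScheduler

    pipe = AutoPipelineForText2Image.from_pretrained(
        "Lykon/dreamshaper-8-lcm", torch_dtype=torch.float16
    ).to("cuda")
    # Swap in the LCM sampler; per the note above, other samplers won't work.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

    image = pipe("portrait of a woman, intricate details", **LCM_SETTINGS).images[0]
    image.save("lcm_demo.png")
```

Anything in the 5-15 step range stated above is worth trying.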
NOTES
Version 8 focuses on improving what V7 started. It may be harder to get photorealism out of it than out of realism-focused models, just as it may be harder to get anime out of it than out of anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!
Version 7 improves lora support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.
Version 6 adds more lora support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). 6.x are all improvements.
Version 5 is the best at photorealism and has noise offset.
Version 4 is much better with anime (it can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption style, so you might still want to use version 3.31.
V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.
Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).
I get no money from any generative service, but you can buy me a coffee.
You should use 3.32 for mixing, so the clip error doesn't spread.
Inpainting models are only for inpaint and outpaint, not txt2img or mixing.
Original v1 description:
After a lot of tests, I'm finally releasing my mix model. This started as a model to make good portraits that don't look like CG or photos with heavy filters, but more like actual paintings. The result is a model capable of doing the portraits I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks for anime-style images.
I hope you'll enjoy it as much as I do.
Official HF repository: https://huggingface.co/Lykon/DreamShaper
Comments (61)
Loving the model, my go-to so far. Do you happen to have one that doesn't struggle with mermaids?
seems to me like this one works just fine https://civitai.com/images/1462159?period=AllTime&periodMode=published&sort=Newest&view=categories&modelVersionId=109123&modelId=4384&postId=373280
@Lykon Kinda, it does add random elements at times. I get fins where you wouldn't expect them, and bends are at times too strong. I tried both prompts, mermaid and kneeless mermaid (that was a suggested one). I couldn't make inpainting work, but I'm still learning, so it might be user error there.
@Taiatari Stable Diffusion 1.5 is kind of trash at anatomy; that's why we compensate with negative prompts. I mean, look at hands. What I'd do is simply make minor fixes manually and inpaint, or find a better seed, or use ControlNet.
Or you can wait for SDXL finetunes ;)
@Lykon tbh, I'd rather not look at hands, they can be fuel for nightmares. I get amazing results with this model; I just found that the "mermaid" prompt suffers a tad more than others.
Have you tried img2img/sketch with a mermaid input ?
What's the difference between the base 7 and the diffusers version?
different format, same model
Today's a good day, fresh DreamShaper out of the oven 🗿
how do I use the 7-diffusers version? do I just put the DreamShaper folder in the models/Stable-diffusion folder?
it's for InvokeAI and the Python diffusers library. Just use the v7 safetensors in Auto1111 and Comfy
Version 7 is actually a new level. I love it.
Really? V7 is working infinitely worse for me :/
@seikkv7971 in what sense?
7 is a little worse than 6.32 for me. But I use a specific set of LoRAs and prompts, so it's probably just that version 7 isn't fond of some of my prompts.
@WolfAI_ you probably just found a setup that works for v6. No way v7 is worse :)
@Lykon Yes, probably that's the case. Need to tweak my setup a little bit. Thanks for the amazing weights as always :)
Sorry for the noob question, but I'm having trouble googling the answer. What's the difference between the versions of the model marked as "inpainting", "diffusers", "baked-vae", and the normal version? What are they used for or specialized in? Thanks in advance!
Inpainting should be used only during img2img inpainting. You can also inpaint with the original, but it may not be as good at blending in the inpaints. Diffusers versions are specially packaged versions of the model needed by some SD user interfaces (though not Automatic1111). Baked-VAE means you can't swap in your choice of VAE, but an appropriate one is already included; if you're asking this question, this is the best one for you. The normal version allows you to apply any VAE you want, but it requires some knowledge to do this appropriately.
@pupdike Thank you pupdike!
You rule, v3.31 is still my most used model I've downloaded since months ago.
are the images generated with this model copyrighted free?
Brother, this is the wild west stage of AI, have fun while it lasts
depends on the subject. Pikachu, for example, is protected IP.
DreamlikeDiffusion and some others (CSR, I believe) do not allow you to create or mix for commercial purposes. Everything else is fair game. DreamDiff happens to have an image watermark that you can see in some generated images. Check the license of specific models. Speaking generally about the site: don't assume anyone did their due diligence to stay within the license. If you want to make things commercially, you should learn to mix your own.
What's the difference among the 3 v7 models?
the inpainting one is for inpainting only; the diffusers format is just a different packaging of the same model (used by the diffusers library and InvokeAI)
Tested v7 on some female nude portraits & it's very nice! Light weights of 'add detail' lora complement it.
Works very well combined with multiple loras, amazing work!
Is there a way to make the female faces less anime/manga style?
No matter how hard I try, they always come out like this
If you're trying to keep an animated flavor but not an anime style, add something like Western Animation to the positive prompt, combined with things like Anime, Asian in the negative prompt.
If you're trying to get something more realistic than animated, but without anime body and face proportions, add things like Hyperrealism, highly detailed face, highly detailed skin, sharp details, RAW photo, 8k UHD, DSLR, film grain, Fujifilm XT3 to the positive prompt, combined with things like Anime, Asian, child, illustration, 3d, sepia, painting, cartoon, sketch in the negative prompt.
Increase the strength of those prompts by placing them first or towards the front of your prompt text and adding emphasis with brackets and a numerical weight higher than that of any of your other prompt data.
So, for example, if you really wanted to push the render away from an anime cartoon style you could put (Western Animation:1.3) first in your positive prompt and (Anime:1.2) first in your negative prompt. Once you run some test batches and see you're getting the style and composition you want, then start adding in your other detail prompts on the same seed and progressively push it towards the final result you're looking for.
A.I. art generation is an iterative process. Start with base prompts to get the foundational style and subject matter you want in your image and then add in more and more detailed prompts behind the base prompts on that foundational seed while using the x/y/z plot tool to see how step number and cfg affect the prompt and sampler you're using.
Also keep in mind that sometimes less is more. A lot of times people over-prompt or overuse emphasis in their prompts and create more problems than they fix. A better way to prompt is to move what you really want up front and what isn't as important towards the back. If moving it up doesn't fix it, then add brackets. If that doesn't fix it then add numerical data within the brackets to really push it. You should never end up with things bracketed towards the back of a prompt as it then takes emphasis away from the things you have up front and then you end up bracketing them and so on and so on until your prompt is a giant bloated mess.
Dreamshaper is a pretty diverse model so if you dial in your prompting there isn't much you can't achieve with it. Good luck. =)
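The `(text:weight)` emphasis syntax described above can be made concrete with a tiny parser. This is a simplified sketch of A1111-style weighting, not the real implementation: it handles only the explicit `(text:1.3)` form on comma-separated chunks, not nesting or `[text]` de-emphasis:

```python
import re

def parse_emphasis(prompt: str):
    """Split a comma-separated prompt into (text, weight) pairs.

    Handles only the explicit "(text:1.3)" form; plain chunks get
    weight 1.0. Real UIs also support nesting and "[text]" de-emphasis.
    """
    pairs = []
    for chunk in prompt.split(","):
        chunk = chunk.strip()
        if not chunk:
            continue
        m = re.fullmatch(r"\((.+):([\d.]+)\)", chunk)
        if m:
            pairs.append((m.group(1).strip(), float(m.group(2))))
        else:
            pairs.append((chunk, 1.0))
    return pairs

# The example from the comment above:
print(parse_emphasis("(Western Animation:1.3), detailed face"))
# → [('Western Animation', 1.3), ('detailed face', 1.0)]
```

This also makes the earlier point visible: anything without explicit brackets sits at a baseline weight of 1.0, so a 1.3 pushed to the front dominates the rest of the prompt.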
look at the examples, not all of them are anime-like :)
@DAV3X Thank you, I have followed your advice, and apart from using the words that you have indicated, I have removed others that were left over from the prompt, and that works for me :)
Is this one made with a baked VAE?
yep (every model is, even the ones labeled as "no vae")
What resolution is recommended to render at? (trying to recreate some of the pictures here)
You can copy the generation data of the images (bottom right when looking at an image) and paste it in a text document. It will show you all the info you need, including the size
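The copied generation data follows a fairly regular layout: one or more prompt lines, an optional `Negative prompt:` line, and a final comma-separated settings line starting with `Steps:`. A small parser assuming that A1111-style format (the field names below are the conventional ones; treat this as a sketch, not a spec):

```python
def parse_generation_data(text: str) -> dict:
    """Parse A1111-style generation data into prompt, negative prompt,
    and a settings dict (Steps, Sampler, CFG scale, Seed, Size, ...)."""
    lines = [l for l in text.strip().splitlines() if l.strip()]
    result = {"prompt": "", "negative_prompt": "", "settings": {}}
    settings_line = ""
    prompt_lines = []
    for line in lines:
        if line.startswith("Negative prompt:"):
            result["negative_prompt"] = line[len("Negative prompt:"):].strip()
        elif line.startswith("Steps:"):
            settings_line = line
        else:
            prompt_lines.append(line)
    result["prompt"] = " ".join(prompt_lines)
    # The settings line is comma-separated "Key: value" pairs.
    for part in settings_line.split(","):
        if ":" in part:
            key, _, value = part.partition(":")
            result["settings"][key.strip()] = value.strip()
    return result

data = """portrait of a woman, intricate details
Negative prompt: BadDream
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1234, Size: 512x768"""
print(parse_generation_data(data)["settings"]["Size"])  # → 512x768
```

The `Size` field is the one to look at for the render resolution question above; the hires-fix target, when present, appears as extra `Hires ...` keys on the same line.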
Love the checkpoint, was wondering if you're using a specific VAE or not.
sd vae (ft mse)
It is the model I use the most. It is possibly in the top 3 of civitai. Very lora friendly. Greetings and my congratulations to its creator.
thanks
One of my favorite models. Still use V3 now. Maybe should update. Thanks a lot.
I suggest you do update ;)
Already updated, thanks. Now it's SDXL's turn.
He lies in the image data; I copy the data and not one of them comes out as the same image... what a smartass.
I tried to reproduce his images and could do it without a problem; they come out very similar.
Lykon usually uses the following negative embeddings: BadDream, UnrealisticDream, FastNegativeV2. Maybe you're missing one of them?
try to match the settings and get the embeddings
to get the images truly identical, you'd even need the same graphics card
It's not a great photorealism model but it's a 10/10 in basically every other regard.
Really interested to see what the creator can come up with for SDXL if the community eventually decides to migrate to it. Because they're the gold standard for 1.5 IMO.
yeah Dreamshaper is not made with photorealism in mind. V7 is simply "better" than v6.
If you want a photorealistic version of this go to AbsoluteReality.
I've also posted an Alpha version of DreamShaper XL, you can check that out to get an idea.
Tutorial with Dreamshaper for RPG portraits
https://youtu.be/IXPN0u9b_2Q
Tutorial for Dreamshaper on RPG scenes
https://youtu.be/J4mCWc6B6Y8
Tutorial for Dreamshaper easy and Fast
How do you save a LoRA together with the base model so it can be loaded with diffusers' from_ckpt()? How did you save your customized models?
sideline, but can someone tell me where I can find and download <lora:ER4ZQV5:1>?
Why don't you make custom models, for example:
1. backgrounds/landscapes
2. people/animals
3. objects
That would be cool. If there were 3 models, each for a specific thing, you could then merge them.
Please do not make this model any more photorealistic. Dreamshaper was perfect because it made inspiring and sophisticated art in different styles.
Every model out there is photorealistic or anime, please don't become like everyone else.
I disagree, I think it looks better now. If you want a drawn style, ReV Animated is very good at that.
"This model permits users to: ✓Sell images they generate " that mean i can sell my generated pic on stock photos website?,i'm ask for sure.
you can, if they don't violate other copyrights (e.g. if you don't make a picture of Pikachu or Mickey Mouse)
@Lykon Thank you very much, you really relieved my worries.
