DreamShaper - V∞!
Please check out my other base models, including SDXL ones!
Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee ☕
🎟️ Commissions on Ko-Fi
Join my Discord Server
For LCM read the version description.
Available on the following websites with GPU acceleration:
Live demo available on HuggingFace (CPU is slow but free).
New Negative Embedding for this: Bad Dream.
Message from the author
Hello hello, my fellow AI Art lovers. Version 8 has just been released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.
DreamShaper started as a model meant to be an open-source alternative to MidJourney. I didn't like how MJ was handled back when I started, how closed it was (and still is), or how little freedom it gives users compared to SD. Look at all the tools we have now, from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "Swiss Army knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish that pretty easily. But what about all the resources built on top of SD1.5? Or all the users who don't have 10GB of VRAM? It might just be a bit too early to let go of DreamShaper.
Not before one. Last. Push.
And here it is. I hope you enjoy it, and thank you for all the support you've given me in recent months.
PS: the primary goal is still art and illustration. Being good at everything comes second.
Suggested settings:
- Some example images use CLIP skip 2; the model works with that too.
- I have ENSD set to 31337 in case you need to reproduce some results, but that alone doesn't guarantee identical output.
- All examples used highres.fix or img2img at a higher resolution. Some also use ADetailer. Be careful with that though, as it tends to make all faces look the same (a diffusers sketch of these settings follows this list).
- I don't use "restore faces".
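For reference, here is roughly how those settings translate to a diffusers workflow. This is a minimal sketch, not the exact setup used for the examples: the prompt is a placeholder, the second img2img pass stands in for highres.fix, and ENSD is an Auto1111-only setting with no diffusers equivalent.

import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/DreamShaper", torch_dtype=torch.float16  # fp16 to save VRAM
).to("cuda")

# First pass at base resolution. diffusers' clip_skip=1 uses the penultimate
# CLIP layer, which matches "CLIP skip 2" in Auto1111.
prompt = "portrait of a woman, intricate, painterly"  # placeholder prompt
base = pipe(prompt, width=512, height=768, num_inference_steps=30,
            guidance_scale=7, clip_skip=1).images[0]

# Second pass: upscale and refine with img2img, emulating highres.fix.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
hires = img2img(prompt, image=base.resize((768, 1152)), strength=0.5,
                num_inference_steps=30, guidance_scale=7).images[0]
hires.save("out.png")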
For old versions:
- Versions 4 and later require no LoRA for anime style. For version 3, I suggest using one of these LoRA networks at 0.35 weight (see the sketch after these links):
-- https://civarchive.com/models/4219 (the girls with glasses or if it says wanostyle)
-- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
-- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example but works great).
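If you use diffusers instead of the webui, the 0.35 weight can be passed as the LoRA attention scale. A minimal sketch: the directory and file name below are placeholders, and the .pt file linked above may need converting to safetensors first.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/DreamShaper", torch_dtype=torch.float16
).to("cuda")
# Placeholder path/name standing in for one of the LoRA files linked above.
pipe.load_lora_weights("path/to/lora_dir",
                       weight_name="anime_style_lora.safetensors")
image = pipe(
    "1girl, glasses, anime style",
    num_inference_steps=30,
    guidance_scale=7,
    cross_attention_kwargs={"scale": 0.35},  # the suggested 0.35 LoRA weight
).images[0]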
LCM
Being a distilled model, it has lower quality than the base one. However, it's MUCH faster and perfect for video and real-time applications.
Use it with 5-15 steps and CFG around 2. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external extension for it).
Comparison with V7 LCM https://civarchive.com/posts/951513
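In diffusers, using the LCM version boils down to loading the distilled checkpoint and swapping in LCMScheduler. A minimal sketch; the "Lykon/dreamshaper-8-lcm" repo id is my assumption for where the distilled weights live.

import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Repo id assumed; substitute the actual LCM checkpoint location.
pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8-lcm", torch_dtype=torch.float16
).to("cuda")
# LCM models work only with the LCM sampler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# 5-15 steps and CFG around 2, as suggested above.
image = pipe("a cozy cabin in the woods, cinematic lighting",
             num_inference_steps=8, guidance_scale=2).images[0]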
NOTES
Version 8 focuses on improving what V7 started. It might be harder to get photorealism out of it than out of realism-focused models, just as it might be harder to get anime than from anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!
Version 7 improves LoRA support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.
Version 6 adds more LoRA support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). The 6.x releases are all incremental improvements.
Version 5 is the best at photorealism and has noise offset.
Version 4 is much better with anime (it needs no LoRA for it) and booru tags. It might be harder to control if you're used to caption-style prompts, so you might still want to use version 3.31.
V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.
Results of the version 3.32 "clip fix" will vary from the examples (which were produced on 3.31, the version I personally prefer).
I get no money from any generative service, but you can buy me a coffee.
You should use 3.32 for mixing, so the CLIP error doesn't spread.
Inpainting models are only for inpaint and outpaint, not txt2img or mixing.
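To make that last point concrete, the inpainting checkpoint is meant to be loaded into a dedicated inpaint pipeline, along the lines of this sketch (file names and images are placeholders):

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_single_file(
    "dreamshaper_4-inpainting.safetensors", torch_dtype=torch.float16
).to("cuda")

init = Image.open("photo.png").convert("RGB")  # placeholder input image
mask = Image.open("mask.png").convert("RGB")   # white = area to repaint
result = pipe("a red scarf", image=init, mask_image=mask,
              num_inference_steps=30, guidance_scale=7).images[0]
result.save("inpainted.png")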
Original v1 description:
After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that don't look like CG or heavily filtered photos, but more like actual paintings. The result is a model capable of doing the portraits I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks for making anime-style images.
I hope you'll enjoy it as much as I do.
Official HF repository: https://huggingface.co/Lykon/DreamShaper
Description
Not for training or mixing. This is only for inpainting and outpainting.
VAE is baked in.
FAQ
Comments (25)
Is this based on danbooru tags or something? The model is super cool though!
V4 partially. This is an attempt at having the best of both worlds.
New V4 on DreamBooth is giving weird results, look: https://imgur.com/hmWbKts
Maybe need to fix diffusers
oh no. Thanks for the heads up. I'll upload new diffusers asap.
@ianhmz can you check now?
@Lykon sure, I'll train now and update you ^^ thank you
@ianhmz I've just right now accepted a PR of a tested version by hunkins. I'd still be curious if the previous one worked, but this last one works for sure in case it's still broken.
@Lykon Yes, previous one was working. The V4 still training here. Takes 50 minutes
@Lykon sadly still giving weird results, look: https://imgur.com/6gak1Kb
I've also trained with another Colab profile at the same time using this one https://huggingface.co/jzli/DreamShaper-3.3-baked-vae and it's working. But I've overtrained it; cartoon prompts always look realistic. I'm trying to use this same Hugging Face repo to train less now.
I've trained this 3.3 version less now, but the results still look nothing like the prompts I'm testing + ControlNet. It looks like it isn't optimized for use with DreamBooth. Can you check later? The fun of these cool models is personalizing them with your own face or family.
@ianhmz check other reviews because someone was able to do it.
@Lykon where?
@xin811 scroll down to a bearded guy wearing sunglasses
Could you please explain the Baked Vae version :
In Automatic1111 one can choose a VAE, or force SD to use a VAE if it's in the same folder as the checkpoint (same_name.vae.pt).
When the VAE is "baked", does that mean that in Automatic1111 I need to manually set the VAE to "automatic" or "none"? If I leave the VAE set to something in the settings, does that mean that with a baked-VAE checkpoint Stable Diffusion will use 2 VAEs when generating the image? A bit confusing ;)
I don't really get how "baked" works given that there's no option talking about that in Automatic1111.
Please advise. Thanks.
P.S.: As I'd prefer to choose the VAE manually, since it gives way more styling options, could you please also specify which VAE you baked in there? vae-ft-mse-840000 I suppose?
Thanks.
In the Auto1111 webui, if a model has a baked VAE and you select another VAE in the settings, you override the baked one, as if the model had no VAE at all. This is why I find it odd that people ask me for a no-VAE version, but maybe they use other SD software.
If you set it to "auto" it will use the VAE inside the model. I can't remember what "none" does, but you shouldn't really select it, as there is no advantage in that.
A baked VAE just means there is a VAE inside the model. If you use your own VAE you bypass it; if you use "auto" you'll use the one inside the model. It's just practical.
vae-ft-mse-840000 is the one I baked in, yes.
Very detailed and useful answer. It all makes sense now. Thanks a lot Lykon for your precious help. Much appreciated! Cheers.
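The same logic carries over to diffusers: passing an external VAE to the pipeline replaces the baked one, and omitting it keeps the baked VAE (the "auto" behaviour). A minimal sketch using the vae-ft-mse weights named above:

import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Equivalent of picking a VAE in the Auto1111 settings: this replaces
# whatever VAE is baked into the checkpoint.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)
pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/DreamShaper", vae=vae, torch_dtype=torch.float16
).to("cuda")
# Leaving out vae= keeps the baked-in VAE, i.e. Auto1111's "auto".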
Which version do you recommend for the most photo realistic output?
3 and 4. But this model doesn't aim at true photorealism.
The V4 baked-VAE version is going to be the best option here IMO. It isn't a checkpoint like Life Like Diffusion for true photorealism, but it does give more realistic illustration anatomy than many other checkpoints.
Obviously go with fp16 if you want to use less VRAM; in my experience the lower accuracy of 16-bit floating point has little effect on the inference used to generate waifus :D
Anyone know how to fix "UnpicklingError: invalid load key, '\x9f'." using 4 baked vae fp16?
Details
Files
dreamshaper_4-inpainting.safetensors
Mirrors
dreamshaper_4-inpainting.safetensors
4384_dreamshaper_4-inpainting.safetensors
43_dreamshaper_4-inpainting.safetensors
DreamShaper_4BakedVae-inpainting.inpainting.safetensors
Available On (1 platform)
Same model published on other platforms. May have additional downloads or version variants.


