DreamShaper - V∞!
Please check out my other base models, including SDXL ones!
Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee ☕
🎟️ Commissions on Ko-Fi
Join my Discord Server
For LCM read the version description.
Available on the following websites with GPU acceleration:
Live demo available on HuggingFace (CPU is slow but free).
New Negative Embedding for this: Bad Dream.
Message from the author
Hello hello, my fellow AI Art lovers. Version 8 just released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.
DreamShaper started as a model meant to be an open-source alternative to MidJourney. I didn't like how MJ was handled back when I started, how closed it was (and still is), or the lack of freedom it gives users compared to SD. Look at all the tools we have now, from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "Swiss Army knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish it pretty easily. But what about all the resources built on top of SD1.5? Or all the users who don't have 10GB of VRAM? It might just be a bit too early to let go of DreamShaper.
Not before one. Last. Push.
And here it is, I hope you enjoy. And thank you for all the support you've given me in the recent months.
PS: the primary goal is still towards art and illustrations. Being good at everything comes second.
Suggested settings:
- Some example images use CLIP skip 2; the model works with that too.
- I have ENSD set to 31337 in case you need to reproduce some results, but that alone doesn't guarantee a match.
- All of them used highres.fix or img2img at a higher resolution. Some even use ADetailer; careful with that, though, as it tends to make all faces look the same.
- I don't use "restore faces".
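For diffusers users, a rough mapping of the settings above: "CLIP skip 2" corresponds to the pipeline's `clip_skip` argument, while ENSD is an Auto1111-only option with no diffusers equivalent. This is a sketch, not the author's workflow; the prompt is illustrative, and the demo is gated behind an environment flag so the helper can be imported without downloading weights.

```python
import os

def highres_target(width, height, scale=2):
    """Target size for a highres.fix-style second pass, rounded down to multiples of 8."""
    return (int(width * scale) // 8 * 8, int(height * scale) // 8 * 8)

if os.environ.get("RUN_DREAMSHAPER_DEMO"):
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "Lykon/DreamShaper", torch_dtype=torch.float16
    ).to("cuda")
    # clip_skip=2 skips the last CLIP layer, matching "CLIP skip 2" in Auto1111
    image = pipe("portrait of a warrior, intricate, painterly", clip_skip=2).images[0]
    # Second pass at roughly 2x the resolution, e.g. via an img2img pipeline
    target_w, target_h = highres_target(*image.size)
```

The rounding to multiples of 8 matches the latent-grid constraint of SD1.5 pipelines.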
For old versions:
- Version 4 and later need no LoRA for anime style. For version 3, I suggest using one of these LoRA networks at 0.35 weight:
-- https://civarchive.com/models/4219 (the girls with glasses or if it says wanostyle)
-- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
-- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example but works great).
LCM
Being a distilled model, it has lower quality than the base one. However, it's MUCH faster and perfect for video and real-time applications.
Use it with 5-15 steps and ~2 CFG. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external extension for it).
Comparison with V7 LCM https://civarchive.com/posts/951513
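In diffusers, the LCM constraints above (low step count, ~2 CFG, mandatory LCM sampler) look roughly like the sketch below. The repo id "Lykon/dreamshaper-8-lcm" is an assumption; check the Hugging Face page for the exact name. The heavy part is gated behind an environment flag so the settings can be inspected without downloading weights.

```python
import os

# 5-15 steps and ~2 CFG, per the suggested LCM settings above
LCM_SETTINGS = {"num_inference_steps": 8, "guidance_scale": 2.0}

if os.environ.get("RUN_LCM_DEMO"):
    import torch
    from diffusers import AutoPipelineForText2Image, LCMScheduler

    pipe = AutoPipelineForText2Image.from_pretrained(
        "Lykon/dreamshaper-8-lcm", torch_dtype=torch.float16
    ).to("cuda")
    # The LCM sampler is mandatory: swap in LCMScheduler
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    image = pipe("a cozy cabin at night", **LCM_SETTINGS).images[0]
```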
NOTES
Version 8 focuses on improving what V7 started. It might be harder to do photorealism compared to realism-focused models, just as it might be harder to do anime compared to anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!
Version 7 improves lora support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.
Version 6 adds more lora support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). 6.x are all improvements.
Version 5 is the best at photorealism and has noise offset.
Version 4 is much better with anime (it can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption-style prompts, so you might still want to use version 3.31.
V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.
Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).
I get no money from any generative service, but you can buy me a coffee.
You should use 3.32 for mixing, so the clip error doesn't spread.
Inpainting models are only for inpaint and outpaint, not txt2img or mixing.
Original v1 description:
After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.
I hope you'll enjoy it as much as I do.
Official HF repository: https://huggingface.co/Lykon/DreamShaper
Comments (33)
Amazing work mate! 🧡
One of the best mixes, as were all the previous versions.
Is it possible for v5 to get a smaller pruned fp16, without the baked VAE?
Tried the v5 model; amazing for people, love it. Just having problems prompting other stuff. What would you prompt for, say, a dark dragon flying over a city?
dragon anatomy is kind of a problem for most models. But there is a lora for it.
@Lykon ahhh okay
Ohh wow, my favorite model updated, can't wait to have fun with it. Cheers @Lykon
My favorite model updated! V5 is here! WOW! Thanks!!
About noise offset: as you know, it makes the resulting images darker most of the time. Is it enabled in the prompt by a trigger word, or is it on all the time?
Thanks again for this update!
I am wondering the same. Should we use the LORA <lora:epiNoiseoffset_v2:1> or <lora:epiNoiseoffset_v2Pynoise:1> ?
Suppose I use this code to use your model
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("Lykon/DreamShaper")
Will this pick up the latest model weights you have uploaded to Hugging Face? In this case, the DreamShaper v5 version.
it will probably use v4, I didn't make diffusers for v5 yet
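A side note for diffusers users: `from_pretrained` resolves whatever is on the repo's default branch, so results can change when new weights land. One way to keep runs reproducible is to pin a revision (a sketch; "main" is the default branch, and you'd replace it with a commit hash from the repo's history page to fully freeze it). The demo is gated behind an environment flag to avoid downloading weights on import.

```python
import os

REPO_ID = "Lykon/DreamShaper"

if os.environ.get("RUN_PIPE_DEMO"):
    from diffusers import DiffusionPipeline

    # Pin the revision so a later upload doesn't silently change results
    pipe = DiffusionPipeline.from_pretrained(REPO_ID, revision="main")
```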
The file "Full Model fp32 (6.88 GB) Safetensor" is a duplicate of the fp16 version.
Should be fixed. I asked the civitai team to check
Can anyone tell me a good width/height ratio? I always get two heads. Thanks.
512x768; if you use highres fix, play with the number of steps and the denoise on the highres pass.
with a negative title
512x512 generates the best images; you can always resize it in txt2img to double the resolution with ~0.5 denoise, then upscale it again in Extras.
What VAE should I use with this model?
Should we use the v4 inpainting model with v5 or are you going to release a special version inpainting for v5 as well?
Released v5 inpainting
Anyone have a prompt for getting a dark blush? The word "blush" doesn't seem to work.
The humanoid robot I generated using your model is very realistic, this model is awesome!
You're awesome
Please re-upload the Full Model fp32 (6.88 GB); it is pointing to the fp16 model and all the downloads are the same. Or upload it at HuggingFace. Thank you.
Yeah, the safetensors links are the same; the ckpt links are correct.
They seem correct from the editor: https://imgur.com/u686wsQ
Is this a bug with civitai?
@Lykon Those are the betas from HuggingFace; the ones on this site don't match. I downloaded the fp32 baked VAE just to do a checksum compare, and it does not match.
@FherraZ That screenshot is from Civitai, uhm...
@Lykon The safetensors fp32 baked VAE download from Civitai does not match the checksum. I just downloaded the one from HuggingFace a few minutes ago, and that one does match.
@FherraZ I'll reupload just in case
@FherraZ I've reuploaded DreamShaper_5_beta2_BakedVae.safetensors. If it's still wrong I'll report this conversation to the admins, hoping they can fix it or at least tell us what's going on.
@Lykon Thank you
Details
Files
dreamshaper_5BakedVae.safetensors
DreamShaper_5_beta2_BakedVae.safetensors
DreamShaper_5_beta2_BakedVae_fp16.safetensors
DreamShaper_5_beta2_noVae_half.safetensors
dreamshaper_5PrunedNovae.safetensors
dreamshaper_5BakedVae.safetensors.ckpt
Available On (1 platform)
Same model published on other platforms. May have additional downloads or version variants.