DreamShaper - V∞!
Please check out my other base models, including SDXL ones!
Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee ☕
🎟️ Commissions on Ko-Fi
Join my Discord Server
For LCM read the version description.
Available on the following websites with GPU acceleration:
Live demo available on HuggingFace (CPU is slow but free).
New Negative Embedding for this: Bad Dream.
Message from the author
Hello hello, my fellow AI Art lovers. Version 8 just released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.
DreamShaper started as a model meant to be an open-source alternative to MidJourney. I didn't like how MJ was handled back when I started, how closed it was (and still is), or the lack of freedom it gives users compared to SD. Look at all the tools we have now, from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "Swiss Army knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish it pretty easily. But what about all the resources built on top of SD 1.5? Or all the users who don't have 10 GB of VRAM? It might just be a bit too early to let go of DreamShaper.
Not before one. Last. Push.
And here it is. I hope you enjoy it, and thank you for all the support you've given me in recent months.
PS: the primary goal is still art and illustration. Being good at everything comes second.
Suggested settings:
- I had CLIP skip 2 on some pics, the model works with that too.
- I have ENSD set to 31337, in case you need to reproduce some results, but that alone doesn't guarantee identical output.
- All of them used highres.fix or img2img at a higher resolution. Some even use ADetailer. Careful with that, though, as it tends to make all faces look the same.
- I don't use "restore faces".
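For scripted use, the settings above can be sketched as a payload for Automatic1111's txt2img API. The prompt, step count, and upscaler here are illustrative placeholders; `CLIP_stop_at_last_layers` and `eta_noise_seed_delta` are the web UI's internal option names for clip skip and ENSD.

```python
import json

# Sketch of the suggested settings as an Automatic1111 /sdapi/v1/txt2img payload.
# Prompt, steps and upscaler are placeholders, not the author's exact values.
payload = {
    "prompt": "masterpiece, portrait of a woman, intricate background",  # placeholder
    "steps": 25,
    "enable_hr": True,                  # highres fix, as used on the example images
    "hr_upscaler": "Latent",            # pick the upscaler that suits your image
    "denoising_strength": 0.5,
    "restore_faces": False,             # the author does not use "restore faces"
    "override_settings": {
        "CLIP_stop_at_last_layers": 2,  # clip skip 2
        "eta_noise_seed_delta": 31337,  # ENSD
    },
}
print(json.dumps(payload, indent=2))
```

Send this as the JSON body of a POST to `/sdapi/v1/txt2img` on a web UI started with `--api`.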
For old versions:
- Version 4 and later require no LoRA for anime style. For version 3 I suggest using one of these LoRA networks at 0.35 weight:
-- https://civarchive.com/models/4219 (the girls with glasses or if it says wanostyle)
-- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
-- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example but works great).
LCM
Being a distilled model, it has lower quality than the base one. However, it's MUCH faster and perfect for video and real-time applications.
Use it with 5-15 steps and ~2 CFG. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external extension for it).
Comparison with V7 LCM https://civarchive.com/posts/951513
NOTES
Version 8 focuses on improving what V7 started. It might be harder to do photorealism compared to realism-focused models, just as it might be harder to do anime compared to anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!
Version 7 improves lora support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.
Version 6 adds more lora support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). 6.x are all improvements.
Version 5 is the best at photorealism and has noise offset.
Version 4 is much better with anime (it can do anime with no LoRA) and booru tags. It might be harder to control if you're used to caption-style prompts, so you might still want to use version 3.31.
V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.
Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).
I get no money from any generative service, but you can buy me a coffee.
You should use 3.32 for mixing, so the clip error doesn't spread.
Inpainting models are only for inpaint and outpaint, not txt2img or mixing.
Original v1 description:
After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that do not look like CG or photos with heavy filters, but more like actual paintings. The result is a model capable of doing the portraits I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks for anime-style images.
I hope you'll enjoy it as much as I do.
Official HF repository: https://huggingface.co/Lykon/DreamShaper
Description
This version has a baked-in VAE, so it's finally super easy to get nice colors. Just remember to set your VAE setting to auto or none.
Comments (25)
What is the difference between SafeTensors and PickleTensor? (no links pls)
ckpt files can contain code that's executed by the software; safetensors files can't. The data used to generate images is the same and the results are identical. However, safetensors is not supported by every client yet. Just go with safetensors if you can.
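To illustrate that answer: a .ckpt file is a Python pickle, and unpickling one calls whatever callable the file names via `__reduce__`. This benign sketch uses `len()` as a stand-in; a malicious file could name `os.system` instead. A safetensors file, by contrast, is just raw tensor bytes plus a JSON header, so loading it can't trigger calls like this.

```python
import pickle

# A .ckpt is a pickled object graph. On unpickling, pickle invokes the
# (callable, args) pair returned by __reduce__ -- any importable callable works.
class Payload:
    def __reduce__(self):
        # Harmless stand-in: unpickling will call len("anything").
        # A hostile file could return (os.system, ("...",)) here instead.
        return (len, ("anything",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # merely loading the blob executes the call
print(result)                # -> 8
```

This is why clients show pickled files with a safety warning, and why safetensors is the safer default.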
@lykon Thanks man!
Love the model so far! I initially tried to reproduce images with some other models like proto and got good results, but every prompt and every model vary so much. The one I really liked (https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6fdc4c6c-dda8-450c-67fc-20b794072500/width=1088) I couldn't get. So, I downloaded your model (the collection is getting big, LOL) and had a good play. I managed to get close on many, but not that one. I used the prompt and seed, and then I finally noticed it was inpainting (DOH!). Would you have any steps for that one? I love how the clouds and lightning converge, and the cloud texture.
I have had great success with colors despite not using the recommended VAE. I have used the Anything 3 one and vae-ft-mse-840000-ema-pruned. I did not download the kl-f8-anime2 file as it showed as pickled. I know some pickles are still safe; does anyone know if that VAE is? Now that you've added a baked-in VAE, this is great: we can swap and get the safe version with the VAE baked in.
@lykon Question - Hires on your samples, is that the new hires fix or the old one?
(as I wrote this, the site went down... back up now, phew)
I'm just using the normal highres fix from img2img in auto1111 with the best upscaler for the image I'm working on (eg: foolhardy for anime).
Vae is not required in the latest version, as it's baked in (use auto or none).
That thundercloud is img2img. I just painted some colors in Photoshop and inpainted with high denoise, then did some lower-denoise passes after mixing and matching the results I liked the most.
Thanks for the info. On the Hires question, it was the txt2img examples using it that I was curious about. I find the new hires fix usually causes me memory issues compared to the older one.
yeah I feel you. I often don't go to 2x because I run out of memory, and stop at 1.8x
Excellent mate!!!
Can someone help me generate genitals with this model?
Corneo's embeddings should work fine on this model. This has some anime models inside.
What did i do wrong?
I try the trial of sinkin.ai, the results are like this...
https://imgur.com/a/UMoOzQ0
But my local dreamshaper 3.3 safetensor are like this,
https://imgur.com/a/AcJeFHt
Used the same settings...
Positive prompt: sexy pose, (mignon), (looking at viewer:1.5), (watercolor), young woman, girl, solo, sport bra, sport short, blonde hair, long hair, (long_hair), messy hair, ((huge_breasts, breasts)), face, perfect, masterpiece, simple_background, ((masterpiece, fullbody, pixiv)), artstation, indoor
Negative prompt: bad anatomy, watermarks, text, signature, blur, messy, low quality, sketch by bad-artist, bad-image-v2-39000, necklace, glasses, 3D, disfigured, kitsch, ugly, oversaturated, greain, low-res, Deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, missing limb, blurry, floating limbs, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, ugly, disgusting, poorly drawn, childish, mutilated, mangled, old, surreal, calligraphy, sign, writing, watermark, text, body out of frame, extra legs, extra arms, extra feet, out of frame, poorly drawn feet, cross-eye, blurry, bad anatomy
Sampler: Euler A
Steps : 20
Clip: 2
ENSD: 31337
Seed: random
HighRes Fix Checked
If I remove the (watercolor) tag, I still get a different result.
@jzli I summon you
I don't know if sinkin.ai is using clip skip 2, and I don't think they're using Euler A. Try DPM SDE Karras.
I found a thread on Reddit talking about this: https://www.reddit.com/r/StableDiffusion/comments/xlvnpm/stable_diffusion_with_diffusers_vs_wihout/
one comment worth noting: "By default, Automatic runs in half precision, which is always going to make a different image than full precision, and usually with less quality and more artifacts in most things."
Hello! Model seems very interesting.
I'm just wondering, does the mix contain dreamlikeart or any other model that could have a more restrictive license than the ones you already listed?
Could you explain what https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (`mksks style`) IS? Am I placing this in my LoRA location, or my models folder? Am I keeping it named last.pt and calling it with 'mksks style' to utilize it? I guess I'm not sure where to place it, it's much larger than my other embeddings...
Also where do download 'art by negprompt5' from? I can't seem to find it, as used in some of yours...
It's a lora.
@lykon Ahhh, thanks, so in the LoRA models folder... got it. Just learning how to use that recently. So I keep it named last.pt and use mksks style in the prompt? Cool.
What about the "'art by negprompt5" negative embeddings I see being used on some of the demo images?
@clevnumb those are just negative embeddings you can probably find around, but I don't think they actually improve the result.
@Lykon you solved my problem, thanks!
This model suffers from this problem (at least the safetensors version I downloaded): https://rentry.org/clipfix
It ignores parts of the prompt. After fixing it, the results are much better and more predictable. I would suggest the uploader fix it and re-upload the file, and also keep an eye on it for future releases.
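For the curious, here is a dependency-free sketch of what that clip fix checks. In a real checkpoint the ids are a tensor under a key like `cond_stage_model.transformer.text_model.embeddings.position_ids`; a plain Python list stands in for it here. The bug is a float id that truncates to the wrong integer token position, and the fix rewrites the ids back to the canonical 0..76 sequence.

```python
# Sketch of the position_ids check behind the linked clip fix (plain list in
# place of the checkpoint tensor; the real key name is assumed from the report).

def broken_positions(position_ids):
    # Positions where int() truncation no longer yields the expected 0..N-1 index.
    return [i for i, v in enumerate(position_ids) if int(v) != i]

def fixed_positions(position_ids):
    # The fix simply rewrites the ids to the canonical integer sequence.
    return list(range(len(position_ids)))

ids = list(range(77))   # CLIP's text encoder uses 77 token positions
ids[41] = 40.9997       # float drift: truncates to 40, duplicating position 40
print(broken_positions(ids))  # -> [41]
```

A non-empty result means the checkpoint has the drift; rewriting the tensor to `arange(77)` (what the fix script does) makes the check come back clean.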
uploading the fix. Thank you.
quick test after the fix: https://imgur.com/OYquYDe
Basically no change on my usual prompts, but should understand sentences better
uploaded
Details
Files
dreamshaper_331BakedVae.ckpt
Mirrors: DreamShaper_3.3_baked_vae_pruned.ckpt, 54_dreamshaper_331BakedVae.ckpt, 4384_dreamshaper_331BakedVae.ckpt, 57_dreamshaper_331BakedVae.ckpt, dreamv1.ckpt, DreamShaper_3.3_baked_vae.ckpt
dreamshaper_331BakedVae.safetensors
Mirrors: 4384_dreamshaper_331BakedVae.safetensors, 55_dreamshaper_331BakedVae.safetensors, 56_dreamshaper_331BakedVae.safetensors, DSP.safetensors, DreamShaper_3.3_baked_vae.safetensors, DreamShaper_3.3_baked_vae_pruned.safetensors
Available On (1 platform)
Same model published on other platforms. May have additional downloads or version variants.