DreamShaper - V∞!
Please check out my other base models, including SDXL ones!
Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee ☕
🎟️ Commissions on Ko-Fi
Join my Discord Server
For LCM read the version description.
Available on the following websites with GPU acceleration:
Live demo available on HuggingFace (CPU is slow but free).
New Negative Embedding for this: Bad Dream.
Message from the author
Hello hello, my fellow AI Art lovers. Version 8 has just been released. Did you like the cover with the ∞ symbol? This version holds a special meaning for me.
DreamShaper started as a model meant to be an open-source alternative to MidJourney. I didn't like how MJ was handled back when I started, how closed it was (and still is), or how little freedom it gives users compared to SD. Look at all the tools we have now, from TIs to LoRA, from ControlNet to Latent Couple. We can do anything. The purpose of DreamShaper has always been to make "a better Stable Diffusion", a model capable of doing everything on its own, to weave dreams.
With SDXL (and, of course, DreamShaper XL 😉) just released, I think the "swiss knife" type of model is closer than ever. That model architecture is big and heavy enough to accomplish that pretty easily. But what about all the resources built on top of SD1.5? Or all the users that don't have 10 GB of VRAM? It might just be a bit too early to let go of DreamShaper.
Not before one. Last. Push.
And here it is. I hope you enjoy it, and thank you for all the support you've given me in recent months.
PS: the primary goal is still art and illustration. Being good at everything comes second.
Suggested settings:
- Some example images use CLIP skip 2; the model works with that too.
- I have ENSD set to 31337 in case you need to reproduce some results, though matching it doesn't guarantee identical output.
- All examples used highres.fix or img2img at a higher resolution. Some also use ADetailer; be careful with that though, as it tends to make all faces look the same.
- I don't use "restore faces".
For old versions:
- Versions >4 require no LoRA for anime style. For version 3, I suggest using one of these LoRA networks at 0.35 weight:
-- https://civarchive.com/models/4219 (the girls with glasses or if it says wanostyle)
-- https://huggingface.co/closertodeath/dpepmkmp/blob/main/last.pt (if it says mksk style)
-- https://civarchive.com/models/4982/anime-screencap-style-lora (not used for any example but works great).
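As a rough illustration of what a 0.35 LoRA weight does, here is a minimal numpy sketch of the usual LoRA merge formula. The matrix names, shapes, and alpha value are illustrative, not taken from these specific networks:

```python
import numpy as np

def apply_lora(W, A, B, alpha, weight=0.35):
    """Merge a low-rank LoRA update into a base weight matrix:
    W' = W + weight * (alpha / r) * (B @ A), where r is the LoRA rank.
    'weight' is the user-facing strength (0.35 as suggested above)."""
    r = A.shape[0]  # LoRA rank
    return W + weight * (alpha / r) * (B @ A)

# Illustrative shapes: a (4, 4) base weight with a rank-2 update
W = np.zeros((4, 4))
A = np.ones((2, 4))   # down-projection (r, in_features)
B = np.ones((4, 2))   # up-projection (out_features, r)
merged = apply_lora(W, A, B, alpha=2.0)
```

Lowering the weight simply scales the whole update, which is why 0.35 gives a lighter stylistic push than merging the LoRA at full strength.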
LCM
Being a distilled model, it has lower quality than the base one. However, it's MUCH faster and perfect for video and real-time applications.
Use it with 5-15 steps and ~2 CFG. IT WORKS ONLY WITH THE LCM SAMPLER (as of December 2023, Auto1111 requires an external plugin for it).
Comparison with V7 LCM https://civarchive.com/posts/951513
NOTES
Version 8 focuses on improving what V7 started. It might be harder to do photorealism compared to realism-focused models, just as it might be harder to do anime compared to anime-focused models, but it can do both pretty well if you're skilled enough. Check the examples!
Version 7 improves LoRA support, NSFW and realism. If you're interested in "absolute" realism, try AbsoluteReality.
Version 6 adds more LoRA support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it). The 6.x releases are all improvements.
Version 5 is the best at photorealism and has noise offset.
Version 4 is much better with anime (it can do it with no LoRA) and booru tags. It might be harder to control if you're used to caption-style prompts, so you might still want to use version 3.31.
V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.
Results of version 3.32 "clip fix" will vary from the examples (produced on 3.31, which I personally prefer).
I get no money from any generative service, but you can buy me a coffee.
You should use 3.32 for mixing, so the CLIP error doesn't spread.
Inpainting models are only for inpaint and outpaint, not txt2img or mixing.
Original v1 description:
After a lot of tests I'm finally releasing my mix model. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.
I hope you'll enjoy it as much as I do.
Official HF repository: https://huggingface.co/Lykon/DreamShaper
Description
Note: this one had a wrong safetensors file, fixed in the latest release. If you downloaded the ckpt file, you're fine.
Comments (24)
what VAE are you using?
added a suggested settings section in the description :)
Loving testing this model so far. My single ask is - could you update the cover girl (the brunette with armor on) so that her prompt is embedded in the graphic, just like the wizard one is? I have not been able to replicate her at all yet. I know not everyone shares the prompt, but I thought it would be fair to ask the creator of the model of the graphic on the cover image. :) Thank you!
I'm afraid civitai is eating it, I'll upload the image somewhere and paste it here. Keep in mind I might have done it with an earlier version so the hash may differ a bit (that one is probably made with DreamShaper 2.50). If you want that exact one I will upload that model too, but you should get fairly similar results with 2.52 if the rest of the config (clip skip, seed, etc) matches.
here :)
https://mega.nz/file/FMsQFYIa#4_Nk1Dy61IVQ2Xv7SqaiyIMPAOYpqUnJat7ruhSMfKc
https://mega.nz/file/IMMCkLgA#Mfpb9iqCxnhZIY_rEAEp2YmsxDDc1zFktUV6fIi_tik
https://mega.nz/file/IFtzUZhA#aB5ejhCZ-zprNDSdszTh2ZGm5hE2RB13Fr8ict89nyo
The one I posted is a mix between two versions at different highres.fix denoising strengths. Here you have both of those versions and the one before highres.fix. You can verify the embedded info.
Thanks so much! This one actually trains really really well in conjunction w/Dreambooth. I'll be sure to write a review after I spend some more time with it.
I love this image, could you update the prompt on it? It has the prompt for that asian girl.
believe it or not, it's the right prompt ahah. Here I'll upload the original so you can pnginfo: https://mega.nz/file/UU9BwTTQ#2vukvpwWJndMMuhwQE4M1z_GATCstmyKDlAxf9p83fE
@lykon Interesting. Do you know what resolution you set? And what upscale settings? I can't reproduce it and when I pnginfo your image it restores the settings and prompt of my own last rendered image.
@unstablestDiffuser
parameters
modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, medieval princess, professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski
Negative prompt: canvas frame, cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)),((extra limbs)),((close up)),((b&w)), wierd colors, blurry, (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy, 3d render
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 3630420728, Size: 768x704, Model hash: 575d99ce, Clip skip: 2, ENSD: 31337
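The doubled and tripled parentheses in the prompt above are A1111 emphasis syntax: each layer of round brackets multiplies a token's attention weight by 1.1, and each layer of square brackets divides it by 1.1. A minimal sketch of that rule (handling only fully wrapped tokens, not the full `(token:1.3)` explicit-weight syntax):

```python
def emphasis_weight(token: str) -> float:
    """Effective attention weight under A1111-style emphasis:
    each '(...)' layer multiplies by 1.1, each '[...]' divides by 1.1."""
    weight = 1.0
    while token.startswith("(") and token.endswith(")"):
        weight *= 1.1
        token = token[1:-1]
    while token.startswith("[") and token.endswith("]"):
        weight /= 1.1
        token = token[1:-1]
    return round(weight, 3)
```

So `((disfigured))` is weighted about 1.21 and `[out of frame]` about 0.909 relative to an unwrapped token.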
@lykon Thanks so much! Now I can replicate it :)
could you give us some insights as to what models you've merged? thanks
I don't remember the ratios, so it wouldn't be useful. It's quite similar to the original Protogen recipe posted on Reddit, plus anime mixes (over 60%).
@lykon Are there DreamLikeArt and/or Seek Art Mega models in the mix?
Interesting that people add pruned versions... some YouTube vloggers said that pruned models are good for DreamBooth training. Is that what they're for?
Pruning makes them more flexible in general, and they also take up way less VRAM, allowing for a bigger batch size during training, so you basically go about twice as fast.
Yeah I wish more people would add pruned versions :/ Thank you for doing so OP!
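To illustrate the pruning point above (sketch only: real SD checkpoints are torch state_dicts, and the key names below are illustrative), pruning typically drops training-only copies such as the EMA weights and casts what remains from fp32 to fp16, roughly halving the memory footprint:

```python
import numpy as np

def prune_state_dict(state_dict):
    """Sketch of checkpoint pruning: drop training-only EMA weights and
    cast the remaining tensors to fp16, halving their memory footprint."""
    pruned = {}
    for name, tensor in state_dict.items():
        if name.startswith("model_ema."):
            continue  # EMA shadow copy is only needed for training
        pruned[name] = tensor.astype(np.float16)
    return pruned

# Toy "checkpoint": one inference weight plus its EMA shadow copy
ckpt = {
    "model.diffusion_model.w": np.ones((4, 4), dtype=np.float32),
    "model_ema.w": np.ones((4, 4), dtype=np.float32),
}
pruned = prune_state_dict(ckpt)
```

The smaller footprint is what frees VRAM for a bigger batch size during fine-tuning.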
Love the model, and I have to say that this model so far has won my "challenge" test prompt with being able to make the best rendition of "A pair of scissors on a table"
Just typing this in creates weird scissors here too, not much better than in Protogen. Have you used a more complex prompt for that result?
Hello! I like it! It's a great job, bravo! But I don't understand: what is ENSD and how do I use it? :D
Already explained in previous comments: it's an offset to the seed that's used by some samplers. You don't need it unless you want to make the exact same image as the examples.
@Madfiend,
ENSD stands for "Eta noise seed delta." In A1111 webui, it's under:
Settings Tab --> Sampler parameters