CivArchive

    DreamShaper XL - Now Turbo!

    Also check out the 1.5 DreamShaper page

    Check the version description below (bottom right) for more info and add a ❤️ to receive future updates.
    Do you like what I do? Consider supporting me on Patreon 🅿️ to get exclusive tips and tutorials, or feel free to buy me a coffee

    Join my Discord Server

    Alpha2 is a bit old now. I suggest you switch to the Turbo or Lightning version.
    DreamShaper is a general-purpose SD model that aims to do everything well: photos, art, anime, manga. It's designed to compete with other general-purpose models and pipelines like Midjourney and DALL-E.

    "It's Turbotime"

    The Turbo version should be used at CFG scale 2 with around 4-8 sampling steps. It should only be used with DPM++ SDE Karras (NOT 2M). You can use it with the LCM sampler, but only do so if you need speed over quality.
    Sampler comparison at 8 steps: https://civarchive.com/posts/951781
    UPDATE: the Lightning version targets 3-6 sampling steps at CFG scale 2 and should also only be used with DPM++ SDE Karras. Avoid going too far above 1024 in either dimension for the first pass.

    There is no need to use a refiner, and this model itself can be used for highres fix and tiled upscaling.
    Examples have been generated using Auto1111, but you can achieve similar results with this ComfyUI Workflow: https://pastebin.com/79XN01xs

    Basic style comparison: https://civarchive.com/images/4427452

    If you train on this, make sure to use DPM++ SDE sampler and appropriate steps/cfg.

    Keep in mind Turbo currently cannot be used commercially unless you get permission from StabilityAI. Get a membership here: https://stability.ai/membership

    You can use the Turbo version (not Lightning) as a non-Turbo model with DPM++ 2M SDE Karras / Euler at cfg 6 and 20-40 steps. Here is a comparison I made with some of the best non-Turbo XL models (with regular settings and turbo settings): https://civarchive.com/posts/1414848
    I have no idea why anyone would prefer 40 steps over 8, but you have the option.

    Old description referring to Alpha 2 and before

    Finetuned over SDXL1.0.
    Even though this is still an alpha version, I think it's already much better than the first alpha, which was based on XL 0.9.
    For the workflows you need the Math plugins for Comfy (or to reimplement some parts manually).
    Basically I do the first gen with DreamShaperXL, then I upscale 2x, and finally do an img2img step with either DreamShaperXL itself or a 1.5 model I find suitable, such as DreamShaper 7 or AbsoluteReality.

    What does it do better than SDXL1.0?

    • No need for refiner. Just do highres fix (upscale+i2i)

    • Better looking people

    • Less blurry edges

    • 75% better dragons 🐉

    • Better NSFW

    Old DreamShaper XL 0.9 Alpha Description

    Finally got permission to share this. It's based on SDXL0.9, so it's just a training test. It definitely has room for improvement.

    Workflow for this one is a bit more complicated than usual, as it's using AbsoluteReality or DreamShaper7 as "refiner" (meaning I'm generating with DreamShaperXL and then doing "highres fix" with AR or DS7).

    Results are quite nice for such an early stage.

    I might disable the comment section as I'm sure some people will judge this even if it's early stage. I also don't think this is on par with SD1.5 DreamShaper yet, but it's useless to pour resources into this as SDXL1.0 is about to be released.

    Have fun and make sure to add a ❤️ to receive future updates.

    A non-commercial license is enforced by Stability at the moment.

    Description

    Turbo version should be used at CFG scale 2 and with around 4-8 sampling steps. Should work with DPM++ SDE Karras (or Normal). Comparison of other samplers: https://civitai.com/posts/951781

    Can be used for highres fix and tiled upscaling. Please check the examples.

    If you train on this, make sure to use DPM++ SDE.

    FAQ

    Comments (201)

    gsgsdgDec 5, 2023· 13 reactions
    CivitAI

    The legend is back.

    steamrickDec 5, 2023· 2 reactions
    CivitAI

    Is the turbo version based on the alpha2, or did you end up doing more training at some point?

    Lykon
    Author
    Dec 6, 2023· 3 reactions

    It has a lot of new stuff, but Alpha 2 is in there.

    glitter_fartDec 7, 2023

    This "turbo" version is not a turbo version, it's an LCM merge, it can't do 1 step.

    MaxfieldDec 5, 2023· 10 reactions
    CivitAI

    Dad wake up, new dreamshaper just dropped!

    shapeshifter83Dec 6, 2023· 1 reaction

    i'm already there son

    qazxswDec 6, 2023
    CivitAI

    so, it is like dreamshaper+absolutelyreality for SDXL?

    Lykon
    Author
    Dec 6, 2023· 1 reaction

    More like DreamShaper 8 on steroids :)

    rufustheruseDec 6, 2023· 4 reactions
    CivitAI

    Turbo DreamShaper XL?!? CHRISTMAS HAS COME EARLY! 💖🎅🎄🎁💖

    glitter_fartDec 6, 2023
    CivitAI

    cfg 1, step 1 is the baseline; also I've yet to see anyone show any changes at step 2, shrug. The sampler doesn't matter at step 1, as long as it outputs.

    Lykon
    Author
    Dec 6, 2023· 1 reaction

    cfg 1 step 1 will just generate random noise :D

    glitter_fartDec 6, 2023

    so you've never used turbo ?

    sonidosenarmoniaDec 6, 2023· 1 reaction
    CivitAI

    great model

    gtmsabitlyycl28282Dec 6, 2023
    CivitAI

    DPM++ SDE Karras is indeed the best one after playing around. Though it's 2x slower than the rest. Any alternatives?

    2728625Dec 6, 2023

    Exactly. I just spent the last hour figuring out if any of the others are reasonable, and the one they mentioned is the best. The problem is that there's nothing Turbo about it. It's barely faster than using the regular model with a higher quality and more prompt following setup. The original turbo model was stupid fast with 1 step. All of these others keep getting further and further away from that and are exponentially slower. What's the point?

    Lykon
    Author
    Dec 6, 2023

    @civitai426 time per step is the same as any other model, you can just use way less steps. 4-5 is equivalent to 30 steps of older models. Anyway, consider Turbo to be just the cherry on top. The model has to be good on its own, regardless of gimmicks.

    Lykon
    Author
    Dec 6, 2023

    DPM++ SDE Normal is also fine.

    @Lykon Hey, thx for the awesome model. Per image it's just 2-3x faster even using the slowest SDE sampler. I was just complaining about the samplers, not your model XD

    AbysselDec 6, 2023

    DPM++ 3M SDE provides significantly more detail within the same number of steps compared to DPM++ SDE Karras.

    mumblesDec 6, 2023
    CivitAI

    how many hires steps?

    Lykon
    Author
    Dec 6, 2023· 1 reaction

    5-10

    qazxswDec 6, 2023

    @Lykon what about parameters for Adetailer inpainting?

    DrFisuraDec 6, 2023· 1 reaction
    CivitAI

    God-given turbo, never been happier with a model so far. Keep it up guys

    CyclopsGERDec 6, 2023· 1 reaction
    CivitAI

    Thanks!
    This works well, but with LoRAs it's slower. Should we avoid LoRAs with Turbo?

    glitter_fartDec 7, 2023· 1 reaction

    Loading times with XL LoRAs will depend on your system; enable the LoRA cache if you have the memory and are reusing the same LoRAs.

    CyclopsGERDec 7, 2023

    @glitter_fart Thank you, I will check it!

    CyclopsGERDec 7, 2023

    @glitter_fart Where can I enable Lora cache in webui?

    Rating_AgentDec 6, 2023· 2 reactions
    CivitAI

    What VAE do you recommend for this model?

    use none

    Emili0Dec 6, 2023· 3 reactions
    CivitAI

    Fantastic job, Lykon! You just sold me on SDXL and made me finally make the switch. Able to produce stunning results superior to 1.5 while also doing it in the same amount of time, perfect for my lowly GTX 1080 <3

    735783177456Dec 6, 2023
    CivitAI

    May I ask whether this was trained directly on top of the sdxl-turbo model, or achieved through merging or some other method? Since StabilityAI does not provide distillation code for SDXL Turbo, I am not sure if it is possible to use the code previously used to train XL to train sdxl-turbo.

    glitter_fartDec 7, 2023· 3 reactions

    It's not trained at all, it's an lcm merge, and not a turbo model. All training data is public information. Anything that needs more than 1step is by default not a turbo model.

    Lykon
    Author
    Dec 7, 2023· 2 reactions

    @glitter_fart why do you keep spreading misinformation? This is my own finetune. It's definitely merged with the Turbo LCM distillation LoRa, but it's still my own finetune.

    EDGDec 6, 2023· 16 reactions
    CivitAI

    Thank you Lykon for gracing us with a Ferrari tier SDXL model

    It's turbo time!

    glitter_fartDec 7, 2023· 3 reactions

    needs 4-8 steps, thinks it's turbo .......

    Lykon
    Author
    Dec 7, 2023· 2 reactions

    Emad commented yesterday on Reddit that Turbo is designed to work at 1-4 steps https://www.reddit.com/r/StableDiffusion/comments/18bnvat/comment/kc76g7q/?utm_source=reddit&utm_medium=web2x&context=3
    This is similar to how base SDXL is designed to work at 32 steps, but people use 15-80 steps anyway. 4 steps for Turbo is totally acceptable (as well as 8).

    You're free to distill a model to work at 1 step. I personally don't think it's worth the loss of quality and range. You're free to use this model and also free to not use it :)

    qazxswDec 6, 2023· 2 reactions
    CivitAI

    The prompt "full-body shot" does not work.

    Lykon
    Author
    Dec 6, 2023· 5 reactions

    I think that doesn't work in SDXL to begin with. Add stuff like "shoes" or "boots" to force the model to render them. Increase weight of cfg to 3. Keep in mind this is still a distilled model, despite performing on par with normal ones, it has to trade something for speed, so it might be a bit harder to use.
    Remember you also have controlnet

    qazxswDec 6, 2023

    @Lykon thank you for your suggestions

    glitter_fartDec 7, 2023

    to be fair, lots of things in this model dont work, like using 1 step

    qazxswDec 7, 2023

    @glitter_fart I see... guess it's better to just wait for it to catch up on SD 1.5

    glitter_fartDec 7, 2023· 3 reactions

    @qazxsw base turbo is already ahead of 1.5, that's why it can do 1 step, and this isn't a real turbo model, just a lame merge by someone who is very full of themselves

    Lykon
    Author
    Dec 7, 2023· 1 reaction

    @glitter_fart I'm not sure why you keep suggesting it should work with 1 step. It's not a requirement nor something that should be expected from Turbo models. The amount of distillation is arbitrary and too much can reduce range and quality.

    yoloswagg45Dec 7, 2023

    @Lykon Lmao? Do you normally get these many haters? You are probably one of the better/best custom model builders out there, these ppl must be new.

    I hope you enjoyed your vacation btw bud :)

    Btw do you know if this will work with FooocusMRE?

    Lykon
    Author
    Dec 7, 2023· 7 reactions

    @yoloswagg45 nah they're not many, but they're sometimes very passionate. It should work with anything that's compatible with SDXL.

    danny_308601Dec 10, 2023

    Just think where we'll be in a year's time. If they can do this now, top work.

    illyaeaterDec 11, 2023

    @glitter_fart Base turbo looks like shit, just like you.

    ze_thrillerDec 6, 2023
    CivitAI

    Anything special needed to use Turbo-based models ?

    Lykon
    Author
    Dec 6, 2023· 5 reactions

    Just steps, cfg and sampler to be mindful of. Check the description and the examples.

    glitter_fartDec 7, 2023· 4 reactions

    This isn't a turbo model; it's an LCM merged with turbo, at best. For turbo models, 1 step, 1 cfg is the baseline.

    Lykon
    Author
    Dec 7, 2023· 9 reactions

    @glitter_fart that's not a rule and only depends on how much you're willing to distill the model. It's always a tradeoff with quality and range (that's why most turbo models are only good at doing 1 thing). In my opinion it's not worth giving that up for 2 steps :)

    pkeyong998916Dec 7, 2023· 7 reactions
    CivitAI

    The Return of the King!

    kellneDec 7, 2023
    CivitAI

    Add onsite generation please 🫥

    Lykon
    Author
    Dec 7, 2023· 4 reactions

    doesn't depend on me :(

    yoloswagg45Dec 7, 2023

    @Lykon Do the site admins just add it for w/e model they feel like?

    Lykon
    Author
    Dec 7, 2023

    @yoloswagg45 I think it's somewhat automated. DS8 LCM got added immediately. I'll ping @Maxfield

    KandooAIDec 7, 2023· 2 reactions

    @Lykon Turbo has a different License than a LCM Version. I am not 100 % sure, but i think it´s cause of the new License from Stability

    MaxfieldDec 7, 2023· 2 reactions

    Can't use turbo models with onsite generation yet,

    kellneDec 7, 2023

    @Lykon actually you can toggle an option for onsite generation. Just ask someone how, but it is possible.

    kellneDec 7, 2023

    @NecroBear I understand. Is there date for video onsite generation? 💸

    PhilippSevenDec 7, 2023
    CivitAI

    How to perform Tile upscale with turbo model? I'm getting an error...

    GetLuckyGirlsComDec 7, 2023
    CivitAI

    This is so nice! Thank you! How does it play with LORAs?

    Lykon
    Author
    Dec 7, 2023· 1 reaction

    any xl lora should work on this

    NorfolkDaveDec 7, 2023
    CivitAI

    I am struggling with this.
    I used this video to set up my turbo workflow: https://www.youtube.com/watch?v=DZ2dfq8ljrc
    That was working great; I then swapped the checkpoint for yours.
    Set cfg to 2
    set steps to 5
    changed sampler to dpm_sde

    and the results are OK but have blotches of colour; they're just not coming out right. Is there a workflow example for this?

    alphaleaderDec 8, 2023

    Try 7 steps. I had to play with the balance of the two for mine. I also use the dpmpp_2m sampler.

    Lykon
    Author
    Dec 8, 2023

    don't use dpmpp_2m on this turbo model please. Check the comparison here: https://civitai.com/images/4260272

    GryphonboyDec 9, 2023

    @Lykon I'm using comfyui and I don't see an option for dpm++ SDE karras. where do I find this sampler?

    NorfolkDaveDec 9, 2023

    @Lykon I don't see the samplers from those images in the KSampler node; am I missing a custom node?

    Lykon
    Author
    Dec 10, 2023· 2 reactions

    @Gryphonboy  comfy is dpmpp_sde with karras as scheduler

    Lykon
    Author
    Dec 10, 2023

    @mrgreaper2004630 I haven't used custom nodes

    SpiderDec 8, 2023
    CivitAI

    What image size was it trained on, for LoRA training? 768² or 1024²?

    Lykon
    Author
    Dec 8, 2023

    1024.

    l1071812516534Dec 11, 2023
    CivitAI

    Can Tensorrt be supported?

    Lykon
    Author
    Dec 11, 2023

    They need to get approval from Stability to run Turbo models

    BanebananeDec 11, 2023
    CivitAI

    Does anyone else have problem with eyes when using Dreamshaper XL?

    Lykon
    Author
    Dec 11, 2023

    are you generating at low resolution or from far? That's common with every model and it's due to latent space decompression. Use highres fix or adetailer.

    BanebananeDec 12, 2023

    @Lykon tnx man :)

    shapeshifter83Dec 12, 2023

    @Banebanane a further suggestion: grab the "8m" models from https://huggingface.co/Bingsu/adetailer/tree/main and throw them into models/adetailer (then reload the UI). The two 8m models don't come by default with the A1111 webui adetailer extension, and I've found them to be quite superior. I suggest raising the detection threshold to around 0.4ish (from the default 0.3) for the 8m face model, and perhaps lowering it to 0.25ish for the 8m person model.

    Also - when using adetailer for fixing eyes, be sure to turn off CodeFormer or GFPGAN, or it's gonna be bad.

    For single-subject images or large-face close subjects, GFPGAN can sometimes be effective on its own (I never use CodeFormer; it just seems bad all around). But GFPGAN will fail horribly at background faces and completely ruin images, and it kind of makes everyone look the same (which may or may not suit your needs).

    blibbydumpusDec 11, 2023· 4 reactions
    CivitAI

    How is it that this turbo model is better than the vast majority of regular XL models out there?

    Seriously, you get better details, sharper edges, more correct anatomy, better ability to upscale with img2img and with tiled upscale, the list goes on.

    Great job!

    miketoriantDec 11, 2023· 3 reactions
    CivitAI

    How does it compare to juggernatuxl v7?

    mindrendersDec 11, 2023· 1 reaction

    In my opinion it's better. This is a turbo model so much faster.

    Lykon
    Author
    Dec 11, 2023

    try both at their respective suggested settings and find out :)

    Whiterain1000Dec 11, 2023· 1 reaction
    CivitAI

    You might want to mention the licensing for turbo.

    Lykon
    Author
    Dec 11, 2023

    it's already mentioned in red :)

    Lykon
    Author
    Dec 11, 2023· 6 reactions
    CivitAI

    I've uploaded a ComfyUI workflow and added it to the model description: https://pastebin.com/NjGG1t0W
    Includes automatic detailer nodes.

    shapeshifter83Dec 12, 2023

    @Lykon can you clarify if the Turbo version is simply Alpha2+Turbo or is it essentially a new model with new training? Because my system is fast enough that I prefer non-Turbo checkpoints and their slightly superior quality.

    Lykon
    Author
    Dec 12, 2023

    @shapeshifter83 it's not alpha2 + turbo, like the description says (twice :D). It's an entirely new model.

    shapeshifter83Dec 12, 2023

    @Lykon thanks! are you planning to release a non-Turbo version of the new model?

    JamjamjamDec 19, 2023

    That lora in the workflow doesn't seem to work with the setup. Some errors about size mismatches.

    jr81Dec 12, 2023· 1 reaction
    CivitAI

    I'm using Comfy UI.

    DPM++ SDE Karras, gives great results. 

    Better quality than most regular SDXL models. 

    For the next version, it would be nice, if it also worked well with DPM++ 2M Karras, like "BestMixSDXL.PhotoCinema.Turbo.v1".

    This would be good, not only for faster generation, but more importantly, to get more variations out of the model. 

    shapeshifter83Dec 12, 2023

    DPM++ 2M is fairly deprecated at this point, and it's been found that many common implementations, including the DPM++ 2M Karras that ships with Automatic1111, are literally bugged, with an error in the code. There are a couple of complex, manually applied community patch options, but regardless, it's probably not worth it. There are equally fast samplers available with better quality and very similar determinism.

    jr81Dec 12, 2023· 1 reaction

    @shapeshifter83 I see ... I didn't know about this. I've only tried A1111 in online demos.

    ComfyUI is what I use the most, and I've never noticed any issues.

    But anyways, any other sampler that could be supported would be welcomed. 

    I use this to find interesting variations of images I already like. 

    I've found some with this model, but they are hard to fix. I mean to get them to the same level of quality of the recommended sampler.

    jr81Dec 12, 2023· 1 reaction

    Or perhaps just release a non-turbo version. 

    My GPU is on the slower end, but I prefer to wait a few more seconds and get the best possible results.

    In the long run, this saves time, versus trying to fix an almost good image that's not quite there yet. 

    shapeshifter83Dec 12, 2023

    @jr81 I haven't used ComfyUI yet, but in A1111 there is an option to manually override the scheduler, so the closest alternative to DPM++ 2M Karras would be to manually apply Karras scheduling to the Euler sampler. That would give virtually identical speed and determinism (seeds would look the same) while providing somewhat of a quality boost. The main reason people don't realize DPM++ 2M Karras is flawed is because it's generally the first place people discover the Karras scheduling method which is itself superior and somewhat masked DPM++ 2M shortcomings.

    My recommendation with a slower system would be to use Euler (with Karras or even Exponential or Polyexponential, if your ComfyUI has those options) and spam quick batches of 1- or 2-step images with an early approximation of the prompt you want (instead of a full descriptive prompt, just "girl standing" or "man sitting" or "group of people" or "car") and from the resulting shadowy silhouette blurs you can then select a particular seed to try to flesh out into more definition with higher steps and detailed prompt. I find that method faster than running a full prompt looking for the elusive seed that matches the scene setup envisioned in my mind. Generally the samplers will "see" things the same way you do - if you see a face or a man or a car in that vague blur, the sampler will usually see it the same way you do. (it's sort of amazing tbh).

    This method won't work as well with "SDE" samplers (because the determinism isn't as stable when changing steps) and won't work at all with "a" samplers. However, to be perfectly honest, I'm usually doing this with DPM++ SDE Karras anyway, simply because I think that's the best overall sampler. Even though "seed shadow hunting" isn't quite as easy with SDE.

    Bear in mind certain checkpoints will sometimes require SDE or even DPM++ 3M SDE (Demon Core, for example, and anything merged with that - quite a few around). You'll know it by the resulting strange warping with red tinge.

    I hope my rambling is helpful in any way. xD

    jr81Dec 12, 2023· 1 reaction

    @shapeshifter83 OK, thanks for the tip.

    In Comfy UI, the sampler and the scheduler are selected independently. The DPM++ 2M sampler definitely looks better with Karras scheduler. 

    When I have a specific position or overall shape in mind, it's usually from another picture so I tend to use Image 2 image, with different levels of denoising, and sometimes ControlNet. In Comfy UI, this does not increase generation time significantly.

    Lykon
    Author
    Dec 13, 2023

    This should work with DPM++ 3M SDE. I just don't often suggest it because I use it way less since DPM++ SDE gives in general higher quality results. There is a sampler comparison in the model description.

    carisrainsDec 12, 2023· 1 reaction
    CivitAI

    Does Turbo require different prompting? I'm not impressed with the results I'm getting, and switched back to the former DreamshaperXL. But, it reminds me of when I switched to the XL models: I was getting terrible results that looked much worse, but I needed to change both the size of generations and the way I prompted to get better results; and now I'm getting better results with XL than ever before; but not with Turbo. So I'm wondering if it is the same thing with Turbo, do you need to do something else differently? (I am using the low steps, cfg 2, dpm++ sde)

    The_FolieDec 12, 2023

    You have to be careful not to ruin your image at the upscale/hires fix stage. I usually go with 0.46 to 0.52 denoise strength, and 3-4 steps. Any more and you overcook the image. :)

    carisrainsDec 12, 2023

    @Dafolie Thanks, I didn't highres fix any of the current experiments. The images I'm getting from the turbo model aren't bad or broken, they're just rather plain/boring compared to what I get from the older model with identical prompt.

    Lykon
    Author
    Dec 13, 2023

    in theory, this should be much superior to DreamShaper XL Alpha 2. And it definitely is according to my experiments. It's also much, MUCH, better at anime.

    carisrainsDec 13, 2023

    I seem to be getting much better results today. I'm not sure what was giving me poor results before. I did have --no-half on, which may have been causing issues; I'm not sure if that was on the other day or not, but it was taking a really long time to gen today, so I checked my settings, and after turning that off it's back to seconds to generate and gives decent results. Highres fix still takes a long time, though; I'm not sure if there are better settings to speed that up, but I seem to have fewer hand failures with highres fix than without, so I'm experimenting with that currently, despite it bringing gens back up to a couple of minutes per image.

    I am using a different prompt today than I was when I started this post, I'll have to revisit that prompt to be certain if it is completely better.

    carisrainsDec 14, 2023

    Here is an example, that (sort of) shows what I'm talking about:

    https://www.patreon.com/posts/which-one-do-you-94628267

    Two different sets of prompt, A & B; everything is the same between version 1 and 2, except:
    1 = Turbo, Steps 5, CFG 2

    2 = Alpha2, Steps 20, CFG 6.5

    They're all using the same seed, hires.fix, sampler, upscaler, and size. While nothing is technically wrong with any of the images, the Alpha2 results are much more what was asked for in the prompt, especially in terms of color and mood, and to me are much more creative and aesthetically what I wanted and am used to from DreamShaper. The Turbo images are faster and good, but I feel like they're more like what I get from other models, not what brought me to DreamShaper in the first place. So this is what made me wonder if I should be prompting differently, as the color and style prompts seem to have less effect in the Turbo model.

    AervenDec 12, 2023· 3 reactions
    CivitAI

    Mere seconds on a GTX 1060 6GB... you are a god, m8!)

    olivettyDec 16, 2023

    Well it is because of stability AI, not this dude but I'm sure he appreciates the sentiment! :D <3

    Mr_fries1111Dec 13, 2023· 3 reactions
    CivitAI

    this model is so cool

    SeeyeahDec 13, 2023· 2 reactions
    CivitAI

    Dreamshaper SDXL Turbo is a variant of SDXL Turbo that offers enhanced charting capabilities. However, it comes with a trade-off of a slower speed due to its requirement of a 4-step sampling process. When it comes to the sampling steps, Dreamshaper SDXL Turbo does not possess any advantage over LCM.

    So, what are the differences between DreamShaper SDXL Turbo and DreamShaper SDXL with LCM?

    Lykon
    Author
    Dec 13, 2023· 3 reactions

    This is also LCM. That being said, it's not a full distillation so it retains a high quality that's comparable to non-lcm and non-turbo models, if not superior.

    SeeyeahDec 14, 2023· 1 reaction

    @Lykon thanks! So can SDXL LCM be combined with it to accelerate further?

    The_FolieDec 13, 2023· 4 reactions
    CivitAI

    I'm having too much fun with this. Watching it eat through detailer nodes like Kobayashi at a hot-dog eating contest is awe-inspiring! :)

    1726559Dec 13, 2023· 2 reactions
    CivitAI

    I apologize for asking a silly question, I have struggled to find a clear answer on my own. I seem to get unspeakable horrors when I use the recommended number of Sampling Steps (for most models it looks like it tends to be less than 10 for Turbo XL), and only manage to get ok results around 30 steps.

    Any suggestions? My current settings that yield somewhat normal images:

    Sampling Method: DPM++ 3M SDE Karras
    Width & Height: 1000
    CFG Scale: 2
    Sampling Steps: 30
    Highres Fix: Null
    Refiner: Null

    shapeshifter83Dec 13, 2023· 1 reaction

    adjust your resolution to be exactly 1 megapixel (1024x1024), first, that will help. Besides that, I don't have any further advice since I don't use turbo, hopefully someone else sees your question and can help.

    1726559Dec 13, 2023

    @shapeshifter83 I see I was running the image a little small but even after adjusting the image to 1024x1024 it still comes out pretty bad. Thank you for trying though, I really appreciate it!

    carisrainsDec 13, 2023· 2 reactions

    The recommended sampler is DPM++ SDE Karras, not DPM++ 3M SDE Karras; if you look at the examples with different samplers, all the ones with the sampler you are using are terrible. Steps should be 4-7.

    shapeshifter83Dec 13, 2023· 1 reaction

    oh yea, that's definitely the answer. I missed the 3M in his settings. DPM++ SDE only; Karras scheduling highly recommended and generally always superior. I haven't used Turbo like I said but from what I know about this technology, I imagine Exponential and possibly even Polyexponential scheduling might also produce quality outputs at 4 steps.

    1726559Dec 14, 2023· 1 reaction

    @carisrains @shapeshifter83 Thank you both very much!

    claudioscracher94416Dec 14, 2023· 2 reactions
    CivitAI

    I am using a Google Colab to launch this model through A1111. I have already set DPM++ SDE Karras, 5 steps, CFG 2, and I did not put a negative prompt, and it is not working for me.

    3rixonDec 14, 2023

    Hey, do you mind sharing a copy of your google colab notebook? Been having a really hard time finding one that’s working without any problem

    zx3oDec 17, 2023

    @3rixon https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb
    and you could use my personal commands, which I run every time with this notebook:
    !pip install lmdb
    !pip install pyfunctional
    !pip install cchardet
    !pip install python-dotenv
    !pip install fake-useragent
    !pip install ZipUnicode
    !pip install dynamicprompts[attentiongrabber,magicprompt]~=0.30.4
    !pip install torch==2.1.0+cu118 torchvision==0.16.0+cu118 torchaudio==2.1.0 torchtext==0.16.0+cpu torchdata==0.7.0 --index-url https://download.pytorch.org/whl/cu118

    I'm using them because some of the extensions don't work correctly on Colab.

    nakryumDec 15, 2023· 1 reaction
    CivitAI

    awesome work here, but I was asking myself: what upscaling model would you suggest? I'm using 4x-UltraSharp but it tends to give not-so-good results, too much contrast.

    Asura_7170Dec 17, 2023· 5 reactions

    --> 8x_NMKD-Superscale_150000_G
    hires steps: 4-5
    denoising strength: 0.25-0.45

    SuzanneDec 16, 2023· 4 reactions
    CivitAI

    I tested it in Fooocus and it's very slow, slower than the "original" SDXL turbo and slower than all the other "standard" models, and the results are quite disappointing....

    I followed your recommendations for settings, but it's a bit different in Fooocus...

    are there any solutions for this? can it be optimized?

    Thanks

    fitCorderDec 16, 2023· 3 reactions

    sounds like an issue with foocus

    SuzanneDec 16, 2023

    @fitCorder I don't know, did you test it? Do you have any feedback on the subject?

    fitCorderDec 17, 2023· 1 reaction

    @Suzanne https://github.com/AUTOMATIC1111/stable-diffusion-webui I get 1 second generations with this model by Lykon using regular webui.

    SuzanneDec 17, 2023

    @fitCorder ok, thanks for your answer, but it doesn't answer my question...

    infrezz721Dec 17, 2023

    @Suzanne Fooocus already has an extreme speed mode; I don't know why you need this.

    SuzanneDec 17, 2023

    @infrezz721 because I tested the "basic" SDXL turbo and it's even faster, so I thought Dreamshaper would be even better, because the other one has a lot of limitations (square images only, no NSFW)

    infrezz721Dec 17, 2023

    @Suzanne I mean, why not use DreamShaper alpha2 on extreme speed mode if this version is troublesome?

    fitCorderDec 17, 2023

    @Suzanne It answers your question in that it's something with Fooocus.

    DracianDec 17, 2023· 2 reactions
    CivitAI

    Incredible model. It allows me to create awesome western fantasy pictures. I followed all the advice in the description and it works wonders, extra fast. Definitely a must-have.

    Of course, there are still things to improve, but I think it's mostly related to the model's diversity. For example, I wish I could create more D&D-ish creatures without having to use LoRAs, but honestly, that's more a detail than anything.

    So a huge thanks, mate, and keep up the good work!

    BrotherPazzo629Dec 18, 2023· 2 reactions
    CivitAI

    I'm having trouble nailing down the correct resolutions to use. So far the only one giving me good results is 1024x1024. Any others recommended?

    Yannick90Dec 20, 2023

    896x1152

    pseudokawaiiDec 20, 2023· 3 reactions

    Basically every 2:3 (portrait) or 3:2 (landscape) up to 1024x1024 (not 1024x1536, it didn't produce good results consistently).

    E.g.: 512x512, 512x768, 768x1152, 1152x768.

    _kaidu_Dec 20, 2023· 4 reactions
    CivitAI

    Hi,

    I was skeptical in the beginning because I didn't want to trade quality for time. But it seems that this turbo model is indeed more than competitive with the slower alpha2.

    Just one thing I like more on alpha2: it stays closer to SDXL base, so all my LoRAs work great on alpha2, while they suffer a bit in coherence on DreamShaper Turbo. I'm thinking about retraining/finetuning some of my LoRAs on DreamShaper Turbo. What is the best way to do that? Simply finetuning with kohya_ss on the checkpoint, or do I have to consider something special when finetuning a turbo version?

    JellaiDec 21, 2023

    Yeah, I really want to know a good way to handle this, because I'm also training LoRAs and really want them to work with this more than any other model. It would be a shame to throw away all the other models in favor of this one, though.

    _kaidu_Dec 22, 2023· 1 reaction

    Little update: I did something wrong with my LoRA. After fixing that, it works as well on Turbo DreamShaper as on SDXL base. I'm totally amazed by that.

    I'm still curious about how this model was trained and what the best way to finetune it is. But I just wanna say that finetuning works in principle, much better than for other models like Juggernaut.

    _kaidu_Dec 23, 2023· 2 reactions
    CivitAI

    Hi, I just wanna say this model is awesome.

    At first, I didn't like this turbo stuff much. Why lose quality for speed? But for some reason this model at 6 steps gives me better results than any other model at 50 steps. It's totally crazy. The only thing this model is not good at is high CFG (but that is a limitation of the low step count). You can only run it at CFG 2-3, and thus the model does not respond well to negative prompts. So if I have a very complicated prompt I sometimes have to go back to another non-turbo model. But for everything else this model is unbeatable. It also works very nicely for inpainting and speeds up the whole inpainting process considerably. My LoRAs trained on SDXL base also work perfectly on this model.

    Just a big thanks from my side for this jewel!

    aashiDec 23, 2023
    CivitAI

    Excellent turbo model! But highres fix seems to take forever now. Any tips?

    mx842919032411Dec 23, 2023
    CivitAI

    May I ask why, even though I copied the generation data to the web UI, the generated image is very blurry, and even the person's collarbone is distorted? Did I do something wrong? I sincerely seek advice.

    VolnovikDec 25, 2023

    Depends on your settings, but most probably it was a bad resolution.

    Lykon
    Author
    Jan 10, 2024

    bad resolution, no upscaling, there might be many reasons

    kris_rkJan 22, 2024

    @Volnovik @Lykon which resolutions do you recommend? Is 896x1152 good with this model? Thanks

    VolnovikJan 22, 2024

    @karetirk932  I use the following (ready for the ar-plus extension):

    XL1:1, 1024, 1024

    XL4:3, 1152, 896

    XL5:4, 1145, 916

    XL3:2, 1216, 832

    XL16:9, 1344, 768

    XL21:9, 1536, 640

    Of course you can swap x and y to get 9:16, etc. Also check Fooocus v2 for the resolutions used there; there are a bunch of aspect ratios I don't understand. Also note that the aspect ratios above are not strict; for whatever reason Stability AI trained the model that way.
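The pattern behind these sizes (roughly 1024x1024 pixels of total area, with each side snapped to a multiple of 64) can be sketched in Python. This is only a heuristic, not SDXL's actual training bucket list: it reproduces several of the sizes above, but not all (e.g. it yields 1280x832 rather than 1216x832 for 3:2).

```python
import math

def sdxl_bucket(aspect: float, base: int = 1024, snap: int = 64) -> tuple[int, int]:
    """Width/height with ~base*base total pixels, each snapped to a multiple of `snap`."""
    w = round(math.sqrt(base * base * aspect) / snap) * snap
    h = round(math.sqrt(base * base / aspect) / snap) * snap
    return w, h

# A few ratios from the list above:
for name, ar in [("1:1", 1.0), ("4:3", 4 / 3), ("16:9", 16 / 9), ("21:9", 21 / 9)]:
    print(name, sdxl_bucket(ar))  # 1024x1024, 1152x896, 1344x768, 1536x640
```

Swapping width and height gives the portrait variants (9:16 and so on).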

    LexiusDec 23, 2023
    CivitAI

    "Turbo currently cannot be used commercially unless you get permission from StabilityAI. You can still sell images normally."

    How can selling images be authorized? SDXL Turbo forbids commercial use of the model without a Stability AI Membership. Commercial use includes using the model or a derivative/finetuned model for production. I would assume producing images in order to sell them would definitely be considered commercial use. No?

    BigBlueCeilingDec 23, 2023· 6 reactions

    No. The StabilityAI EULA restricts using the model itself as part of a commercial product without an agreement from StabilityAI.

    For example: You cannot use the model as the backend for an app, wherein you host the model on AWS, the app calls it and generates an image, and you charge people for using the app. In other words, you can't just launch your own Midjourney based on StabilityAI's model.

    But you can use it to generate images, and you, not StabilityAI, own those images.

    b00merangDec 23, 2023· 1 reaction

    @eoffermann I believe this is the correct interpretation.

    It should be added, though, that "owning those images" is a bit questionable given that they are not protected by copyright. You are free to use them, but so is everyone else! (This applies to all models, not just Turbo.)

    LexiusDec 24, 2023

    @eoffermann thanks for the answer. How about using / hosting the model as an internal tool for production / productivity? So not selling it as a service…

    Lykon
    Author
    Jan 10, 2024· 1 reaction

    The main point is: there is no way to know which model made an image, if turbo or not, unless you disclose it. It's also hard to tell if it's AI generated and from which platform.

    superchibisan158Dec 24, 2023
    CivitAI

    Is there a way to coax Dreamshaper_8 style visuals out of this?

    danteaDec 27, 2023
    CivitAI

    Not very good with Asian character LoRAs for realistic photo generation. The AbsoluteReality model used to work very well; hopefully this model gets more updates to catch up with those old 1.5 models.

    diffusionallusion172Dec 27, 2023· 2 reactions
    CivitAI

    Tip: If the image quality is bad, make sure you're using the DPM++ SDE Karras sampler!

    zono50Jan 22, 2024

    What do we use in ComfyUI?
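For what it's worth, A1111's "DPM++ SDE Karras" corresponds to two separate fields on ComfyUI's KSampler node: the sampler and the scheduler. A sketch of the node settings, with names following the standard KSampler inputs and steps/CFG following the turbo recommendations in the description:

```python
# Hypothetical ComfyUI KSampler settings equivalent to "DPM++ SDE Karras" in A1111.
ksampler = {
    "sampler_name": "dpmpp_sde",  # the "DPM++ SDE" part
    "scheduler": "karras",        # the "Karras" part
    "steps": 6,                   # turbo range: 4-8
    "cfg": 2.0,                   # low CFG for turbo models
}
```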

    Mnimmo90679Dec 27, 2023
    CivitAI

    .

    aussieexpert452Jan 4, 2024

    I know.

    PainpainterDec 28, 2023
    CivitAI

    Works great, ty!

    Markinson_De_SadeDec 29, 2023
    CivitAI

    Impressed with the turbo version of the model generally! A great continuation of its standard version, very versatile in styles. Prompting mileage may vary depending on how one is used to prompting. Something I have noticed, and for which I'd like some feedback from the community and hopefully the creator, is that it needs more steps (around 12 to 14) to produce better-looking images, while keeping the CFG at 2. While the 7-step images don't look bad, they seem to be drastically improved with more steps and all other settings the same. What is interesting, though, is that the composition changes when the steps go beyond 8 or 10. Doing an X/Y plot with steps from 5-20 can show the sweet spot. Also, when using "cinematic film still" in the prompt, it produces great realistic images with that cinematic feel, but its depiction of skin tends to be overdetailed to the point of looking artificial. Is there anyone with experience on this and possibly a solution?

    SleezeBagDiffusionJan 18, 2024

    All this weirdness is why I don't really like Turbo models... I don't mind waiting a few more mins for a batch of images to finish vs. Turbo models where like, OK, cool, I saved ~2 mins, but all the images have weird artifacts or don't look as good, so now what? Try to remake them in a non-Turbo model? That wastes even more time than just avoiding Turbo in the first place. RIP Dreamshapers.

    pelipasDec 30, 2023
    CivitAI

    How to use with AnimateDiff in ComfyUI? It crashes...

    Lykon
    Author
    Jan 10, 2024

    Are you using the XL version of AnimateDiff? It works for me.

    DennokieDec 31, 2023
    CivitAI

    I have a big problem with eyebrow color. It's always black. I've tried almost everything to make it white/grey, to no avail. Another problem is unnatural skin color; I tried to make a dark elf. Any solutions, or should I change models?

    aashiJan 1, 2024

    Use inpaint mode with high denoise strength.

    LyrinamiJan 8, 2024

    You might also try with "latent noise" for masked content, low values for "Only masked padding, pixels", and use an ancestral sampler like DPM++ 2s a, instead of DPM++ 2M Karras. Make sure CFG is high enough. Besides, I'd recommend using a 1.5 inpainting model for this task.
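Those inpainting settings correspond roughly to the following A1111 img2img fields. This is a sketch: field names come from the public `/sdapi/v1/img2img` request schema, and the exact numbers are illustrative.

```python
# Hypothetical A1111 img2img inpainting settings matching the advice above.
inpaint_payload = {
    "inpainting_fill": 2,           # masked content: 0=fill, 1=original, 2=latent noise, 3=latent nothing
    "inpaint_full_res": True,       # inpaint only the masked region
    "inpaint_full_res_padding": 8,  # low "Only masked padding, pixels"
    "sampler_name": "DPM++ 2S a",   # an ancestral sampler
    "cfg_scale": 7,                 # CFG high enough for the edit to take
    "denoising_strength": 0.75,     # high denoise so the color actually changes
}
```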

    Lykon
    Author
    Jan 10, 2024

    likely some word or sentence that overfits on dark eyebrows

    APPLAEJan 2, 2024· 3 reactions
    CivitAI

    Thank you for your efforts. It's a really great model. Just one question: is the method of fine-tuning SDXL Turbo much different from fine-tuning SDXL or SD 1.5 (optimizer, parameters, etc.)? Or is there a separate fine-tuning method for Turbo? I would be happy to hear your advice.

    blugailJan 5, 2024
    CivitAI

    I'd really like to see an update to the non-turbo edition. I like the turbo edition, and it's like magic, but it's just not flexible enough to be of much use to me. I do a lot of inpainting and use regional prompter and openpose to create specific scenes. I love how Dreamshaper still understands styles, it seems like one of the few checkpoints that is still flexible in this aspect.

    LyrinamiJan 8, 2024

    Inpainting doesn't really work that well with SDXL checkpoints, even if you used a non-turbo version. I still use the 1.5 inpainting DreamShaper checkpoints for this.

    Lykon
    Author
    Jan 10, 2024

    there is no point in updating the non-turbo version. It's super old and outdated at this point.

    steamrickJan 17, 2024

    @Lyrinami If inpainting hands or feet, try putting the relevant limb in the negative prompt and leaving the positive empty. It boggles my mind, but it effin works on SDXL.

    LyrinamiJan 18, 2024

    @steamrick Will try, thx for letting me know!

    blugailJan 27, 2024

    @Lykon I still love Dreamshaper 8, so thank you for that. It and deliberateV_11 both have almost magical results when I do a Euler a pass in img2img.

    johnriley0003776Feb 13, 2024

    I personally don't have any problems with inpainting on this model.

    It works perfectly, but only with Fooocus; in Automatic1111 it's not so great, probably because of the different inpainting engine.

    I tried it in many crazy situations, adding all kinds of stuff, improving resolution and details on faces and even hands, and it doesn't have any issues whatsoever.

    With ComfyUI I don't have much experience, so I can't tell.

    From my personal perspective the model is highly flexible in any situation, from fantasy to realism. I've rendered logos, cartoon stuff, comics and SF-themed images, and it shines in everything. Hands are also almost perfect in any situation, so I don't have many cases where I really need to inpaint them for wrong anatomy; I mostly inpaint them to add details and skin features.

    Hope this helps.

    Lykon
    Author
    Feb 15, 2024

    @johnriley0003776 I personally tested it with Fooocus and it works. Unfortunately Fooocus alters the model a bit, changing colors and applying a more realistic style. It kind of damages anime a bit too.

    There are other ways to have an XL model inpaint, namely a ControlNet trained on inpainting, which is more effective. I don't think there is one publicly released yet.

    johnriley0003776Feb 17, 2024

    @Lykon In general I always turn off all styles in Fooocus, even the Fooocus V2 features, because like you said they can damage the images and alter the model.

    I'm using manual prompting without any of the Fooocus styles, because this is the best way to fine-tune the model to personal taste and preferences. But for people who are totally new to SD, many of the Fooocus features can help in a great way.

    For experienced users I would always suggest ticking off all the boxes and writing the prompt the same way as in Automatic1111.

    francishsw891Jan 5, 2024
    CivitAI

    May I ask, can the model be used with the WebUI in a Colab notebook, or used here on the Civitai website?

    Lykon
    Author
    Jan 10, 2024

    @Maxfield

    bosquewaves474Jan 7, 2024
    CivitAI

    I use DPM++ SDE Karras, 4-7 steps, CFG 2, and I get worse results than with DreamShaper SD 1.5. I don't understand what I'm doing wrong.

    LyrinamiJan 8, 2024· 1 reaction

    You should use either DPM++ SDE or DPM++ SDE Karras for best results. Besides, make sure you use the SDXL VAE, not the SD 1.5 version. I think the VAE is already baked into this checkpoint, so you might as well choose none.

    Lykon
    Author
    Jan 10, 2024

    resolution?

    yukka69Jan 17, 2024

    are you using hires.fix?

    unrealreinhardtJan 8, 2024· 2 reactions
    CivitAI

    Is this model available on Hugging Face? Then I could use it with Diffusers.

    LyrinamiJan 8, 2024· 1 reaction

    Yes, you can run it with the Diffusers library: https://huggingface.co/Lykon/dreamshaper-xl-turbo

    diffusionallusion172Jan 10, 2024
    CivitAI

    Would it be possible to distil this model like Segmind SSD-1B to make it easier to run on lower end GPUs? Segmind SSD-1B - v1.0 | Stable Diffusion Checkpoint | Civitai

    Lykon
    Author
    Jan 10, 2024· 1 reaction

    I'll look into that

    Av414ncheJan 13, 2024
    CivitAI

    But can this be used as a refiner?

    Lykon
    Author
    Jan 17, 2024· 1 reaction

    as long as you set sampler/steps/cfg correctly, yes

    Av414ncheJan 24, 2024

    @Lykon  I use Fooocus, so it's only the Refiner Switch slider

    kirassJan 14, 2024· 1 reaction
    CivitAI

    Can the DreamShaper model be integrated into an iOS app, but self-hosted locally?

    Lykon
    Author
    Jan 17, 2024

    for commercial use of Turbo you need to get the Turbo/SVD license from Stability.

    bennykill709Jan 17, 2024· 1 reaction
    CivitAI

    "Turbo currently cannot be used commercially unless you get permission from StabilityAI. You can still sell images normally."

    This is confusing to me... are you able to sell images or not?

    My question is: how can someone tell you "this was made with my checkpoint or with Stable Diffusion, you can't sell it"? After editing a photo with other software, all the metadata is gone... so...

    Lykon
    Author
    Jan 17, 2024· 1 reaction

    as long as images don't violate copyright, you can

    enochianborgJan 20, 2024

    I hope to solve some of the confusion on this subject. They mean companies such as DeviantArt, which have their own AI generators that use Stable Diffusion: the checkpoint cannot be integrated commercially into their system without the permission of Stability AI. You, however, can use it locally, and then your "non-copyright-violating images" can be sold by you on other sites such as DA, within their rules of course. I hope this cures some of the wonder left by the statement about commercial use. :) P.S. Thank you to Lykon, I have been very pleased with many of the checkpoints you've made!!!

    EricRollei21Jan 22, 2024

    @enochianborg "non-copyright violating images" are ones that don't have some brand's product in them? Or is it more encompassing - such as looking similar to some photographer or artist's work?

    enochianborgFeb 5, 2024

    @EricRollei21 Well, let's say for instance you make a render of Tom Cruise that has his likeness... that would violate his rights if you were to sell that image without consent. Looking similar to someone's style is commonplace; pretty much all waifu art is the same style. And there is absolutely NO photography style that is copyright protected; photography styles have been used and taught in college for years, as long as you are not claiming that a particular photographer or company, for instance Pixar or Disney, endorsed or had a part in its creation. You could do fan art of Disney's Elsa and post it publicly but NOT sell it for profit, so long as you disclaim that it was fan art and you are not paid for its exhibition. I have created mature AI generations of famous persons, but I cannot sell them because I do not have permission from the person or the agency which controls their likeness. Take Megan Fox: I can post them as fan art, but I haven't done so yet because I'm not sure it won't get me on her naughty list... The OP is about the separate subject of injecting the Turbo model into commercial software that generates AI images, which is a different matter from what content you can sell as your own creations. Like apples and oranges, in a way.

    hben35096800Jan 17, 2024· 5 reactions
    CivitAI

    The Turbo version is really fast and has a high image quality, but when using ControlNet, the image quality drops significantly compared to other SDXL models. Is there anything I need to be aware of when using ControlNet, or will you be making a regular XL model of the same quality?

    Lykon
    Author
    Jan 17, 2024

    nope, it should work well

    HollowAbsenceJan 22, 2024

    What is your resolution? I get great results for vertical images, but 16:9 horizontal is a lot less detailed and sharp.

    mckenzief949618c468Feb 8, 2024· 2 reactions

    I'm also seeing quality decline with ControlNet (mostly in the form of blurring)

    Lykon
    Author
    Feb 15, 2024

    @mckenzief949618c468 use better ControlNet models. We use this for work with all sorts of cnet workflows and we've noticed no decline in quality. It's just a placebo effect due to the fact that "it's different"; there is nothing in the Turbo process that makes cnet perform less well. Turbo is just adversarial training to use fewer steps and lower CFG.

    Checkpoint
    SDXL Turbo

    Details

    Downloads
    78,015
    Platform
    CivitAI
    Platform Status
    Deleted
    Created
    12/5/2023
    Updated
    4/27/2026
    Deleted
    10/16/2025

    Files

    Available On (3 platforms)

    Same model published on other platforms. May have additional downloads or version variants.