A mix I am still testing, but since it has been requested over and over again, I decided to release it anyway.
With highres. fix it is stable even at higher resolutions (though not perfect!).
It is a pretty fancy mix.
If used well it has few flaws.
Play around with this model by combining it with other embeddings, but be careful: I have had the best results by mixing in multiple embeddings at low weight (example: WildStyle:0.2).
It's not very useful if you're going for photorealism.
That said, I hope you enjoy it.
P.S. Sorry about the hands.
P.P.S. For those trying to recreate my images by taking the prompts from them: be warned that it is impossible to get the exact same image. This is because of the upscale method I use, so the steps, the CFG scale, and the sampler are all different from those used to create the base image. SORRY. If you want, you can find me on the Unstable Diffusion Discord server: https://discord.gg/unstablediffusion
Comments (31)
Which upscaler did you use?
4xUltraSharp. Here's the download: https://mega.nz/folder/qZRBmaIY#nIG8KyWFcGNTuMX_XNbJ_g Place the .pth file into your ESRGAN folder.
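In case it helps anyone else, here is a rough sketch of where the file goes for the AUTOMATIC1111 web UI. The paths below are assumptions; adjust them to your own install and download locations:

```python
import shutil
from pathlib import Path

# Assumed paths; change these to match your own setup.
webui = Path.home() / "stable-diffusion-webui"
upscaler = Path.home() / "Downloads" / "4x-UltraSharp.pth"

# The web UI picks up ESRGAN-family upscalers from models/ESRGAN.
target = webui / "models" / "ESRGAN"
target.mkdir(parents=True, exist_ok=True)

# Move the downloaded weights into place (skipped if not downloaded yet).
if upscaler.exists():
    shutil.move(str(upscaler), str(target / upscaler.name))
```

After restarting (or refreshing) the UI, the upscaler should appear in the upscaler dropdown.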
This model looks very good! Was it removed for some reason?
removed?
Click the arrow beside the download button to select it. For some reason it doesn't seem to be the default download, but that works here. I've never uploaded a model, so I don't know why it would do that.
My bad; for a moment even the dropdown arrow was giving me file-not-found problems.
I'd like to test it, but I can't use it since I'm running a DirectML SD build, as I only have a 6900 XT.
I would really love to use it, as it doesn't seem to have that pale-looking-face issue most anime models have.
Any chance of getting an ONNX model?
Sorry, but I'm not much of an expert on these things. I just merged some models, and that's where my knowledge ends; I don't even know what ONNX is.
I tried it, but mine doesn't look as good as yours :c Any tips? Are you using anything else, or just an upscaler?
I strongly recommend you mix in some embeddings.
For testing, I always like to see if I can generate images similar to the author's, yet even with the embeds that you use, I cannot seem to get even remotely similar results. It's quite confusing, really. I'll look into it more.
Yeah, me too. Please update us.
I added a P.P.S. to the description: because of the upscale method I use, the steps, the CFG scale, and the sampler all differ from those used to create the base image, so it is impossible to get the exact same image. Sorry.
@wildkeeper Thanks very much for your answer, but this isn't only an upscaler problem.
For example, I tried to reproduce the two women in winter. Over a hundred tries, I never got anything close to that kind of coat, only very simple coats with simple patterns.
I have the Samdoesarts and winter-style embeddings. Are other embeddings necessary?
@plukik Embeddings and LoRAs would be listed in the prompt, so it is not an embedding issue. Hypernetworks are a different story, however.
I think wildkeeper probably does a lot of inpainting, and they have said they use some custom upscaler mix. Everyone should be doing inpainting if they want decent images outside of portraits (you might already, but I'm stating that for anyone passing through).
If you haven't already, try inpainting one entire coat or woman, and try both "Whole picture" and "Only masked" at different denoising strengths. (I find 0.5 works best across all the checkpoints I've used, but do mess around if you have time. I also find "Only masked" is good for smaller areas, while "Whole picture" is better for larger ones. This isn't always the case, though.)
I wish the main upscaler worked for me, but it is completely broken on my end. I'll check in a few weeks whether it's been updated and suddenly works, lol. I'm going to try the beta and some of the other versions and see if they work.
Also, hardware affects the image outcome, and in my experience even load on the GPU can affect the output, so we will never have the exact same images for high-res pictures like this, unfortunately.
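For anyone scripting this instead of clicking through the UI: the web UI also has an optional HTTP API (enabled with the `--api` flag), where "Only masked" corresponds to the `inpaint_full_res` flag on the img2img endpoint. A minimal sketch of the request payload; the image strings and server address are placeholders, not real data:

```python
import json
import urllib.request

# Placeholder strings; in practice, base64-encode your PNG bytes.
init_image_b64 = "<base64 of the source image>"
mask_b64 = "<base64 of the black/white inpaint mask>"

payload = {
    "init_images": [init_image_b64],
    "mask": mask_b64,
    "denoising_strength": 0.5,  # the value that worked best for me
    "inpaint_full_res": True,   # True = "Only masked", False = "Whole picture"
    "steps": 30,
    "cfg_scale": 7,
}

# Uncomment to send to a locally running web UI started with --api:
# req = urllib.request.Request(
#     "http://127.0.0.1:7860/sdapi/v1/img2img",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# result = json.load(urllib.request.urlopen(req))
```

This just makes the same settings reproducible; the denoising advice above applies unchanged.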
Do you use any other embeddings or hypernetworks in your uploaded examples?
I'd appreciate links if you have them.
My generated results differ a lot in quality. What settings do you use for the upscaler?
Same question. I would also like to know which VAE was used.
There is definitely a VAE or hypernetwork being used on this. Even with the same embeds and upscaler, the results are far too dramatically different for it to just be system differences.
Samdoesarts (original, one, and two) are the only embeds in the curated pics. They can be found on this website under Textual Inversion and go in that folder (or the embeddings folder, depending on how your system is set up).
The upscaler is 4xUltraSharp (download link provided by the author): https://mega.nz/folder/qZRBmaIY#nIG8KyWFcGNTuMX_XNbJ_g
However, it's broken for me and throws an error when used within SD.
I added a P.P.S. to the description: the upscale method I use means the steps, the CFG scale, and the sampler differ from those used for the base image, so exact recreation is impossible. Sorry.
@wildkeeper Thank you very much for this information! Truly! I didn't mean to come across as harsh (I do get nice images from this), but I was just curious, and admittedly frustrated at first, until I found some settings that worked. I really like this upload!
@wildkeeper
Why do your pictures say dreammix4 for the model, and [email protected]? Is this a different model of yours?
Unless they were removed, I'm not seeing any image with the model "[email protected]". Some of them do have the model name "dreammix4", but some also have "Wild's Mix"; I assume that's because those images were generated before the author renamed the model to its final name. If you look at the hash, it is the same... except for the slightly more realistic blue-haired girl, whose model is "dreammix5(f22-sam0.4)", hash e4c0ecb3. Probably from a different mix that got thrown in with this bunch?
Can you add a safetensors file as well?
I can't, sorry. I don't remember the mix, so I can't recreate it.
@wildkeeper There are a number of tools that will let you convert your existing .ckpt file, so just convert the one you uploaded. You don't need to remember what the mix was.
I'm working on it, in case you still care.
Safetensors model added.
@wildkeeper thanks ♥
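For the curious: the .safetensors format being asked for above is deliberately simple, which is why conversion tools don't need to know anything about the mix. A file is just an 8-byte little-endian header length, a JSON header describing each tensor, then the raw tensor bytes, with no pickle involved (so loading one can't execute arbitrary code the way a .ckpt can). A toy pure-Python sketch of a writer, just to illustrate the layout (not a real conversion tool, which would need torch to read the .ckpt):

```python
import json
import struct

def write_safetensors(path, tensors):
    """Write a minimal .safetensors file from {name: (dtype, shape, raw_bytes)}.

    Layout: 8-byte little-endian header size, JSON header with per-tensor
    dtype/shape/data_offsets, then the concatenated raw tensor bytes.
    """
    header = {}
    offset = 0
    payload = b""
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {
            "dtype": dtype,
            "shape": shape,
            # Offsets are relative to the start of the data section.
            "data_offsets": [offset, offset + len(raw)],
        }
        offset += len(raw)
        payload += raw
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))
        f.write(header_bytes)
        f.write(payload)

# A toy 2x2 float32 tensor of ones, packed as little-endian bytes.
ones = struct.pack("<4f", 1.0, 1.0, 1.0, 1.0)
write_safetensors("toy.safetensors", {"weight": ("F32", [2, 2], ones)})
```

In practice you would just use an off-the-shelf converter (or the `safetensors` library itself) rather than writing the bytes by hand.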