NeverEnding Dream (a.k.a. NED)
This is a dream that you will never want to wake up from
Add a ❤️ to receive future updates. This took much time and effort, please be supportive 🫂
Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee ☕
V1.22 update: thanks to Bokus for some of the preview images and the tests.
Available on the following websites with GPU acceleration:
It has been a while since I made a model. I've been making LoRAs non-stop for the last month, but I felt I needed something new. A big problem with DreamShaper is that it wasn't very compatible with the LoRAs I was making. So I wanted to create a model that was "artistic" like DreamShaper but also able to use my LoRAs and booru tags for generating images. Plus, I wanted it to make good anime stuff out of the box, without needing an additional LoRA.
After a lot of testing and fine-tuning (plus LoRA merges), here is my newest model.
It came out a bit more realistic than I wanted, but I won't complain.
What is this model great at?
Generating cosplay images
Generating anime pictures
Working accurately with character LoRAs
Generating good looking people
Generating realistic animals
Generating images using booru-like tags
What is this model not too good at?
Generating pictures from complex sentences
Making fantasy/sci-fi paintings
So this basically complements DreamShaper, and it also doesn't include Dreamlike in the mix.
I hope you'll enjoy it!
Some tips:
Use CLIP skip 2.
Select "auto" as vae if you're using the baked vae version.
Use highres fix or img2img to upscale the images after you get a preview. See the generation data on the examples for suggested settings; I've tested various techniques.
If you're making images where the subject is far away, remember you can inpaint eyes and faces by selecting "only masked". (A minimal diffusers sketch of these settings is just below.)
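If you use diffusers instead of a webui, here's a minimal sketch of the tips above. I'm assuming a recent diffusers version (with from_single_file and clip_skip support), that "auto" VAE resolves to ft-mse as it does in most setups, and the local checkpoint path and prompt are placeholders:

    # Minimal sketch; checkpoint path and prompt are hypothetical placeholders.
    import torch
    from diffusers import StableDiffusionPipeline, AutoencoderKL

    # "auto" VAE in the webui typically resolves to ft-mse (assumption).
    vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
    )

    pipe = StableDiffusionPipeline.from_single_file(
        "neverendingDreamNED.safetensors",  # hypothetical local path
        vae=vae,
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe(
        "1girl, cosplay, detailed eyes, looking at viewer",  # booru-style tags
        negative_prompt="lowres, bad anatomy",
        clip_skip=2,  # CLIP skip 2, as recommended above
        num_inference_steps=30,
    ).images[0]
    image.save("preview.png")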
Stable Diffusion is great, have fun with it!
Description
About this version
Not for training or mixing. This is only for inpainting and outpainting.
VAE is baked in.
FAQ
Comments (50)
Would you say this would be an upgrade compared to DreamShaper or would it be best to use both in tandem? What are the advantages of one versus the other in your own preference? Cheers and thank you for another epic model.
Thanks for the question. To me they're vastly different. DreamShaper is trained on captions and is more tailored towards paintings and artworks. NED is trained on booru tags and is better for semi-realistic stuff, porn, and anime. I think DreamShaper has higher highs, but NED is better for otakus and sexy stuff. With the right prompts NED can also get pretty much photorealistic, while DS purposely avoids that. DS also makes the best backgrounds ever, hard to beat for any model.
@Lykon thank you so much for the in-depth response, I'll definitely take a go at this one and see what I can make of it :) Cheers!
Does the inpainting version have baked vae?
yes.
@Lykon Thanks :)
Question from a still-learning n00b here: would it be possible to train on top of this, or is that something you're not OK with? I'm still researching some advanced methods beyond what I've learned and wanted to build a model based on certain things, noting that the 1.5 base is a nightmare sometimes.
you can very much train on top of this. And some people are already doing it.
@Lykon Sweet, as soon as I'm done TESTING the base 1.5 of the new model I'm doing now, I might re-train it on this. I adored DreamShaper, and I love your style for this.
@duskfallcrew I'm training my next LoRA on NED and comparing it to the same settings on Any4.5 and nai. It's turning out superior.
@Lykon I was TRYING to train it via the fast diffusion notebook, but it wasn't liking the file for some reason. I DO still want to do a test train on yours, because I might do a three-fold merge with my working portrait models, but the thing won't "CONVERT" even the fp16, and I tried to convert it to a normal checkpoint myself and use that, but it didn't work ;__;
AND ON TOP OF THAT, it wouldn't be your fault; it's just that the notebook I was using wasn't liking it, so if you have any suggestions let me know :3 (I can't train locally, my Mac hates me)
@duskfallcrew I finished training it and posted. It's the realistic Ganyu. I've also reposted one of the outputs here on the NED page.
@Lykon Sweet, I probably need to pick your brain somehow on why the model wasn't working for training XD
Wow, looks great!
pruned ckpt with baked VAE, plz
None of the model hashes match the ones from the sample pictures you posted. Are you sure the samples were made using the exact same models?
They were made using the noVAE version (with the same VAE as the baked one, but selected manually), and the hashes do match with that one. See the discussion under the Nanakusa example.
Question. All of my renders are coming out washed out using the same prompts and settings you have here. I'm using the baked VAE pruned version of NED. I've tried clip skip 1 & 2, no VAE, auto VAE. The images still look really nice, but come out in a dreamy, unsaturated art style. Any suggestions? Great model by the way. If I can fix this issue, I will be super happy. :)
you probably have a vae selected in your settings. Use auto in settings + the baked vae version.
everyone needs to contribute to buying this man a new 4090
https://www.buymeacoffee.com/lykon/wishlist
Damn straight. Even 3!
Using the no-baked-VAE version, which external VAE do you recommend?
NED!!!!!
https://i.imgur.com/Bfn35wp.png
I have tried my best, but unfortunately, there is a significant gap between me and the developer
First of all, this model is amazing. Creates amazing images with good quality.
The only problem I'm having is getting a different face. The checkpoint tends to recreate a similar face every time (like a young/childlike face). I tried with a "mature face" prompt and negative "child, young", and it still creates that similar face.
Still working on the prompts, trying to get a more mature face on the models.
Sameface is a problem with most SD models. I found that using LoRAs and embeddings easily patches that, but you could also use age ranges inside the prompts.
Hey Lykon, love your work and stuff. I saw you just posted new versions of DreamShaper, and I was wondering if you had any plans to keep working on NeverEnding Dream too? I love them both!
Of course. They're different models and I'll keep supporting both. I'm already working on a new version of NED.
Any chance to see a pruned CKPT version with VAE? It's the only option available for Diffusion Bee users on an Intel Mac.
Pruned with baked vae is a bit odd. But I'll see what I can do.
Help: using the baked VAE version, I can't get the colors to come out like the preview. I don't select any VAE because I don't have any VAE in the folder. So what's the problem?
you should select auto.
@Lykon it's always auto by default
@pteranodon then I'm not sure. I'm using exactly that uhm.
I believe downloading vae-ft-560000-ema-pruned.ckpt or vae-ft-840000-ema-pruned.safetensors as an alternative VAE should help.
A great checkpoint; one consistent problem, though, is that the belly button shows through relatively tight clothing a lot with this particular model.
Could be because of the "anime" parts of the mix. I'll try to address it in later versions.
Baked VAE: should I turn the VAE off in webui and Easy Diffusion? Also, the inpainting version: does that mean I should use that version for img2img? Can it still be used for txt2img, or are they to be treated as totally separate?
Inpainting versions are for, well, inpainting: filling in the blanks and adapting to the parts of the image left untouched. Using them in img2img is mostly fine, but txt2img produces weird results.
Put the VAE on automatic and it'll use the baked VAE.
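For diffusers users, here's a rough sketch of that inpainting workflow, assuming a recent diffusers version with from_single_file support; the file name matches the release here, but the image paths and prompt are placeholders:

    # Rough sketch; image paths and prompt are hypothetical placeholders.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_single_file(
        "neverendingDreamNED_-inpainting.safetensors",  # the inpainting release
        torch_dtype=torch.float16,
    ).to("cuda")

    init_image = Image.open("portrait.png").convert("RGB")  # hypothetical input
    mask = Image.open("face_mask.png").convert("RGB")       # white = repaint

    # Only the masked region is regenerated; the rest is kept and adapted to.
    result = pipe(
        prompt="detailed face, sharp eyes",
        image=init_image,
        mask_image=mask,
    ).images[0]
    result.save("inpainted.png")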
I found that the baked VAE model with SD VAE ft-mse-840000-ema-pruned gives the best results. NED baked VAE + VAE none or automatic gives me washed-out results.
@Katsura9000 odd. Auto VAE should give you exactly ft-mse.
Excuse me, but why did you change the model name? I thought it was a new version, but comparing them I found nothing new.
Beautiful model, and my favorite! Just one question: what upscaler do you usually use? I found that Latent adds blur, Latent (nearest-exact) seems to keep the image sharp but the colors seem washed out, and R-ESRGAN 4x+ Anime6B seems to be somewhere in between: not too much blur, but decent colors. I didn't really try any others. For reference, I generate batch images at 512x768 and then upscale to 896x1344. I didn't really bother with img2img as it adds even more noise/blur IMO.
Depends on the style. For anime I use anime upscalers; for photos it depends on the result. I try Latent, Lanczos, ESRGAN General, and even "none" sometimes (img2img).
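As a sketch of the img2img upscale route in diffusers (the paths, prompt, and 0.35 strength are assumptions, not fixed recommendations):

    # Sketch of an img2img upscale pass: resize with Lanczos, then let the model
    # re-add detail at low denoising strength. Paths/values are placeholders.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_single_file(
        "neverendingDreamNED.safetensors",  # hypothetical local path
        torch_dtype=torch.float16,
    ).to("cuda")

    low_res = Image.open("preview.png").convert("RGB")     # e.g. a 512x768 preview
    upscaled = low_res.resize((896, 1344), Image.LANCZOS)  # plain Lanczos upscale

    # Low strength keeps the composition; higher values repaint more aggressively.
    result = pipe(
        prompt="1girl, cosplay, detailed eyes",
        image=upscaled,
        strength=0.35,
    ).images[0]
    result.save("upscaled.png")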
Details
Files
neverendingDreamNED_-inpainting.safetensors
Mirrors
10028_neverendingDreamNED_-inpainting.safetensors
neverendingDreamNED_-inpainting.safetensors
119_neverendingDreamNED_-inpainting.safetensors
NeverEndingDream_ft_mse-inpainting.inpainting.safetensors
