Important: Having multiple models uploaded here on Civitai has made it difficult for me to respond to each and every comment. One way I plan to address this is by creating a PDF guide for each model (think of it as the model's documentation). That will take a while, though, so in the meantime, if you have any questions or feedback -
visit my thread on the Unstable Diffusion Discord
Like photorealism? Try my new fine-tune 'Lomostyle'
VAE Required
Download VAE - https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main
|──sd
|____|──stable-diffusion-webui
|____|____|──models
|____|____|____|──VAE
|____|____|____|____|──<Put your VAE file here>

Cine Diffusion - Fine tuned on cinematic stills and 35mm film.
Note: This description is limited at the moment. Full PDF documentation is in the works.
Have you heard of the high elves?
If you're attempting to generate an image of an elf and aren't seeing pointy ears:
Set your CFG to 7+.
Make sure elf appears closer to the beginning of the prompt.
Warning - This model is a bit horny at times. I tried to alleviate this by fine-tuning the text encoder on the classes nsfw and sfw, but this solution is not perfect and more experimentation is needed. I've had the most success adding sfw or nsfw within the first 75 tokens of the prompt. Adding sfw or nsfw to the negative prompt can also help.
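As a rough illustration of the "first 75 tokens" advice above, here is a small sketch that checks whether a trigger word lands early enough in a prompt. Note the whitespace split is only an approximation I'm using for illustration; Stable Diffusion's CLIP tokenizer uses subword units, so real token counts differ.

```python
def trigger_in_first_tokens(prompt: str, trigger: str, limit: int = 75) -> bool:
    """Rough check that a trigger word (e.g. 'sfw' or 'nsfw') appears
    within the first `limit` tokens of a prompt.

    Splits on whitespace as an approximation; the actual CLIP tokenizer
    counts subword tokens, so treat this as a sanity check only.
    """
    tokens = prompt.lower().replace(",", " ").split()
    return trigger.lower() in tokens[:limit]

# Example: put the class token early so it lands in the first chunk.
prompt = "sfw, cinematic still of a high elf, 35mm film grain, soft lighting"
print(trigger_in_first_tokens(prompt, "sfw"))  # True
```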
Settings
Clip skip: 1
ETA Noise Seed Delta (ENSD): 31337
CFG: 6-8
Sampler recommendations:
DPM++ SDE Karras, 18-40 steps
Euler a, 20-32 steps
DPM++ 2M Karras, 30-70 steps
Heun, 20-30 steps
Euler, 20-70 steps, Sigma Churn = 1
These parameters are not strictly required; experiment with other samplers and parameter values, and you might find something that works better for you.
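Put together, a typical set of generation parameters matching the settings above might look like this in AUTOMATIC1111 webui's parameter format. The prompt, seed, and size here are placeholders I chose for illustration (4:3 per the sample images); the step count is taken from the DPM++ SDE Karras range:

```text
sfw, cinematic still, 35mm film
Negative prompt: bad_prompt_version2
Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: -1, Size: 768x576, Clip skip: 1, ENSD: 31337
```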
Textual Inversion (Negative) used in sample images :
bad_prompt_version2 - https://huggingface.co/datasets/Nerfgun3/bad_prompt/blob/main/bad_prompt_version2.pt
Install location -
stable-diffusion-webui -> embeddings -> <place .pt embedding here>
Check out my other models
SDXL
Boomer Art Model - https://civarchive.com/models/163139/boomer-art-model-bam
SD1.5
Doomer Boomer - https://civarchive.com/models/118247?modelVersionId=128239
Lomostyle - https://civarchive.com/models/109923/lomostyle
Based Model - https://civarchive.com/models/83991?modelVersionId=89262
Electric Eden - https://civarchive.com/models/64355/electric-eden
Project AIO - https://civarchive.com/models/18428/project-aio
WonderMix - https://civarchive.com/models/15666/wondermix
Experience - https://civarchive.com/models/5952/experience
Elegance - https://civarchive.com/models/5564/elegance
VisionGen - Realism Reborn - https://civarchive.com/models/4834/visiongen-realism
LoRA
Pant Pull Down - https://civarchive.com/models/11126/pant-pull-down-lora
Description
Cine_Diffusion_v3
Baseline model consists of :
base_2 (trained for anatomy)
HDR1 (trained for lighting/texture)
Cine_Diffusion_v1 (First attempt at this model that was based on SD 1.5)
Movie Diffusion - https://civitai.com/models/8067/movie-diffusion-v12
Training Data :
Most of the cinematic film stills were obtained via https://film-grab.com/
35mm stills were largely obtained via https://www.lomography.com/films
Extra training:
Text-encoder-only training for nsfw and sfw triggers.
More info + training logs and sample data will be available once the documentation is uploaded.
Comments (17)
I'm curious, was your checkpoint trained from SD 1.5, or did it start from a different model? I'm not a big fan of a lot of merges, as all the faces and body proportions resemble each other. In general, I really like your models though.
The first attempt (which actually ended up in the new baseline model) was trained on SD-1.5. It worked well but was almost "too" cinematic and lacked some of the variety I wanted. I may do a bit more training on the first version and release it separately in the future.
This version (v3) has a merged baseline model that I fine tuned on top of. You can find more info on the baseline model in the "About this version" section.
For cinema, at least, the 16:9 format is needed... There is no cinema in this model.
16:9 is mostly a TV format, Cinema is more between 1.85:1 and 21:9.
I don't agree that there is "no cinema"... but I do see what you mean. I think the aspect ratio, at least as it currently is, is simply a limitation of where we're at with Stable Diffusion as a whole. Those sorts of aspect ratios can certainly be achieved, but they require lots of extra attention and usually the use of certain settings and plugins.
I think though that while the aspect ratio is noticeably not truly "cinema" format.... the overall aesthetic and feel seems right on the money, so I do believe our hard working friend @ndimensional here has put in place a good tool to start with to achieve great results, if the user just goes the extra mile and does the rest of what's necessary if they want to create authentic cinema generations.
That's because of a limitation of training with SD 1.5. I've debated making a similar model with SD 2.x but decided to put it off because A) SD 2.x is terrible with anatomy, and B) SDXL is likely going to be released publicly in the near future, and I see more opportunity to fine-tune the latest version for this purpose, if it's even possible with my current setup. I'm training on one 3080 Ti. Training on a 1.85:1 or 21:9 dataset (the actual cinematic ratios, not 16:9) would be incredibly demanding for one little 3080 Ti. Even 16:9 would be computationally dense.
This is why most of the sample images are upscaled (using Hires fix) at 4:3. 4:3 was the standard for film up until the 1950s and remained in use throughout the 20th century for TV. Most importantly, it's well within SD 1.5's capabilities to generate images at 4:3.
@Balthazar99 Thanks, you nailed it. My intention was very much to make a model that captured the aesthetic of cinema/film (mainly lighting/texture; I admit the composition leaves a bit to be desired and will be something I work on in a future update), not to recreate authentic film stills. Granted, that is something I'd like to do in the future. It's just not possible for me at the moment with my current setup.
@frosiakolobanova306 Try https://civitai.com/models/3091/cinematic-diffusion
It's more of what you're looking for.
@ndimensional Raised the rating to 5 stars. Added my work to this branch. All sizes are made for cinema. But, this is on my model.
Oh, it's horny all right! (Not that there is anything wrong with that.) Admit it, OP, you trained it on parody porn movies :D
Thanks for the suggestion... wait that was a suggestion right? 🤣
This looks really amazing so far. I think the realism in the model is really impressive too, the training must have been pretty solid in that regard, if I'm interpreting the training you did correctly. Awesome, can't wait to check it out.
And of course, I'm excited to read your additional documentation once it's completed as well. On the models alone, you're already one of my favorite creators on the site... but it takes it to a whole other level when you add in how much I love hearing you discuss the details of Stable Diffusion, both the simpler concepts and the more in-depth stuff.
Thank you for all you do @ndimensional, keep up the great work!
Great model, very good for photorealism. Keep up the good work!
Overall I'm fairly impressed by this model. My only comment is that if you use the word "superhero" it really likes to use the "Superman S"
Do you have a non-pruned version available by chance? (for better merging, just trying to make a private merged version and while I am having great success with this one, I'm just asking...)
Amazing and underrated model, but it keeps making NSFW stuff even though I've already put every possible NSFW term in the negative prompt.
If there are any suggestions, I'd appreciate them.
Pure masterpiece. Nothing comes closer to this model in realism. Not just character bodies: the backgrounds are stunning, as are the clothing and style. Exceptional output.
Is it possible for you to make a prompt guide for this model?