Use the activation token "analog style" at the start of your prompt to invoke the effect.
Results are much better using hires fix, especially on faces.
HuggingFace link - This is a Dreambooth model trained on a diverse set of analog photographs.
You may need to use the words "blur", "haze", and "naked" in your negative prompt. My dataset did not include any NSFW material, but the model still seems to be pretty horny. Note that using "blur" and "haze" in your negative prompt can give a sharper image, but also a less pronounced analog film effect. - Trained from 1.5 with VAE.
Here's a link to non-cherrypicked batches.
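The tips above (activation token first, "blur haze naked" in the negative prompt) can be sketched with the Hugging Face diffusers library. The model id, prompt, and generation settings below are assumptions for illustration, not values published by the author:

```python
# Sketch: txt2img with the "analog style" activation token via diffusers.
# The repo name, prompt, and settings are assumed, not the author's.

PROMPT = "analog style portrait of a woman in a forest"  # token goes first
NEGATIVE_PROMPT = "blur haze naked"  # per the tips above

def generate(out_path: str = "analog_portrait.png"):
    # Heavy imports are kept inside the function so the module can be
    # inspected without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "wavymulder/Analog-Diffusion",  # assumed HF repo name
        torch_dtype=torch.float16,
    ).to("cuda")
    image = pipe(
        PROMPT,
        negative_prompt=NEGATIVE_PROMPT,
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save(out_path)
```

For the sharper-but-less-filmic trade-off the author mentions, drop "blur haze" from `NEGATIVE_PROMPT` and compare the two outputs.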
Comments (17)
I love the look and feel of this model, but unfortunately I get heavily garbled faces in 95% of cases. It works better (more often) with close-up portraits, and when it works the results are amazing. Might there be something I'm doing wrong? Does the model maybe work better with non-square formats or specific schedulers? I'm using the CoreML version of the model; might that be the problem?
Same here. I got 1-2 proper images out of 100 pictures that I created with it.
same here. struggling as hell to get a decent face. what sampler do you recommend? i get very different results just by switching the sampler
This is a style model - I would try generating your desired image, then running it through this model in img2img with low denoise.
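The img2img workflow suggested above can be sketched with diffusers' `StableDiffusionImg2ImgPipeline`: render your desired image with any model, then re-render it through this one at low strength so the composition is kept and only the style changes. The repo name, strength value, and resolution are assumptions, not the author's settings:

```python
# Sketch of "run it through this model in img2img with low denoise".
# Repo name, strength, and size below are illustrative assumptions.

STRENGTH = 0.35  # low denoise: keep the composition, add the film look

def restyle(init_image_path: str, prompt: str):
    # Heavy imports kept inside the function so the module imports cheaply.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "wavymulder/Analog-Diffusion",  # assumed HF repo name
        torch_dtype=torch.float16,
    ).to("cuda")
    init = Image.open(init_image_path).convert("RGB").resize((512, 512))
    return pipe(
        prompt="analog style " + prompt,  # activation token first
        image=init,
        strength=STRENGTH,  # fraction of denoising applied to the input
        guidance_scale=7.0,
    ).images[0]
```

With `strength` this low, only roughly the last third of the denoising schedule runs, which is why faces generated correctly by another model tend to survive the restyle.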
Can you upload a ckpt version?
There already is a ckpt version my man.
@wavymulder where can we get it ?
@vikkkoo If you look in the versions tab, one is labelled 1.0-safetensors and the other is just labelled 1.0. That's the ckpt version.
@wavymulder thanks for your reply
The results I get aren't nearly as good as the examples shown. Half of them come out as illustrations. Even using the same prompt and settings recorded for the examples, the outputs I got were dramatically worse.
Was this model updated and changed?
Nope, the model is still the same as it's always been and still works the same for me.
Auto recently changed how Hi-res fix works, tho. IDK what it does now, I haven't pulled that update.
Poor at city views. Only aerial views work 🤷
This is probably the best model for generating portraits that are very realistic, almost indistinguishable from real photographs. The downside is that it's really hard to generate more fantastical or weird photos, and generating people farther away than a close-up often results in garbled faces.
May I ask about the photographs this model was trained on: what kind of film stock are they from? They seem to share the same vibe. Also, is there a plan to make a LoRA of this for Stable Diffusion XL?
Sorry for the very late reply.
The dataset has a very wide variety of film stock, lots of different types.
There will be an SDXL LoRA available soon.
The model looks good, but I'm not completely in love with it.
This janky ass SD1.5 model is still my favourite of all time
the good old days
Details
Files
analogDiffusion_10Safetensors.safetensors
Mirrors
analogDiffusion_10Safetensors.safetensors
analog-diffusion.safetensors
analog-diffusion-1.0.safetensors