Like photorealism? Try my new fine-tune 'Lomostyle'
TRIGGER WORDS ARE NOT REQUIRED
Trigger words are optional. I find they work best at the beginning of the prompt.
Known Trigger Words : "bldrnrst", "analog style", "synthwave", "snthwve style", "sci-fi", "postapocalypse", "nsfw", "sfw", "erotic", "erotica", "3d render"
Note: "nsfw", "erotic", and "erotica" can be placed into your negative prompt to generate SFW results.
There are likely more trigger words, experiment and share your findings!
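As a sketch of how these tips combine in practice: trigger words go at the start of the positive prompt, and the SFW filter terms go into the negative prompt. The helper below just assembles those strings (the subject text is a placeholder); how you pass them to your UI or pipeline is up to you.

```python
# Sketch: assembling positive/negative prompts for this model.
# Trigger words are taken from the description above; the subject is a placeholder.
TRIGGERS = ["analog style", "bldrnrst"]
SFW_NEGATIVES = ["nsfw", "erotic", "erotica"]

def build_prompts(subject, triggers=TRIGGERS, force_sfw=True):
    """Put trigger words at the beginning of the positive prompt, and
    optionally push the NSFW terms into the negative prompt for SFW output."""
    positive = ", ".join(triggers + [subject])
    negative = ", ".join(SFW_NEGATIVES) if force_sfw else ""
    return positive, negative

pos, neg = build_prompts("a photo of a city street at night")
print(pos)  # analog style, bldrnrst, a photo of a city street at night
print(neg)  # nsfw, erotic, erotica
```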
Merged Models
A list of merged models can be found below in the description of the attached model version.
Check out my other models
SDXL
Boomer Art Model - https://civarchive.com/models/163139/boomer-art-model-bam
SD1.5
Doomer Boomer - https://civarchive.com/models/118247?modelVersionId=128239
Lomostyle - https://civarchive.com/models/109923/lomostyle
Based Model - https://civarchive.com/models/83991?modelVersionId=89262
Electric Eden - https://civarchive.com/models/64355/electric-eden
Cine Diffusion - https://civarchive.com/models/50000/cine-diffusion
Project AIO - https://civarchive.com/models/18428/project-aio
WonderMix - https://civarchive.com/models/15666/wondermix
Experience - https://civarchive.com/models/5952/experience
Elegance - https://civarchive.com/models/5564/elegance
LoRA
Pant Pull Down - https://civarchive.com/models/11126/pant-pull-down-lora
Questions or Feedback?
Description
VisionGen - Realism v1.6
Complete remake of the original VisionGen_v1.0 model
VAE not required
Trigger Words not required
Models used in merge
Realistic_Vision V1.4 - https://huggingface.co/SG161222/Realistic_Vision_V1.4/tree/main
UnstablePhotoRealv.5 - https://civitai.com/models/3753/unstablephotorealv5
Vintedois-Diffusion - https://huggingface.co/22h/vintedois-diffusion-v0-1
Unvail AI 3DKX v2 - https://civitai.com/models/2504/unvail-ai-3dkx-v2
NeverEnding Dream - https://civitai.com/models/10028/neverending-dream-ned
Blade-runner-2049-v1 - https://huggingface.co/wimvanhenden/blade-runner-2049-v1
Liberty - https://civitai.com/models/5935/liberty
Fine Tuning
FAQ
Comments (19)
Sorry, I'm a noob. I know what inpainting is, but what is the separate inpainting model for? Thank you.
Having fun with Reborn so far
It's alright!
The inpainting model is to be used for inpainting in Automatic1111's webui.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#Inpainting
In short, you draw a mask over the object/subject you want to change/remove. You then type what you want to change the object/subject to in the positive prompt.
For example, let's say you have an image of a woman wearing a green shirt, but you want her to be wearing a blue shirt:
You would paint a mask over her green shirt, and in the positive prompt write something like "A photo of a woman wearing a BLUE shirt". The model will then attempt to change her shirt color to blue. If the model makes changes that are too drastic, try lowering the "Denoising Strength" slider.
Hope this helped!
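The masking step described above can be sketched without a UI: an inpainting mask is just a black image with the region to repaint in white. This toy helper builds such a mask as a nested list (1 = repaint, 0 = keep); in Automatic1111 you would draw it by hand, and in code you would pass an equivalent mask image to an inpainting pipeline.

```python
def make_mask(width, height, box):
    """Build a binary inpainting mask: 1 inside `box` (the region to
    change, e.g. the shirt), 0 everywhere else.
    box = (left, top, right, bottom), right/bottom exclusive."""
    left, top, right, bottom = box
    return [[1 if left <= x < right and top <= y < bottom else 0
             for x in range(width)]
            for y in range(height)]

# Mask a 4x2 region in a tiny 8x6 "image"
mask = make_mask(8, 6, (2, 2, 6, 4))
marked = sum(sum(row) for row in mask)
print(marked)  # 8 pixels flagged for repainting
```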
@ndimensional Oh no, I know about inpainting, but what I meant is we can already inpaint by default, so why is there a separate inpainting version? That's what I'm wondering!
@TheCAL Custom inpainting models will have different capabilities than the default SD inpainting model, such as texture, lighting, anatomy, etc. It's also useful to have an inpainting version of the main model you're using in the event that you want to fix a generation with inpainting; this maintains consistency. Think of it as taking the "style" of the main model and applying that to the default SD inpainting model.
@ndimensional Oh I see, I Imagined it meant something like that!
what does bldrnrst mean and or do
It's the trigger for the blade-runner model that was used in the merge.
https://huggingface.co/wimvanhenden/blade-runner-2049-v1
It tends to add a cinematic-like quality to the generation.
@ndimensional ooh ok thanks gotta remember to try
@ndimensional ahh cuz i was like wtf is this mean I have never seen it n e where else
Does the yaml file you use matter? Don't know much about them.
I have the default SD one (v1-inference.yaml)
thanks
It can, but I use v1-inference as well so you should be good.
I'm having a real hard time getting full body pictures with this model. I'm putting (((full body))) in the prompt and never getting full body. Any idea why?
Where in the prompt are you putting "full body"?
I usually put it early in the prompt: photo, front shot, full body, of <subject>, etc.
There might also be something else in the prompt that's forcing the model to generate portraits, such as "portrait". Using tags like standing, walking, running, etc. could also help with full body shots. Sometimes if there are a lot of tags that focus on the upper torso/face (especially breasts), the output will focus on that part of the body.
You could also try removing any trigger words if they're in use, and see if that has any effect.
It may be useful to go over your negative prompts as well, to see if there's anything in there that forces closeup or medium closeup shots.
Lastly, but possibly most importantly, would be aspect ratio. Setting your image dimensions so you have a higher height value than width can drastically change the model's output. I think I used a combination of 512x768 and 570x760 for the non-1:1 aspect ratio sample images.
This might seem like a lot, but I never ran into this problem with the model, so I wanted to lay out every possible solution I could think of.
Hopefully this helps, let me know how you make out!
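The aspect-ratio tip above is easy to automate. SD 1.5 checkpoints generally behave best when both dimensions are multiples of 64, so a small helper can turn a target portrait ratio into valid sizes; the rounding rule here is a common convention, not something specific to this model.

```python
def portrait_size(width, aspect=1.5, multiple=64):
    """Given a width, return (width, height) for a portrait image with
    height ~= width * aspect, rounded to the nearest `multiple`."""
    height = round(width * aspect / multiple) * multiple
    return width, height

print(portrait_size(512))  # (512, 768) -- one of the sample-image sizes mentioned above
```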
@ndimensional I fixed the issue for some reason Controlnet wasn't working I had to reset it now works like a charm
Could you help me find the "VisionGen_1.6-fp16" model? Thanks!
@ndimensional Thank you very much
Is it my setup or does this model randomly throw in apples or apple trees? I can't seem to avoid them even if I use "apples" or "apple trees" in the negative prompts.
do a lora version of your models too pls