I have tried to make a merge that does not rely on LoRA, LyCORIS, or negative embeddings.
The first batch of sample pictures was generated using random prompts from other models on here.
The second batch is more adult-oriented and uses some LoRAs.
The third batch will use a whole bunch of negative embeddings.
Clip skip 2 is what I tested with (I haven't tried 1, 3, 4, or higher).
I have had negative comments in the past about my posts using "help" from LoRAs, embeddings, etc., so I wanted to show what the model can do with and without them.
Have fun, and please show me some of your creations!
V2.0
I have altered the V1.0 version, incorporating new weights and a few other photorealism checkpoints.
Description
This version focuses more on photorealistic portraits. I'm sure it can do other things; I just haven't had the time to try.
Comments (21)
Definitely Top 5 of the best realism models on CivitAi. Well done!
Hi. Can you guess why I get absolutely black images with V3?
Sorry, I have no clue. Are you using A1111 or ComfyUI?
@GG_System ComfyUi. I can send you the workflow if you want.
@aiden_soler Really sorry, I am less than a novice with ComfyUI, to the point that I uninstalled it and only use A1111. @everyone, if you see this, can someone with ComfyUI experience help out please?
@aiden_soler A quick Google search mentions an issue with the latest xformers and suggests using 0.0.17. I'm not sure if it's the root cause, but it may be worth a look.
I'm also getting the occasional black image and NaN errors when using this model but in A1111. Very strange. Not using xformers either.
@focalbluebell Extremely odd; I've created over 100 images and not had one error. I'll see if I can look into it. https://www.reddit.com/r/StableDiffusion/comments/13g1yqu/a_possible_fix_for_the_nan_error/?one_tap=true
V3.0 seems to be broken. Model Toolkit reports that it is missing CLIP.
Model is 1.99 GB. Multiple model types identified: UNET-v1-BROKEN, VAE-v1-BROKEN. Model type UNET-v1-BROKEN will be used. Model components are: UNET-v1-SD. Contains 394.28 MB of junk data!
I've been seeing several checkpoints lately with this problem. Not sure what is causing it.
This checkpoint will work if you load another checkpoint (one that isn't missing CLIP) first and then load this one. The problem is that the output changes depending on which checkpoint was loaded before.
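For anyone who wants to check a download themselves before relying on Model Toolkit: a minimal, stdlib-only sketch that reads just the JSON header of a .safetensors file and reports whether any text-encoder (CLIP) weights are present. It assumes the SD1.x convention of storing the text encoder under the `cond_stage_model.` key prefix; the function name `has_clip_weights` is mine, not from any tool mentioned above.

```python
import json
import struct

def has_clip_weights(path):
    """Return True if the .safetensors checkpoint contains SD1.x CLIP keys.

    Reads only the header: the first 8 bytes are a little-endian u64 giving
    the JSON header length, followed by the header itself (per the
    safetensors format). Assumption: SD1.x checkpoints store the text
    encoder under the "cond_stage_model." prefix.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return any(k.startswith("cond_stage_model.")
               for k in header if k != "__metadata__")
```

If this returns False, the checkpoint shipped without a text encoder, which matches the "missing clip" report and explains why it only works after a good model has been loaded first (the webui keeps the previous model's CLIP in memory).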
Thanks for the heads-up. It's strange, because I merged this with SuperMerger and all three source models are fine when checked with Toolkit. I will try it with the normal checkpoint merger in A1111 and see what happens.
@GG_System I've been seeing this a lot more recently and I'm not sure what causes it. Here is another example checkpoint that Model Toolkit reports as broken. (This one isn't missing CLIP, so the webui manages to load it properly without a prior good model loaded.)
https://civitai.com/models/160268/or25d
If your checkpoint was known to be good before you uploaded, could it be something during the upload to Civit AI that causes this?
Did you fix it yet?
Same issue for me as well.
If anyone gets black image output: download the fp16 model from "runwayml/stable-diffusion-v1-5 at main" (huggingface.co) and use it in the CLIP Loader by placing the model in the clip folder (not clip_vision) in your ComfyUI models folder. As for A1111, generating an image with a working model after initial startup and then switching to this model works.
pretty cool model
How do you use it in ComfyUI?
It won't work in ComfyUI; I hadn't noticed when I made the model that something broke.
@GG_System ok thanks :)
The CLIP in the model is broken. Use the CLIP from another model (such as V2).