Source: https://huggingface.co/city96/stable-diffusion-3.5-large-gguf from city96
This is a direct GGUF conversion of stabilityai/stable-diffusion-3.5-large
As this is a quantized model, not a finetune, all the original license terms and restrictions still apply. The model files can be used with the ComfyUI-GGUF custom node.
Place the model files in ComfyUI/models/unet; see the GitHub README for further install instructions.
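The install steps above can be sketched as a short shell sequence. The ComfyUI location is an assumption (adjust COMFY to your checkout); the clone URL is the ComfyUI-GGUF repository named above.

```shell
# Sketch of the layout described above. COMFY is an assumption:
# point it at your actual ComfyUI checkout.
COMFY="${COMFY:-$HOME/ComfyUI}"

# 1) The ComfyUI-GGUF custom node goes into custom_nodes/
#    (run these manually; shown as comments here):
#    git clone https://github.com/city96/ComfyUI-GGUF "$COMFY/custom_nodes/ComfyUI-GGUF"
#    pip install --upgrade gguf

# 2) The downloaded .gguf model files go into models/unet/:
mkdir -p "$COMFY/models/unet"
echo "Place the .gguf files in: $COMFY/models/unet"
```

Load the model with the GGUF Unet Loader node that the custom node adds, in place of the standard checkpoint loader.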
Please refer to this chart for a basic overview of quantization types.
Comments (6)
What's the difference between these and the other sd3.5 GGUFs? https://civitai.com/models/879251/stable-diffusion-35-large-gguf?modelVersionId=985076
Can you do this but for 3.5 Medium? Thanks
Big thanks for this. What are the VRAM requirements for Q4, and how much does the quality differ from the non-quantized model? Is it noticeable?
Are the Q6_K files too large for an 8GB card?
Having trouble with the workflow for this; any links?
"ValueError: Failed to recognize model type!"
Error when using ForgeUI, model does not work.