Source: https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main by city96
This is a direct GGUF conversion of FLUX.1-dev. As this is a quantized model, not a finetune, all the same restrictions and original license terms still apply.
The model files can be used with the ComfyUI-GGUF custom node.
Place model files in ComfyUI/models/unet - see the GitHub readme for further install instructions.
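The placement step above can be sketched in Python. This is a minimal sketch, assuming the standard ComfyUI folder layout; the GGUF filename in the commented-out download call is hypothetical, so check the repo's file list for the exact quant you want.

```python
# Sketch of the install step, assuming the standard ComfyUI layout.
from pathlib import Path

unet_dir = Path("ComfyUI/models/unet")
unet_dir.mkdir(parents=True, exist_ok=True)  # create the target folder

# Uncomment to actually fetch the weights (requires `pip install huggingface_hub`;
# the filename below is a hypothetical example, not confirmed from the repo):
# from huggingface_hub import hf_hub_download
# hf_hub_download(repo_id="city96/FLUX.1-dev-gguf",
#                 filename="flux1-dev-Q8_0.gguf",
#                 local_dir=str(unet_dir))

print(unet_dir.resolve())
```

ComfyUI-GGUF picks the file up from `ComfyUI/models/unet` via its GGUF Unet loader node; see the GitHub readme for node setup.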
Also works with Forge as of the latest commit!
Comments (17)
I can run this on an RTX 4070 12GB, but somehow my PC can't pack the GGUF file into a ZIP
The problem was it crashed at 90% because my hard drive was full; it's fixed now!
Removed the ZIP because people found it sus; will try again tonight, or maybe someone else can put up the right file.
Lol, 62 people downloaded an empty ZIP file. Gonna try to pack it one more time on another hard drive so it at least has the right file!
Dude, pls can you reupload the main file on here? I'm using a cloud Comfy server and can only pull models from Civit, nowhere else.
@discodiffuse I'm repacking it right now; it takes 30 mins on a 12600K
@discodiffuse fixed!
OK, it should be up! 👌
GGUF is more friendly to low-spec PCs, right?
True, but this one is the heaviest; my 4070 can barely run it. I recommend getting the Q versions that Ralfinger posted for lower-end cards. Link is in Suggested Resources.
I also have a 4070 O.O
@YeiYeiArt Q8 is perfect for a 4070, I think
@JayNL But a Ti with 16GB?
Well, I'm kinda done testing F16. It works and sometimes the result is better, but it maxes out my PC so I can't do anything else. Prob will go Q8! 👌
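The Q8-vs-F16 tradeoff in the thread above can be roughly quantified: a GGUF file's size is about parameter count × bits per weight. Here is a back-of-the-envelope sketch; the ~12B parameter count and the bits-per-weight figures are approximations I'm assuming, not numbers from the repo.

```python
# Rough GGUF size estimate: bytes ≈ params * bits_per_weight / 8.
# Assumed figures: FLUX.1-dev has roughly 12B transformer parameters;
# Q8_0 stores ~8.5 bits/weight (8-bit values plus a per-block fp16 scale),
# while F16 stores a full 16 bits/weight.
PARAMS = 12e9

def gguf_size_gb(params, bits_per_weight):
    """Approximate file size in gigabytes for a given quantization."""
    return params * bits_per_weight / 8 / 1e9

for name, bits in [("F16", 16.0), ("Q8_0", 8.5), ("Q4_0", 4.5)]:
    print(f"{name}: ~{gguf_size_gb(PARAMS, bits):.1f} GB")
```

This is why F16 maxes out a 12GB card (the weights alone exceed VRAM and spill to system RAM) while Q8 fits far more comfortably.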