CivArchive
    GGUF (Flux) workflow v2.0 - v1.0
    NSFW

--- v2.0 has 3 LoRA slots, Save Image with Metadata, an optional upscale, and is just neater than the quick share ---

Quick share for those asking in the comments; it's built around this model (I used Q8):
    https://civarchive.com/models/647237?modelVersionId=724149

It can also be used with F16, which gives better quality than Q8 but is harder on your PC:
    https://civarchive.com/models/662958/flux1-dev-gguf-f16

Direct downloads for the GGUF models are here:
    https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main

😂 the provided prompt is a parody of what a Google search says "Lada" means 😂

    If Q8 is on the edge of your VRAM, you can also use Q6_K which is smaller and almost the same quality!

    Comments (16)

fuinypain
Aug 16, 2024

I'm new in Comfy, what is GGUF?

    JayNL
    Author
Aug 16, 2024 · 2 reactions

I have no idea, I just found it 30 minutes ago, but it works: put the file in your unet folder and go. You do need the custom node though; I should add it.
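For anyone following along: a minimal setup sketch, assuming a default ComfyUI folder layout and the city96/ComfyUI-GGUF custom node (the one that provides the GGUF unet loader this workflow needs); adjust paths for your install.

```shell
# Sketch only - assumes a default ComfyUI install and the
# city96/ComfyUI-GGUF custom node; adjust paths for your setup.
cd ComfyUI/custom_nodes
git clone https://github.com/city96/ComfyUI-GGUF
pip install -r ComfyUI-GGUF/requirements.txt

# The downloaded GGUF unet file then goes in the unet models folder, e.g.:
#   ComfyUI/models/unet/flux1-dev-Q8_0.gguf
```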

DominoUB
Aug 16, 2024 · 4 reactions

    It's a file format used for models, but the main thing is the gguf models are quantized to be just as small as the nf4 ones but much higher quality. Q8_0 will produce nearly identical results to the full FP16 model but will fit into 16GB VRAM, and Q4_0 is cheaper and better looking than fp8 while fitting into 8GB VRAM.

    Better compression basically.
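To put rough numbers on "better compression": a back-of-the-envelope size sketch, assuming Flux.1-dev's roughly 11.9B parameters and the standard llama.cpp bits-per-weight for each quant format (Q8_0, for instance, stores 8-bit weights plus one fp16 scale per 32-weight block, giving 8.5 bits per weight). The actual GGUF files should land close to these estimates.

```python
# Back-of-the-envelope GGUF file sizes for Flux.1-dev (~11.9B params).
# Bits per weight follow the llama.cpp block layouts, e.g. Q8_0 stores
# 8-bit weights plus one fp16 scale per 32-weight block -> 8.5 bpw.
PARAMS = 11.9e9

BITS_PER_WEIGHT = {"F16": 16.0, "Q8_0": 8.5, "Q6_K": 6.5625, "Q4_0": 4.5}

def gguf_size_gb(fmt: str, params: float = PARAMS) -> float:
    """Estimated file size in GB (1 GB = 1e9 bytes)."""
    return params * BITS_PER_WEIGHT[fmt] / 8 / 1e9

for fmt in BITS_PER_WEIGHT:
    print(f"{fmt}: ~{gguf_size_gb(fmt):.1f} GB")
# F16 comes out near 24 GB, Q8_0 near 12.6 GB, Q4_0 near 6.7 GB -
# which matches the "Q8_0 fits in 16GB, Q4_0 fits in 8GB" claim above.
```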

Eepol
Aug 16, 2024 · 2 reactions

How many iterations/sec do you get on 8 GB VRAM? I get 40 s/it on a 4060 Ti with 32 GB RAM.

    JayNL
    Author
Aug 17, 2024 · 1 reaction

I don't know, I do 5.5 s/it on 12 GB

    EepolAug 17, 2024

@JayNL 40 s/it in ComfyUI

    JayNL
    Author
    Aug 21, 2024

@Eepol that's a massive difference with a 4070 12GB

    EepolAug 21, 2024

@JayNL in Forge: 2.9 it/s

    JayNL
    Author
    Aug 21, 2024

@Eepol that sounds more logical with 16 GB; I guess your settings in Comfy were not OK

radirajjj
Oct 2, 2024 · 1 reaction

I'm also using 8 GB VRAM, so we shouldn't ask how many iterations per second; we should ask how many seconds per iteration.
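Since the thread mixes both units, it's worth spelling out that they are just reciprocals of each other; a tiny sketch using the figures quoted above:

```python
# s/it and it/s are reciprocals; slow runs read more naturally in s/it,
# fast runs in it/s. The figures are the ones quoted in this thread.
def s_per_it(it_per_sec: float) -> float:
    """Convert iterations/second to seconds/iteration."""
    return 1.0 / it_per_sec

def it_per_s(sec_per_it: float) -> float:
    """Convert seconds/iteration to iterations/second."""
    return 1.0 / sec_per_it

print(it_per_s(40.0))  # 40 s/it on the 8 GB card -> 0.025 it/s
print(s_per_it(2.9))   # 2.9 it/s in Forge -> ~0.34 s/it
```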

Eepol
Aug 17, 2024

Is Forge faster than ComfyUI on Flux?

    JayNL
    Author
    Aug 17, 2024

I don't know Forge

    EepolAug 17, 2024

@JayNL Me too, but people are saying Flux is much faster on Forge. I don't wanna waste more time, that's why I'm asking :D :)

GodAlMighty
Aug 18, 2024

I'm on Mac. Theoretically it should be faster, but I'm not seeing any speed difference even with the Q4. What am I doing wrong?

curzon739
Aug 19, 2024

I experienced that as well - I am on a 16 GB 4060 Ti (so comparing Q4_0 to fp8 dev gives roughly the same performance)

    JayNL
    Author
    Aug 21, 2024

@curzon739 even F16 is about the same speed as Q8; the difference is how much VRAM they use, which shows up as a quality difference, not a speed difference.

    Workflows
    Flux.1 D

    Details

    Downloads
    508
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/16/2024
    Updated
    5/12/2026
    Deleted
    -

    Files

    ggufFluxWorkflowV20_v10.zip

    Mirrors

CivitAI (1 mirror)