CivArchive
    FLUX.2 Dev GGUF (Low VRAM) (Simple) - v1.0

    FLUX.2 Dev GGUF workflow for ComfyUI, tested on an RTX 3060 (12 GB).

    Main Diffusion Model (GGUF): FLUX.2-dev-gguf

    Download:

    https://huggingface.co/city96/FLUX.2-dev-gguf

    Put it here: ComfyUI/models/diffusion_models/

    Note: Choose the quantization that matches your GPU VRAM:

    Q2_K   → ~13 GB file — 4 GB VRAM (slow, ~3 hours on a laptop)
    Q3_K_M → ~16 GB file — 6–8 GB VRAM
    Q4_K_M → ~20 GB file — 8–10 GB VRAM
    Q5_K_M → ~24 GB file — 12 GB VRAM (recommended for RTX 3060)
    Q8     → ~38 GB file — 16 GB+ VRAM (best quality)
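    As a sketch, the table above can drive a small download helper. The .gguf filename pattern below is an assumption (verify it against the file list on the Hugging Face repo); the script only prints the wget command so you can check the URL before running it.

```shell
# Pick a quantization from the table and build the download URL plus the
# ComfyUI target path. The filename pattern is assumed -- verify it against
# the repo's file list before downloading.
COMFY="${COMFY:-$HOME/ComfyUI}"
QUANT="Q5_K_M"                      # e.g. Q5_K_M for 12 GB VRAM
FILE="flux.2-dev-${QUANT}.gguf"     # assumed naming pattern
URL="https://huggingface.co/city96/FLUX.2-dev-gguf/resolve/main/${FILE}"
DEST="$COMFY/models/diffusion_models/$FILE"
mkdir -p "$(dirname "$DEST")"
echo wget -c "$URL" -O "$DEST"      # drop the echo to actually download
```

    Once the filename checks out, remove the leading echo (or pipe the printed line to sh); wget -c lets an interrupted download resume.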

    VAE: flux2-vae.safetensors

    Download: https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files/vae/flux2-vae.safetensors

    Put it here: ComfyUI/models/vae/


    Text Encoder: mistral_3_small_flux2_fp8.safetensors

    Download: https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files/text_encoders/mistral_3_small_flux2_fp8.safetensors

    Put it here: ComfyUI/models/text_encoders/

    Note: Use the fp8 version to save VRAM. Use bf16 if you have headroom and want slightly better quality.
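    Since both support files above have direct download URLs, a short loop can place each one in its ComfyUI subfolder. This is a sketch assuming wget is available; it prints the commands so you can review them before running anything.

```shell
# Place the VAE and text encoder under ComfyUI/models/ using the direct
# URLs given above. Prints the wget commands; drop the echo to download.
COMFY="${COMFY:-$HOME/ComfyUI}"
BASE="https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files"
for path in vae/flux2-vae.safetensors \
            text_encoders/mistral_3_small_flux2_fp8.safetensors; do
  mkdir -p "$COMFY/models/$(dirname "$path")"
  echo wget -c "$BASE/$path" -O "$COMFY/models/$path"
done
```

    The loop mirrors the split_files layout of the Comfy-Org repo onto ComfyUI's models folder, so each file lands exactly where the notes above say it belongs.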


    Required Custom Nodes

    Install via ComfyUI Manager, or clone manually into ComfyUI/custom_nodes/

    ComfyUI-GGUF https://github.com/city96/ComfyUI-GGUF
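    The manual alternative to ComfyUI Manager is a git clone plus installing the node pack's Python requirements into the environment that runs ComfyUI. The requirements.txt path is assumed from the usual custom-node layout, so check the repo README.

```shell
# Manual install of the ComfyUI-GGUF node pack; adjust COMFY to your install.
NODES="${COMFY:-$HOME/ComfyUI}/custom_nodes"
mkdir -p "$NODES"
git clone https://github.com/city96/ComfyUI-GGUF "$NODES/ComfyUI-GGUF"
# Install the node pack's Python dependencies into the same environment
# that runs ComfyUI (requirements.txt location assumed from the repo layout):
pip install -r "$NODES/ComfyUI-GGUF/requirements.txt"
```

    Restart ComfyUI after installing so the GGUF loader nodes appear in the node menu.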


    Workflows
    Flux.2 D

    Details

    Downloads: 40
    Platform: CivitAI
    Platform Status: Available
    Created: 3/31/2026
    Updated: 4/3/2026
    Deleted: -

    Files

    flux2DevGGUFLowVRAM_v10.zip

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)