CivArchive
    Qwen-Image GGUF 3Q_K_M - 10G VRAM - VAE
    NSFW

    This is a 3-bit quantized GGUF conversion of the Qwen/Qwen-Image model, released by City96 and mirrored here for convenience. The Q3_K_M variant targets GPUs with at least 10 GB of VRAM.

    • Tested VRAM Usage:

      • Ubuntu, Firefox (8 tabs): ~8.8 - 9.2 GB VRAM (CFG 3, 20 Steps, uni_pc, normal, 5.4s/it)

      • Windows 11, Brave (1 tab), MiniConda, GGUF excluded from Windows Defender: ~9.6 GB VRAM

      Tip: Drive your monitors from an integrated GPU to free up VRAM on the discrete card. Runs smoothly on Linux and Windows.
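
      The measurements above translate into a quick headroom estimate. A minimal sketch — the 10 GB card size and the peak-usage figures come from the numbers listed above; actual headroom will vary with drivers, resolution, and desktop load:

      ```python
      def vram_headroom_gb(card_gb: float, peak_usage_gb: float) -> float:
          """Free VRAM left on the card after the measured peak usage."""
          return card_gb - peak_usage_gb

      # Peak figures from the tests above (browser/desktop usage included in the totals):
      linux_peak_gb = 9.2    # Ubuntu, Firefox with 8 tabs
      windows_peak_gb = 9.6  # Windows 11, Brave with 1 tab

      for label, peak in (("Linux", linux_peak_gb), ("Windows", windows_peak_gb)):
          print(f"{label}: {vram_headroom_gb(10.0, peak):.1f} GB headroom on a 10 GB card")
      ```

      With under half a gigabyte to spare on Windows, moving display output to an integrated GPU (per the tip above) is what keeps the model from spilling into shared memory.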

    Description

    VAE for Qwen Image

    FAQ

    Checkpoint
    Other

    Details

    Downloads
    352
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/8/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    qwenImageGGUF3QKM10G_vae.safetensors

    Mirrors

    HuggingFace (94 mirrors)
    ModelScope (1 mirror)