CivArchive
    Z-ImageTurbo ❌ VISIONARY bf16/fp16/fp8 - ZIT-fp8_V0.1
    NSFW

    Release – Version 0.2 (Unsure which model for your GPU? See Rule of Thumb below.)
    What’s new?
    Since this is meant to become a semi-realism model, I pushed it further in that direction and added more detail. I also intentionally switched to new showcase samples, because different seeds simply looked better in this version. A few images were replaced as well.
    (Feedback is highly appreciated!)

    Note:
    Because this is a checkpoint/LoRA merge (I only use LoRAs that I have trained myself), adding another LoRA trained to a high epoch count can cause issues. Start with a LoRA strength of about 0.3 and increase it gradually from there.
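    Why lowering the strength helps can be seen from how LoRAs are applied. A minimal sketch, assuming standard linear LoRA merging (the numbers are hypothetical, not this model's actual weights):

```python
# Sketch of linear LoRA application: w_merged = w_base + strength * delta.
# Because this checkpoint already has LoRA deltas baked in, an extra LoRA at
# full strength can push weights too far - hence starting around 0.3.

def apply_lora(w_base: float, delta: float, strength: float) -> float:
    """Apply a LoRA weight delta to a base weight at a given strength."""
    return w_base + strength * delta

w_base = 0.5   # hypothetical weight that already contains the merged LoRAs
delta = 0.4    # hypothetical delta from an additional, strongly trained LoRA

full = apply_lora(w_base, delta, 1.0)    # large shift, risks artifacts
gentle = apply_lora(w_base, delta, 0.3)  # the recommended starting point
print(full, gentle)
```

    At strength 0.3 the extra delta shifts the merged weights by less than a third as much, which is why ramping up from there is safer than starting at 1.0.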

    Advanced tip:
    In the ModelSamplingAuraFlow node, you can adjust the value between 3.00 and 3.10. This can help if you get images with weird hands or other repeated visual glitches.
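    A quick way to apply the tip above is to sweep the range in small steps against a fixed seed. A sketch (the 3.00–3.10 range comes from the tip; the 0.02 step size is my own arbitrary choice):

```python
# Sketch: generate candidate shift values for the ModelSamplingAuraFlow node.
# Render with the same seed at each value and compare hands / repeated glitches.

def shift_candidates(lo: float = 3.00, hi: float = 3.10, step: float = 0.02):
    """Return shift values from lo to hi inclusive, rounded to 2 decimals."""
    n = round((hi - lo) / step)
    return [round(lo + i * step, 2) for i in range(n + 1)]

for shift in shift_candidates():
    # In ComfyUI you would type this value into ModelSamplingAuraFlow,
    # keep every other setting fixed, and pick the cleanest result.
    print(shift)
```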

    • bf16 Diffusion Model (fp8/fp16 coming soon, write me if you need them badly ^^)
    • No CLIP and no VAE included (ask me if you need help)
    • Recommended settings: CFG 1, 8 steps (max. 15)
    • Sampler: Euler A, Scheduler: Simple or Beta (Beta highly recommended)
    • Sample images are not upscaled and no Hi-Res Fix was used
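    The recommended settings above can be written out as a ComfyUI-style KSampler configuration. This is only a sketch of the sampler settings (field names follow ComfyUI's KSampler node, where "euler_ancestral" is Euler A), not a complete workflow:

```python
# The recommended settings from the list above, as a KSampler-style config.
ksampler_settings = {
    "cfg": 1.0,                         # CFG 1 as recommended
    "steps": 8,                         # 8 steps, up to a max of 15
    "sampler_name": "euler_ancestral",  # Euler A
    "scheduler": "beta",                # "simple" also works; "beta" recommended
    "denoise": 1.0,
}

assert 1 <= ksampler_settings["steps"] <= 15
print(ksampler_settings)
```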

    Original ComfyUI Models: Link (here you can find CLIP and VAE)


    First Release – Version 0.1
    This is my first Z-ImageTurbo checkpoint/LoRA merge release, so it’s still an early version (V0.1).

    • bf16/fp8/fp16 Diffusion Model
    • No CLIP and no VAE included (Ask me if you need help with that.)
    • Recommended settings: CFG 1, 8 steps (max. 15)
    • Sampler: Euler A, Scheduler: Simple or Beta (Beta highly recommended)
    • Sample images are not upscaled and no Hi-Res Fix was used

    Original ComfyUI Models: Link (here you can find CLIP and VAE)

    I’m still learning and improving, so future updates are planned. Feedback is highly appreciated!

    Rule of Thumb

    • NVIDIA Turing (RTX 20-series)
      → ❌ no real BF16 support, FP16 is the practical option
      Quality: usually fine, but a bit more fragile than newer formats

    • NVIDIA Ampere (RTX 30-series)
      → ✅ BF16 works well (problems? try updating your PyTorch/CUDA or fall back to fp16)
      Quality: generally very close to FP32, little noticeable loss

    • NVIDIA Ada Lovelace (RTX 40-series)
      → ✅ BF16 stable, FP8 partly possible via software
      Quality: BF16 ~ FP32; FP8 can show noticeable quality drops depending on workload

    • NVIDIA Blackwell (RTX 50-series, e.g., 5090)
      → ✅ BF16 very solid, FP8 better supported but not magic
      Quality: FP8 is usable, but there is still some quality loss in many cases... not huge, but real

    • FP32: still needs to be released by Z-Image
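    The trade-off behind the table above is range vs. precision: BF16 keeps FP32's exponent range but only ~8 mantissa bits, while FP16 has more mantissa bits but overflows above 65504. A pure-Python sketch (BF16 is simulated by truncating the low 16 bits of a float32; FP16 uses the struct module's half-precision format):

```python
import struct

def to_bf16(x: float) -> float:
    """Simulate bfloat16 by truncating a float32 to its top 16 bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

def to_fp16(x: float) -> float:
    """Round-trip through IEEE half precision ('e' format)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

big = 1.0e5
print(to_bf16(big))        # large magnitudes survive in bf16 (coarsely)
try:
    to_fp16(big)           # ...but exceed half precision's range
except (OverflowError, struct.error):
    print("fp16 overflow")
```

    This is why BF16 behaves "very close to FP32" on hardware that supports it, while FP16 is merely "the practical option" on Turing: it trades range for a little extra precision.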

    Note: You can load FP8 on almost any GPU and benefit from lower VRAM usage when loading, but on hardware without proper FP8 support it is automatically converted to FP16 or FP32 for computation. Because the original data is already quantized to FP8, this can introduce some quality loss, and there is no real FP8 compute speedup, only memory and data transfer benefits.
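    The "already quantized" point can be illustrated with a small simulation. E4M3, a common FP8 layout, keeps only 3 mantissa bits, so each stored weight is rounded at save time; upcasting to FP16/FP32 later starts from the rounded value and cannot restore the lost bits. A simplified sketch (normal numbers only, no NaN or saturation handling):

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round a float to the nearest value representable in E4M3 (simplified)."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    e = max(-6, min(8, e))       # clamp to E4M3's normal exponent range
    scale = 2.0 ** (e - 3)       # value spacing with 3 mantissa bits
    return round(x / scale) * scale

w = 0.3                    # hypothetical original weight
w8 = quantize_e4m3(w)      # the value actually stored in the fp8 file
print(w8, abs(w - w8))     # upcasting later starts from w8, not w
```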

    Description

    Checkpoint
    ZImageTurbo

    Details

    Downloads
    97
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/30/2025
    Updated
    2/15/2026
    Deleted
    -

    Files

    zImageturboVISIONARY_zitFp8V01.safetensors