    Z-ImageTurbo ❌ VISIONARY bf16/fp16/fp8 - ZIT-bf16_V0.1
    NSFW

    Release – Version 0.2 (Unsure which model for your GPU? See Rule of Thumb below.)
    What’s new?
    Since this is meant to become a semi-realism model, I pushed it further in that direction and added more detail. I also intentionally switched to new showcase samplers, since different seeds simply looked better in this version, and replaced a few images.
    (Feedback is highly appreciated!)

    Note:
    Because this is a checkpoint/LoRA merge (I only use LoRAs that I have trained myself), adding another LoRA trained to a high epoch count can cause issues. Start with a LoRA strength of about 0.3 and increase it gradually from there.
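    One way to follow that advice systematically is to sweep the strength upward from 0.3 in small increments and compare outputs on a fixed seed. A minimal sketch (the 0.1 step size is my assumption, not a rule):

    ```python
    # Candidate LoRA strengths to test, starting from the suggested 0.3
    # and increasing gradually. The 0.1 step is an assumed sweep size.
    start, stop, step = 0.3, 1.0, 0.1
    strengths = [round(start + i * step, 2)
                 for i in range(int(round((stop - start) / step)) + 1)]
    print(strengths)  # sweep from 0.3 up to 1.0
    ```

    Render the same prompt and seed at each strength and keep the highest value that doesn't introduce artifacts.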

    Advanced tip:
    In the ModelSamplingAuraFlow node, you can adjust the value between 3.00 and 3.10. This can help if you get images with weird hands or other repeated visual glitches.

    • bf16 Diffusion Model (fp8/fp16 coming soon; message me if you need them sooner ^^)
    • No CLIP and no VAE included (ask me if you need help)
    • Recommended settings: CFG 1, 8 steps (max. 15)
    • Sampler: Euler A, Scheduler: Simple or Beta (Beta highly recommended)
    • Sample images are not upscaled and no Hi-Res Fix was used

    Original ComfyUI Models: Link (here you can find CLIP and VAE)
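    The recommended settings above, collected in one place as they would appear in a ComfyUI API-format KSampler node; the field names follow ComfyUI's export format, but the exact workflow wiring is an assumption on my part:

    ```python
    # Hypothetical sketch of the recommended sampler settings in the shape
    # of a ComfyUI API-format KSampler node (wiring omitted).
    ksampler_inputs = {
        "cfg": 1.0,                         # CFG 1 as recommended
        "steps": 8,                         # 8 steps (max. 15)
        "sampler_name": "euler_ancestral",  # "Euler A"
        "scheduler": "beta",                # "simple" also works; "beta" highly recommended
        "denoise": 1.0,
    }

    # Advanced tip: shift value for the ModelSamplingAuraFlow node,
    # adjustable between 3.00 and 3.10 if hands or other repeated
    # glitches appear. 3.05 is simply the midpoint, not a recommendation.
    model_sampling_shift = 3.05
    ```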


    First Release – Version 0.1
    This is my first Z-ImageTurbo release, a checkpoint/LoRA merge, so it’s still an early version (V0.1).

    • bf16/fp8/fp16 Diffusion Model
    • No CLIP and no VAE included (Ask me if you need help with that.)
    • Recommended settings: CFG 1, 8 steps (max. 15)
    • Sampler: Euler A, Scheduler: Simple or Beta (Beta highly recommended)
    • Sample images are not upscaled and no Hi-Res Fix was used

    Original ComfyUI Models: Link (here you can find CLIP and VAE)

    I’m still learning and improving, so future updates are planned. Feedback is highly appreciated!

    Rule of Thumb

    • NVIDIA Turing (RTX 20-series)
      → ❌ no real BF16 support, FP16 is the practical option
      Quality: usually fine, but a bit more fragile than newer formats

    • NVIDIA Ampere (RTX 30-series)
      → ✅ BF16 works well (if you run into problems, update your PyTorch/CUDA or fall back to FP16)
      Quality: generally very close to FP32, little noticeable loss

    • NVIDIA Ada Lovelace (RTX 40-series)
      → ✅ BF16 stable; FP8 supported in hardware, though software support is still limited
      Quality: BF16 ~ FP32; FP8 can show noticeable quality drops depending on workload

    • NVIDIA Blackwell (RTX 50-series, e.g., 5090)
      → ✅ BF16 very solid, FP8 better supported but not magic
      Quality: FP8 is usable, but there is still some quality loss in many cases... not huge, but real

    • FP32: not yet released by the Z-Image team

    Note: You can load FP8 on basically any GPU and benefit from lower VRAM when loading, but on hardware without proper FP8 support it is automatically converted to FP16/FP32 for computation. That means you don’t get real FP8 speedups, only the memory benefit.
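    The memory side of that trade-off is simple arithmetic: bytes per parameter times parameter count. A rough sketch (the parameter count below is a placeholder, not the actual size of this model):

    ```python
    # Rough VRAM-for-weights estimate per precision, illustrating why FP8
    # halves the load footprint even on GPUs that must upcast to FP16/FP32
    # for computation. Activations, CLIP, and VAE are not included.
    BYTES_PER_PARAM = {"fp32": 4, "bf16": 2, "fp16": 2, "fp8": 1}

    def weight_vram_gb(num_params: float, dtype: str) -> float:
        """Gigabytes needed just to hold the weights in the given precision."""
        return num_params * BYTES_PER_PARAM[dtype] / 1024**3

    params = 6e9  # placeholder parameter count
    for dtype in ("fp32", "bf16", "fp16", "fp8"):
        print(f"{dtype}: {weight_vram_gb(params, dtype):.1f} GB")
    ```

    So FP8 always buys you half the load-time footprint of FP16/BF16; the compute speedup, as noted above, only materializes on hardware with real FP8 support.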

    Description

    Checkpoint
    ZImageTurbo

    Details

    Downloads
    188
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/30/2025
    Updated
    1/2/2026
    Deleted
    -

    Files

    zImageturboVISIONARY_zitBf16V01.safetensors