CivArchive
    Neta Lumina [TensorCoreFP8] - NTYM v3.5 mptc

    This page contains fp8 quantized DiT models of Neta Lumina for ComfyUI.

    It also includes an fp8 quantized Gemma 2 2B (the text encoder).

    All credit belongs to the original model author. The license is the same as the original model's.


    Update (11/27/2025): mixed precision and fp8 tensor core support (mptc).

    This is a new ComfyUI feature that adds fp8 tensor core support together with mixed precision.

    In short:

    Mixed precision: Keep important layers in BF16.

    FP8 tensor core support: On supported GPUs, much faster (30-80%) than BF16 and classic FP8 scaled models, because ComfyUI performs the calculations directly in FP8 instead of dequantizing and computing in BF16. torch.compile is recommended.
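    To make the "scaled fp8" idea above concrete, here is a minimal pure-Python sketch of per-tensor fp8 (E4M3) quantization: the tensor's absolute maximum is mapped onto the fp8 representable range via a scale factor, and each value is rounded to the nearest E4M3-representable number. The function names are illustrative, not ComfyUI's actual API; ComfyUI does this with GPU tensor ops rather than scalar Python.

    ```python
    import math

    E4M3_MAX = 448.0  # largest finite value in fp8 E4M3 (4 exponent bits, 3 mantissa bits)

    def quantize_e4m3(x: float) -> float:
        """Round a float to the nearest representable fp8 E4M3 value (saturating)."""
        if x == 0.0:
            return 0.0
        sign = -1.0 if x < 0 else 1.0
        a = abs(x)
        if a >= E4M3_MAX:
            return sign * E4M3_MAX  # saturate instead of overflowing
        # Exponent of the enclosing binade, clamped to E4M3's normal/subnormal floor of -6.
        e = max(math.floor(math.log2(a)), -6)
        # 3 mantissa bits -> representable values are spaced 2^(e-3) apart in this binade.
        step = 2.0 ** (e - 3)
        q = round(a / step) * step
        return sign * min(q, E4M3_MAX)

    def scale_for_e4m3(amax: float) -> float:
        """Per-tensor scale that maps the tensor's absolute max onto the E4M3 range."""
        return amax / E4M3_MAX

    # Quantize a weight with a per-tensor scale, then dequantize to check the error.
    w = 100.0
    s = scale_for_e4m3(abs(w))          # scale so w lands at the top of the fp8 range
    w_fp8 = quantize_e4m3(w / s)        # value as stored in the fp8 tensor
    w_back = w_fp8 * s                  # dequantized value
    ```

    A mixed-precision checkpoint simply skips this step for the layers that are most sensitive to rounding error and stores them in BF16 instead; with tensor core support, the matmuls run on the fp8 values directly, applying the scale to the output rather than dequantizing the weights first.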

    More info: https://civarchive.com/models/2172944/z-image-turbo-tensorcorefp8

    Description

    Checkpoint
    Lumina

    Details

    Downloads: 60
    Platform: CivitAI
    Platform Status: Available
    Created: 12/7/2025
    Updated: 12/10/2025
    Deleted: -

    Files

    netaLumina_ntymV35Mptc.safetensors

    Mirrors