CivArchive

    This page contains scaled FP8 quantized DiT models of Neta Lumina for ComfyUI.

    It also includes a scaled FP8 quantized Gemma 2 2B (the text encoder).

    All credit belongs to the original model author. License is the same as the original model.

    Note: Images from the BF16 and FP8 models are identical. If the image from the FP8 model changes drastically, your ComfyUI has somehow enabled FP16 mode. Lumina 2 does not support FP16, and you will get a deformed image.
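    To illustrate what "scaled FP8 quantized" means, here is a minimal numpy sketch of per-tensor scaled FP8 quantization. It is an assumption-laden toy, not ComfyUI's implementation: real models store weights in `torch.float8_e4m3fn`, while this snippet only simulates e4m3 rounding by keeping 3 mantissa bits and clamping to the format's maximum of 448.

    ```python
    import numpy as np

    FP8_E4M3_MAX = 448.0  # largest finite value representable in fp8 e4m3

    def quantize_scaled_fp8(w):
        """Per-tensor scaled fp8 quantization (simulated with numpy).

        The scale maps the weight tensor's range onto the fp8 dynamic range;
        e4m3 rounding is approximated by keeping 3 mantissa bits.
        """
        scale = np.abs(w).max() / FP8_E4M3_MAX   # per-tensor scale factor
        q = w / scale                            # bring weights into fp8 range
        m, e = np.frexp(q)                       # q = m * 2**e, |m| in [0.5, 1)
        m = np.round(m * 16.0) / 16.0            # round mantissa to 3 stored bits
        q = np.clip(np.ldexp(m, e), -FP8_E4M3_MAX, FP8_E4M3_MAX)
        return q, scale

    def dequantize(q, scale):
        # classic scaled-fp8 inference dequantizes back to a wider dtype
        return q * scale

    w = np.array([0.013, -0.402, 0.250, 0.875])
    q, scale = quantize_scaled_fp8(w)
    w_hat = dequantize(q, scale)
    ```

    Without the scale factor, small DiT weights would all collapse toward zero in fp8's coarse grid; the per-tensor scale is what keeps the quantized model's outputs close to the BF16 original.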


    Update (11/27/2025): mixed precision and fp8 tensor core support (mptc).

    This is a new ComfyUI feature that adds FP8 tensor core support, combined with scaled FP8 and mixed precision.

    In short:

    Mixed precision: keep the important layers in BF16 and quantize the rest.

    FP8 tensor core support: on supported GPUs, much faster (30~80%) than BF16 and classic scaled FP8 models, because ComfyUI does the matrix math directly in FP8 instead of dequantizing to BF16 first. torch.compile is recommended.
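    The mixed-precision idea above can be sketched as a simple partition of the weight dict. This is an illustrative assumption, not ComfyUI's actual layer selection: the key patterns (`norm`, `embed`, `bias`) are hypothetical stand-ins for whatever layers the real feature keeps in BF16.

    ```python
    import numpy as np

    # Hypothetical name patterns for precision-sensitive layers (assumption
    # for illustration; the real list is chosen by ComfyUI, not this snippet).
    HIGH_PRECISION_KEYS = ("norm", "embed", "bias")

    def split_precision(state_dict):
        """Partition a flat weight dict into a BF16-kept group and an
        FP8-quantized group, based on substring matching of layer names."""
        keep, quant = {}, {}
        for name, w in state_dict.items():
            if any(k in name for k in HIGH_PRECISION_KEYS):
                keep[name] = w      # sensitive layer: stays in BF16
            else:
                quant[name] = w     # bulk linear weight: scaled FP8
        return keep, quant

    weights = {
        "blocks.0.attn.qkv.weight": np.ones((4, 4)),
        "blocks.0.norm1.weight": np.ones(4),
        "final_embed.weight": np.ones((4, 4)),
    }
    keep, quant = split_precision(weights)
    ```

    The payoff is that the large attention/MLP matrices (the bulk of the parameters and the compute) run through FP8 tensor cores, while the few small, numerically sensitive tensors keep full BF16 accuracy.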

    More info: https://civarchive.com/models/2172944/z-image-turbo-tensorcorefp8

    Description

    This is the DiT model.

    FAQ

    Comments (7)

    krigeta · Oct 8, 2025

    Hey, as you are an expert with Lumina, can you tell me which one is good for anime LoRA training? Illustrious WAI or this one? I guess there are no ControlNets for this.

    reakaakasky
    Author
    Oct 9, 2025

    Depends on how you define "good".

    If good means easy to train and to use, then Illustrious (the original model, not WAI; it's generally not recommended to train a LoRA on a finetuned, especially "merged", model).

    Lumina, from a model perspective, is definitely better than SDXL, but it's not popular and lacks third-party tools. There are too many new and big models nowadays.

    krigeta · Oct 9, 2025

    @reakaakasky Indeed, but the future is great as Lumina has a better DiT architecture, right?

    reakaakasky
    Author
    Oct 9, 2025 · 2 reactions

    @krigeta I would say ... no. TBH I think finetuning anime models is dead, because you no longer have to.

    Just give a model several character images as reference, and it can generate the character style you want. Like Google's Banana, or Qwen Edit.

    reakaakasky
    Author
    Oct 9, 2025 · 1 reaction

    I won't be surprised if Qwen Image releases a mini version tomorrow with less than 5B params and an MoE architecture that runs super fast, with built-in editing, ControlNet, and diffusers training tool support...

    I mean, they always release multiple versions at different sizes, e.g. 5B Wan video, 0.6B Qwen LLM ... so where is the "mini" version of Qwen Image?

    krigeta · Oct 9, 2025

    @reakaakasky Damn! This sounds too good to be true, but indeed it would be amazing.

    krigeta · Oct 9, 2025

    And yeah, I am using Qwen Edit 2509, but it always blurs out the anime image and struggles to pose characters with the OpenPose I want. I've tried a lot of custom workflows, but the results are meh.

    Checkpoint
    Lumina

    Details

    Downloads
    83
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/7/2025
    Updated
    5/15/2026
    Deleted
    -

    Files

    netaLuminaFp8_ntymV3.safetensors

    Mirrors