CivArchive

    I made this model using the https://github.com/lum3on/ComfyUI-ModelQuantizer nodes and the full V48 version, so it runs better on my RX6800.

    I know this format is also needed if you want to run TorchCompile on RTX 3000-series cards, so here you have it.
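    A minimal sketch of what a conversion like this boils down to: casting a checkpoint's floating-point weights to FP8 e5m2. This assumes PyTorch 2.1+ (which provides torch.float8_e5m2); it is an illustration of the general technique, not the actual ComfyUI-ModelQuantizer node code, and the file names in the comments are hypothetical.

    ```python
    import torch

    def quantize_to_e5m2(state_dict):
        """Cast floating-point tensors to torch.float8_e5m2; leave integer
        tensors (indices, counters) untouched."""
        out = {}
        for name, tensor in state_dict.items():
            if tensor.is_floating_point():
                out[name] = tensor.to(torch.float8_e5m2)
            else:
                out[name] = tensor
        return out

    if __name__ == "__main__":
        # Demo with an in-memory tensor instead of a real checkpoint:
        sd = {"weight": torch.randn(4, 4, dtype=torch.float16)}
        q = quantize_to_e5m2(sd)
        print(q["weight"].dtype)  # torch.float8_e5m2

        # For a real .safetensors file you would load/save around this,
        # e.g. with safetensors.torch.load_file / save_file (file names
        # below are made up):
        # sd = load_file("chroma_v48_fp16.safetensors")
        # save_file(quantize_to_e5m2(sd), "chromaFp8E5m2.safetensors")
    ```
    
    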

    Description

    Conversion directly from https://civitai.com/models/1966367

    FAQ

    Comments (6)

    Unicom · Sep 18, 2025

    What are the differences with this model https://civitai.com/models/1966367?modelVersionId=2225979

    Santodan
    Author
    Sep 19, 2025

    FP8 Scaled is closer to the fp16 version; the e5m2 one is more for people with RTX 3XXX and AMD cards, so they can use torch compile.
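    The tradeoff behind that answer can be made concrete. Both FP8 variants are 8 bits: e5m2 keeps fp16's five exponent bits (wide range, coarse precision), while e4m3 spends one of those bits on an extra mantissa bit (finer precision, much smaller range). A self-contained decoder, assuming the standard bit layouts (e4m3 here in its "fn" finite-max variant, where the top exponent with mantissa 110 is still a number):

    ```python
    def decode_fp8(bits, exp_bits, man_bits, bias):
        """Decode an 8-bit value with the given exponent/mantissa split."""
        sign = -1.0 if bits >> 7 else 1.0
        exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
        man = bits & ((1 << man_bits) - 1)
        if exp == 0:  # subnormal: no implicit leading 1
            return sign * man * 2.0 ** (1 - bias - man_bits)
        return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

    # Largest finite e5m2 value: sign 0, exponent 11110, mantissa 11
    e5m2_max = decode_fp8(0b0_11110_11, exp_bits=5, man_bits=2, bias=15)

    # Largest finite e4m3fn value: sign 0, exponent 1111, mantissa 110
    e4m3_max = decode_fp8(0b0_1111_110, exp_bits=4, man_bits=3, bias=7)

    print(e5m2_max)  # 57344.0 — same exponent range as fp16
    print(e4m3_max)  # 448.0   — narrower range, but twice the mantissa steps
    ```

    That range difference is why e5m2 behaves like a truncated fp16, which is the property the torch-compile path on older cards relies on.
    
    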

    mwcircle430 · Oct 3, 2025

    Hi, may I ask which lora/model you used to make Chroma1.HD-Flash?

    Santodan
    Author
    Oct 3, 2025 · 2 reactions

    I don't make the checkpoints; they all come from the official Hugging Face or Civitai pages.
    I'm only converting them.
    I just noticed that I didn't include it in the model listing, and it seems the repo was deleted on Hugging Face.

    mwcircle430 · Oct 3, 2025

    @BigDannyPt Thanks for replying. I was just wondering since it was the only one I couldn't find anywhere. But yeah, if it was removed, then that would make sense.

    UnstableStoner · Nov 14, 2025

    Great model, thank you. Can't tell yet if lode's ChromaHD is better. I suggest trying FLUX.1 Turbo Alpha at 1.0 strength to accelerate generation.

    Checkpoint
    Chroma

    Details

    Downloads
    287
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/18/2025
    Updated
    5/3/2026
    Deleted
    -

    Files

    chromaFp8E5m2_2kQC.safetensors

    Mirrors

    Huggingface (1 mirror)