CivArchive

    I made this model using the https://github.com/lum3on/ComfyUI-ModelQuantizer nodes and the full V48 version, so it runs better on my RX 6800.

    I know the e5m2 format is also needed if you want to run TorchCompile on RTX 3000-series cards, so here you have it.

    FAQ

    Comments (6)

    Unicom
    Sep 18, 2025

    What are the differences between this model and https://civitai.com/models/1966367?modelVersionId=2225979?

    Santodan
    Author
    Sep 19, 2025

    FP8 Scaled is closer to the fp16 version; e5m2 is more for people with RTX 3XXX and AMD cards, so they can use torch compile.
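    For context on the trade-off above: e5m2 uses 1 sign bit, 5 exponent bits, and 2 mantissa bits, so it keeps the same exponent range as fp16 but with far less precision, while e4m3/scaled-fp8 variants trade range for extra mantissa bits. A minimal, illustrative pure-Python decoder (not part of any ComfyUI node; just an assumption-free sketch of the IEEE-style e5m2 bit layout) shows how a single e5m2 byte maps to a float:

    ```python
    def decode_e5m2(byte: int) -> float:
        """Decode one float8 e5m2 byte: 1 sign bit, 5 exponent bits (bias 15), 2 mantissa bits."""
        sign = -1.0 if byte & 0x80 else 1.0
        exp = (byte >> 2) & 0x1F   # 5-bit exponent field
        mant = byte & 0x03         # 2-bit mantissa field
        if exp == 0x1F:            # all-ones exponent: inf or NaN, as in fp16/fp32
            return float("nan") if mant else sign * float("inf")
        if exp == 0:               # subnormal: no implicit leading 1
            return sign * (mant / 4.0) * 2.0 ** -14
        return sign * (1.0 + mant / 4.0) * 2.0 ** (exp - 15)

    # 0x3C = 0 01111 00 -> +1.0; 0x7B = 0 11110 11 -> 57344.0, the largest finite e5m2 value
    print(decode_e5m2(0x3C), decode_e5m2(0x7B))
    ```

    Because the exponent field matches fp16, an fp16 weight converts to e5m2 by simply truncating mantissa bits, which is why this format is straightforward for older GPUs that lack native fp8 scaling support.
    
    
    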

    mwcircle430
    Oct 3, 2025

    Hi, may I ask which lora/model you used to make Chroma1.HD-Flash?

    Santodan
    Author
    Oct 3, 2025· 2 reactions

    I don't make the checkpoints; they all come from the official Hugging Face or CivitAI pages.
    I'm only converting them.
    I just noticed that I didn't include it in the model description, and it seems the repo was deleted on Hugging Face.

    mwcircle430
    Oct 3, 2025

    @BigDannyPt Thanks for replying. I was just wondering since it was the only one I couldn't find anywhere. But yeah, if it was removed, then that would make sense.

    UnstableStoner
    Nov 14, 2025

    Great model, thank you. Can't tell yet if lode's ChromaHD is better. I suggest trying FLUX.1 Turbo Alpha at 1.0 strength to accelerate generation.

    Checkpoint
    Chroma

    Details

    Downloads
    99
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/17/2025
    Updated
    5/3/2026
    Deleted
    -

    Files

    chromaFp8E5m2_dc2K.safetensors

    Mirrors

    Huggingface (1 mirror)