I made this model using the https://github.com/lum3on/ComfyUI-ModelQuantizer nodes and the full V48 version, so it would run better on my RX6800.
I know it's also needed if you want to run TorchCompile on RTX 3000 series cards, so here you have it.
Description
Conversion of https://civitai.com/models/1956921/chroma-dc-2k
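For context on what the e5m2 in the filename means: float8_e5m2 packs each weight into one byte with 1 sign bit, 5 exponent bits (bias 15), and 2 mantissa bits, trading precision for a wider dynamic range than the e4m3 variant. Here is a minimal, illustrative Python sketch of how a single e5m2 byte decodes (not the converter used for this model, just the format):

```python
def decode_e5m2(byte: int) -> float:
    """Decode one float8_e5m2 byte: 1 sign bit, 5 exponent bits (bias 15), 2 mantissa bits."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 2) & 0x1F
    mant = byte & 0x3
    if exp == 0x1F:                        # all-ones exponent encodes inf/NaN
        return sign * float("inf") if mant == 0 else float("nan")
    if exp == 0:                           # subnormal: no implicit leading 1
        return sign * (mant / 4) * 2.0 ** -14
    return sign * (1 + mant / 4) * 2.0 ** (exp - 15)

# Spot checks against the layout:
print(decode_e5m2(0x3C))  # 1.0  (sign=0, exp=15, mantissa=0)
print(decode_e5m2(0x40))  # 2.0
print(decode_e5m2(0xBC))  # -1.0
```

With only 2 mantissa bits there are just 4 values per power of two, which is why the "scaled" fp8 variants apply per-tensor scale factors to stay closer to fp16 quality.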
Comments (6)
What are the differences from this model: https://civitai.com/models/1966367?modelVersionId=2225979?
FP8 Scaled is closer to the fp16 version; the e5m2 is more for people with RTX 3XXX and AMD cards, so they can use torch compile.
Hi, may I ask which lora/model you used to make Chroma1.HD-Flash?
I don't make the checkpoints; they all come from the official Hugging Face or Civitai pages.
I only convert them.
I just noticed that I didn't include it in the model description, and it seems the repo was deleted on Hugging Face.
@BigDannyPt Thanks for replying. I was just wondering since it was the only one I couldn't find anywhere. But yeah, if it was removed, then that would make sense.
Great model, thank you. Can't tell yet if lode's ChromaHD is better. I suggest trying FLUX.1 Turbo Alpha at 1.0 strength to accelerate generation.