This is the official model page for
Chroma-DC-2K Safetensor and GGUF Quants
This model is a merge of v48-detail-calibrated and an experiment named 2k-test.
The ComfyUI workflow for the leftmost showcase image is included in the files, marked as training data.
The following is a copy of the original model page.

Hey everyone,
A while back, I posted about Chroma, my work-in-progress, open-source foundational model. I got a ton of great feedback, and I'm excited to announce that the base model training is finally complete, and the whole family of models is now ready for you to use!
A quick refresher on the promise here: these are true base models.
I haven't done any aesthetic tuning or used post-training stuff like DPO. They are raw, powerful, and designed to be the perfect, neutral starting point for you to fine-tune. We did the heavy lifting so you don't have to.
And by heavy lifting, I mean about 105,000 H100 hours of compute. All that GPU time went into packing these models with a massive data distribution, which should make fine-tuning on top of them a breeze.
As promised, everything is fully Apache 2.0 licensed—no gatekeeping.
TL;DR:
Release branch:
Chroma1-Base: This is the core 512x512 model. It's a solid, all-around foundation for pretty much any creative project. Use this one if you're planning a longer fine-tune, training at high resolution only in the final epochs so it converges faster.
Chroma1-HD: This is the high-res fine-tune of the Chroma1-Base at a 1024x1024 resolution. If you're looking to do a quick fine-tune or LoRA for high-res, this is your starting point.
Research Branch:
Chroma1-Flash: A fine-tuned version of Chroma1-Base I made to find the best way to speed up these flow-matching models. It's an experimental result in training a fast model without any GAN-based training. The delta weights can be applied to any Chroma version to make it faster (just make sure to adjust the strength).
Chroma1-Radiance [WIP]: A radically retuned version of Chroma1-Base that operates in pixel space, so it should not suffer from VAE compression artifacts.
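The delta-weight merge mentioned for Chroma1-Flash is, per tensor, just W = W_base + strength · ΔW. Here is a minimal sketch of the idea, with plain Python lists standing in for real tensors; the `apply_delta` helper and the tensor names are illustrative, not part of the release (with torch/safetensors the per-tensor arithmetic is the same):

```python
def apply_delta(base, delta, strength=1.0):
    """Merge delta weights into base weights: W = W_base + strength * dW.

    `base` and `delta` map tensor names to lists of floats (stand-ins for
    real tensors). `strength` scales the delta, which is what "adjust the
    strength" refers to when applying the Flash deltas to other versions.
    """
    return {
        name: [b + strength * d for b, d in zip(base[name], delta[name])]
        for name in base
    }

# Example: half-strength application of a delta to one tensor.
merged = apply_delta({"w": [1.0, 2.0]}, {"w": [0.5, -0.5]}, strength=0.5)
```

Lowering `strength` blends in less of the Flash behavior, which is useful when the full delta over-shoots on a heavily fine-tuned checkpoint.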
Quantization options
Alternative option: FP8 Scaled Quant (Format used by ComfyUI with possible inference speed increase)
Alternative option: GGUF Quantized (you will need to install the ComfyUI-GGUF custom node)
Special Thanks
A massive thank you to the supporters who make this project possible.
Anonymous donor whose incredible generosity funded the pretraining run and data collection. Your support has been transformative for open-source AI.
Fictional.ai for their fantastic support and for helping push the boundaries of open-source AI.
Support this project!
https://ko-fi.com/lodestonerock/
BTC address: bc1qahn97gm03csxeqs7f4avdwecahdj4mcp9dytnj
ETH address: 0x679C0C419E949d8f3515a255cE675A1c4D92A3d7
my discord: discord.gg/SQVcWVbqKx
Comments (8)
File not found :/
Copypaste ¯\_(ツ)_/¯
awesome, so v1hd was not a real success apparently
Greetings!
The model draws wonderfully in the main generation, but I get some problems in hires-fix and img2img mode: the image comes out looking like it was painted in oils. This is weird, because all the parameters are the same, and the "base" image is very clean too.
Honestly, this is not the first flux / chroma model where I've seen this effect, but this one is the best, so I decided to ask my question here.
I'm sorry for being annoying, I just can't figure it out =\
I'm sorry again
Could you please clarify:
What exactly is "an experiment named 2k-test"?
A lora? A finetune? A calibration?
And where can I find the 2k-test file itself without the merge?
I'm confused.
Where did this model of yours come from? "2K.SafeTensor" was not found in the author's model library. The HD and 2K2K test folders are all filled with ".pth" files
does the 2k test imply we can generate beyond 1024 x 1024?
Best NSFW realistic, Will post results soon