CivArchive


    Hey everyone,

    A while back, I posted about Chroma, my work-in-progress, open-source foundational model. I got a ton of great feedback, and I'm excited to announce that the base model training is finally complete, and the whole family of models is now ready for you to use!

    A quick refresher on the promise here: these are true base models.

    I haven't done any aesthetic tuning or used post-training stuff like DPO. They are raw, powerful, and designed to be the perfect, neutral starting point for you to fine-tune. We did the heavy lifting so you don't have to.

    And by heavy lifting, I mean about 105,000 H100 hours of compute. All that GPU time went into training these models on a massive, diverse data distribution, which should make fine-tuning on top of them a breeze.

    As promised, everything is fully Apache 2.0 licensed—no gatekeeping.

    TL;DR:

    Release Branch:

    • Chroma1-Base: This is the core 512x512 model. It's a solid, all-around foundation for pretty much any creative project. Pick this one if you're planning a longer fine-tune and want to train at high resolution only for the final epochs so it converges faster (see the schedule sketch after this list).

    • Chroma1-HD: This is the high-res fine-tune of Chroma1-Base at 1024x1024 resolution. If you're looking to do a quick fine-tune or LoRA at high res, this is your starting point.
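
    To make the staged-resolution idea concrete, here is a minimal sketch of what such a schedule could look like inside a training loop. This is not official Chroma training code; the resolutions and the 10% high-res fraction are placeholder values you would tune for your own run.

        # Illustrative sketch of a staged-resolution schedule: train most epochs
        # at 512x512 on Chroma1-Base, then switch to 1024x1024 for the final
        # stretch so high-res convergence comes cheap. Values are placeholders.
        def resolution_for_epoch(epoch: int, total_epochs: int,
                                 low_res: int = 512, high_res: int = 1024,
                                 high_res_fraction: float = 0.1) -> int:
            """Pick the training resolution for a given epoch (0-indexed)."""
            high_res_start = int(total_epochs * (1.0 - high_res_fraction))
            return high_res if epoch >= high_res_start else low_res

        if __name__ == "__main__":
            total = 20
            print([resolution_for_epoch(e, total) for e in range(total)])
            # -> 18 epochs at 512, then the last 2 at 1024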

    Research Branch:

    • Chroma1-Flash: A fine-tuned version of Chroma1-Base made to explore the best way to speed up these flow-matching models. It's an experiment in training a fast model without any GAN-based training. The delta weights can be applied to any Chroma version to make it faster; just be sure to adjust the strength (see the merge sketch after this list).

    • Chroma1-Radiance [WIP]: A radically re-tuned version of Chroma1-Base that operates directly in pixel space, so it shouldn't suffer from VAE compression artifacts.
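
    Since Flash ships as delta weights, applying it boils down to a weighted add on top of whichever Chroma checkpoint you're using. Here is a minimal sketch, assuming the delta is a safetensors file whose keys line up with the base checkpoint; the filenames and the 0.75 strength are placeholders, not official values, so check the model card for the real format.

        # Hypothetical sketch: merge the Chroma1-Flash delta into another Chroma
        # checkpoint with an adjustable strength. Filenames and the exact delta
        # layout are assumptions about the release, not confirmed details.
        from safetensors.torch import load_file, save_file

        BASE_PATH = "chroma1-hd.safetensors"            # placeholder filename
        DELTA_PATH = "chroma1-flash-delta.safetensors"  # placeholder filename
        STRENGTH = 0.75                                 # the "adjust the strength" knob

        base = load_file(BASE_PATH)
        delta = load_file(DELTA_PATH)

        merged = {}
        for name, weight in base.items():
            if name in delta:
                # Scale the delta before adding; lower strength = milder effect.
                merged[name] = weight + STRENGTH * delta[name].to(weight.dtype)
            else:
                merged[name] = weight

        save_file(merged, "chroma1-hd-flash-merged.safetensors")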

    Quantization options

    Special Thanks

    A massive thank you to the supporters who make this project possible.

    • The anonymous donor whose incredible generosity funded the pretraining run and data collection. Your support has been transformative for open-source AI.

    • Fictional.ai for their fantastic support and for helping push the boundaries of open-source AI.

    Support this project!

    https://ko-fi.com/lodestonerock/

    BTC address: bc1qahn97gm03csxeqs7f4avdwecahdj4mcp9dytnj
    ETH address: 0x679C0C419E949d8f3515a255cE675A1c4D92A3d7

    my discord: discord.gg/SQVcWVbqKx

    Description

    Checkpoint: Chroma

    Details

    Downloads: 600
    Platform: CivitAI
    Platform Status: Available
    Created: 3/7/2025
    Updated: 2/11/2026
    Deleted: -

    Files

    chroma_011.safetensors

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)

    Available On (1 platform)