Merged the files from https://huggingface.co/rocca/chroma-nunchaku-test/tree/main/v38-detail-calibrated-32steps-cfg4.5-1024px into one .safetensors file for people who have trouble loading it. Save it in your diffusion_models folder. Thanks to Joe Rocca for the quantization. Enjoy!
Description
Merged model files from https://huggingface.co/rocca/chroma-nunchaku-test/tree/main/v38-detail-calibrated-32steps-cfg4.5-1024px into a single file. Thanks to Joe Rocca for the quantization.
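For anyone curious how a merge like this can be done, below is a rough sketch using the safetensors library. It is not necessarily the exact script used for this upload; the folder and file names are placeholders for wherever you downloaded the shards from the Hugging Face repo.

```python
# Rough sketch: combine the downloaded shards into a single .safetensors file.
# Paths below are placeholders -- adjust them to your own download location.
from glob import glob
from safetensors.torch import load_file, save_file

merged = {}
for shard in sorted(glob("v38-detail-calibrated-32steps-cfg4.5-1024px/*.safetensors")):
    merged.update(load_file(shard))  # collect every tensor from every shard

# Write everything back out as one file, ready for the diffusion_models folder.
save_file(merged, "svdq-int4_r32-chroma_v38.safetensors")
```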
Comments (29)
Comfy says it can't detect the UNet loader.
You have to use the specific Nunchaku loaders. If you show me what WF you're using, I may be able to lend a hand.
@milt68 Just the basic Chroma one.
@Starry_Eyes OK, it has to be a WF that uses Nunchaku loaders. Try the current version of the one I posted in the suggested resources; it works out of the box if you have everything installed.
@Starry_Eyes Alternatively, you can manually replace the loaders in your simple WF with the Nunchaku ones (Nunchaku DiT Flux Loader and Nunchaku Flux LoRA Loader).
@milt68 Is there any guide on how to do that?
@Starry_Eyes It's pretty simple, really: double-click on an empty spot in your WF, search for the loader names I wrote above, add them one at a time, and then carefully connect them in place of the simple loaders.
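If it helps to see the swap concretely, here is roughly what it looks like in a workflow exported in ComfyUI's API format. This is only a sketch: the Nunchaku node class name and its input field are my best guess from ComfyUI-nunchaku, so verify them against the nodes you actually see when you search in your install.

```python
# Before: the stock diffusion-model loader node feeding the rest of the graph.
before = {
    "10": {
        "class_type": "UNETLoader",  # standard ComfyUI loader
        "inputs": {"unet_name": "svdq-int4_r32-chroma_v38.safetensors", "weight_dtype": "default"},
    },
}

# After: the Nunchaku loader takes its place; the sampler, text encoders and VAE
# keep their existing connections, they just receive the model from this node instead.
after = {
    "10": {
        "class_type": "NunchakuFluxDiTLoader",  # assumed class name -- check your install
        "inputs": {"model_path": "svdq-int4_r32-chroma_v38.safetensors"},  # field name is a guess
    },
}
```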
Wish he'd do it for the most recent release
Woohoo! This works great on my 4060 Ti. The biggest benefit is that it handles all the LoRAs I have tested, in contrast with regular Chroma. I have seen a few Nunchaku derivatives published in int4-only format. If the reason it is int4 only is that you don't have a 50xx GPU to test on, I would be thrilled to be your test bed with my 5090.
Good to hear, Optimum. It works great on my 3060 also. This is an int4 quant (svdq-int4_r32-chroma_v38.safetensors), but the filename somehow got altered when I uploaded it. In short, it shouldn't work on Blackwell cards, but feel free to test.
@milt68 Thanks, I did test: it works on my 4060 but not on my 5090, and I'm confused. I think you mean it is int4; fp4 would work the opposite way, i.e. on the 5090 but not on the 4060.
@Optimum Yep, exactly, I mixed them up.
@milt68 That is sort of a relief. I will use this on the 4060, but I am trying to find a way to create fp4 versions for Blackwell.
@Optimum Apparently you have to use a tool called DeepCompressor.
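A quick way to check which side of that line a given card is on (per the discussion above: int4 for pre-Blackwell cards like the 3060/4060, fp4/NVFP4 for 50xx Blackwell) is to look at its CUDA compute capability. A minimal PyTorch sketch:

```python
# Print the GPU's compute capability and suggest which Nunchaku quant to grab.
# Blackwell (50xx) reports major version 12; Ampere/Ada (30xx/40xx) report 8.x.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
if major >= 12:
    print("Blackwell-class GPU: look for an fp4/NVFP4 quant.")
else:
    print("Pre-Blackwell GPU: int4 quants like this one are the right fit.")
```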
How did you get it to work? It doesn't work for me.
@gwshdude If you tell us a bit about your configuration and which workflow you use, someone may be able to help.
@gwshdude If you have a working Nunchaku workflow it will just load as a Nunchaku model. However, it will not work with 50xx-family GPUs.
@milt68 idk what to say. I loaded the recommended workflow and tried replacing the specified node and loading the model. It doesn't work.
@gwshdude Idk what to say either :-) Which workflow was recommended? If it is a Nunchaku workflow there are no nodes to replace; just choose this model. What is your GPU?
Nice, man! Works perfectly fine. Any plans for the most recent Chroma version, or not really?
Dude, that's great! It has just become my main t2i! I just wish there was a version with the new Chroma!
Thanks! Same here. Quantizing for nunchaku sounds pretty complicated and I'm not sure I can find the time to attempt it.
The Nunchaku DiT loader won't load this model on my 5070 Ti :( Can you make an fp4 version?
Sorry buddy. Nunchaku quantization is very tricky because, from what I understand, it's not natively supported for Chroma (yet?). Joe Rocca hacked the code of DeepCompressor (Nunchaku's quantization tool) to quantize Chroma v38, but unfortunately made only the int4 version. Then I merged the files into a single .safetensors to make it easier to use. Until something changes and DeepCompressor supports Chroma models, there's not much that can be done.
I tried using this model with WebUI Forge - Neo, but I'm getting this error: AttributeError: 'NoneType' object has no attribute 'is_webui_legacy_model'
Nope, Nunchaku-quantized models don't work in Forge. You need a ComfyUI installation with Nunchaku for these. However, you can use regular or GGUF-quantized Chroma models in Forge.
@milt68 Yeah, I know that. But Forge - Neo (https://github.com/Haoming02/sd-webui-forge-classic/tree/neo) is a fork that supports Nunchaku-quantized models; I tested it with the Nunchaku NVFP4 version of Flux. The GitHub page says "Support Nunchaku (SVDQ) Models", but maybe I don't understand something...
@NekoYaSan I didn't know a new fork had come out that supports Nunchaku, that's great. I may do a bit of testing myself in the next few days.
@milt68 It would be really cool if you could figure out this error. I think these merged model files don't fit the Forge - Neo structure. But I'm a noob at this, so idk ¯\_(ツ)_/¯