I quantized the latest versions of lodestones' Chroma for use on lower-VRAM machines. Enjoy. The full versions are at https://huggingface.co/lodestones/Chroma/tree/main
For me it works best with Euler/Beta or DPM++ 2M/SGM Uniform at 16-20 steps, or Restart/SGM Uniform at 6-10 steps. Use CFG 2 in Forge and 2.7 in ComfyUI with res_2s/bong_tangent. For fewer steps and faster generation times, download the Hyper_LowStep (strength 1) or Chroma2Schnell (strength 0.125) LoRA from https://huggingface.co/silveroxides/Chroma-LoRA-Experiments/tree/main
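The strength values above are the usual LoRA scaling factor: the patched weight is the base weight plus strength times the low-rank update. A minimal numpy sketch of that idea (toy dimensions and random matrices, purely illustrative, not Chroma's actual layers):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                      # toy sizes; real Chroma layers are far larger
W = rng.standard_normal((d, d))  # base weight
A = rng.standard_normal((r, d))  # LoRA down-projection
B = rng.standard_normal((d, r))  # LoRA up-projection

def apply_lora(W, A, B, strength):
    # Merged weight = W + strength * (B @ A); strength scales how strongly
    # the LoRA alters the model (1 for Hyper_LowStep, 0.125 for Chroma2Schnell).
    return W + strength * (B @ A)

full  = apply_lora(W, A, B, 1.0)    # strength 1
light = apply_lora(W, A, B, 0.125)  # strength 0.125
```

So a strength of 0.125 applies only one eighth of the LoRA's full update, which is why Chroma2Schnell needs such a low value.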
Description
Quantized Chroma v1.0-HD-Q4_K_S
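Q4 quants like this one store roughly 4 bits per weight plus a small per-block scale, which is where the VRAM saving over fp16 comes from. A rough numpy sketch of block-wise 4-bit quantization in that spirit (the real Q4_K_S layout also uses super-blocks with extra scales/mins; this is only the core idea):

```python
import numpy as np

def quantize_q4(w, block=32):
    # Split weights into blocks, store one fp scale per block and a signed
    # 4-bit integer per weight (range -8..7).
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero on all-zero blocks
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_q4(w)
w_hat = dequantize_q4(q, s)
err = float(np.abs(w - w_hat).max())
# ~4 bits/weight + one scale per 32 weights vs 16 bits/weight for fp16.
```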
FAQ
Comments (8)
It works on my old, old laptop (GTX 1650), but to get it working properly I had to download the VAE and text encoder from Hugging Face.
It does require a VAE and text encoder. The standard Flux ones work fine.
@milt68 I don't know, but when I first tried to generate an image (Forge UI) I got a text_encoder error, so I downloaded t5-v1_1-xxl-encoder-Q4_K_S.gguf. Then I got a VAE error, so I downloaded ae.safetensors. Now it works perfectly.
@analogOne You probably just needed to re-select them? Anyway, glad to hear. Enjoy!
What does this mean? Error(s) in loading state_dict for T5: size mismatch for shared.weight: copying a param with shape torch.Size([256384, 4096]) from checkpoint, the shape in current model is torch.Size([32128, 4096]). I used your example workflow and it worked once, but when I tried again I got this error. I tried two different versions of the text encoder, scaled and normal. Edit: I got it to work using a third T5 text encoder (I have four). I just don't know how they differ, so it seems random.
I tried to reproduce your error but I couldn't. Glad to hear it worked eventually.
@milt68 I don't blame the checkpoint; it's just that I hit this error many times while trying to get Chroma to work. If I learn why it happened, I'll reply here with the reason.
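That size mismatch means the encoder file on disk has a different vocabulary size than the T5 the workflow expects (t5-v1_1-xxl's shared.weight is [32128, 4096]). One way to check a .safetensors encoder before loading it is to parse just its JSON header, which lists every tensor's shape without reading any weights. A stdlib-only sketch (the demo writes a tiny dummy file with a made-up 2x3 shape, not a real encoder):

```python
import json, os, struct, tempfile

def read_safetensors_shapes(path):
    # The safetensors format starts with an 8-byte little-endian header length,
    # followed by a JSON header mapping tensor names to dtype/shape/offsets.
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return {name: meta["shape"] for name, meta in header.items()
            if name != "__metadata__"}

# Build a tiny demo file in the same format (dummy 2x3 tensor of fp32 zeros).
data = bytes(4 * 2 * 3)
header = {"shared.weight": {"dtype": "F32", "shape": [2, 3],
                            "data_offsets": [0, len(data)]}}
blob = json.dumps(header).encode()
path = os.path.join(tempfile.mkdtemp(), "demo.safetensors")
with open(path, "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + data)

shapes = read_safetensors_shapes(path)
```

Running this on a real encoder file and checking whether shared.weight is [32128, 4096] would tell you which of the four files is the t5-v1_1-xxl variant.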
The link to the Chroma2Schnell model is dead, so I thought I would share an alternative link.
Chroma-LoRA-Experiments · Models


