I quantized the latest versions of Lodestone's Chroma for use on lower-VRAM machines. Enjoy. For the full versions, see https://huggingface.co/lodestones/Chroma/tree/main
For me it works best with Euler/Beta or DPM++ 2M/SGM Uniform at 16-20 steps, or Restart/SGM Uniform at 6-10 steps. Use CFG 2 in Forge and 2.7 in Comfy with res_2s/bong_tangent. For fewer steps and faster generation times, download the Hyper_LowStep (strength 1) or Chroma2Schnell (strength 0.125) LoRA from https://huggingface.co/silveroxides/Chroma-LoRA-Experiments/tree/main
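For context on the strength values above: a LoRA is a low-rank update that gets added to the base model's weights, and "strength" is simply a multiplier on that update, so 0.125 applies the Chroma2Schnell LoRA at an eighth of its trained effect. A minimal NumPy sketch of the standard merge (illustrative names and shapes, not the actual Chroma or ComfyUI code):

```python
# Sketch: how a LoRA "strength" multiplier scales the low-rank update
# before it is added to a base weight matrix. All names/shapes here are
# hypothetical, for illustration only.
import numpy as np

def apply_lora(W, A, B, strength):
    """Return W + strength * (B @ A): the standard LoRA weight merge."""
    return W + strength * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))   # base weight matrix
A = rng.standard_normal((8, 64))    # LoRA down-projection (rank 8)
B = rng.standard_normal((64, 8))    # LoRA up-projection

W_full = apply_lora(W, A, B, strength=1.0)
W_weak = apply_lora(W, A, B, strength=0.125)

# The weak merge moves the weights only 1/8 as far as the full merge.
ratio = np.abs(W_full - W).sum() / np.abs(W_weak - W).sum()
print(round(ratio, 3))  # → 8.0
```

Samplers apply the LoRA at inference rather than permanently merging it, but the strength parameter scales the contribution the same way.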
Description
The latest Chroma, unlocked and detail-calibrated, quantized to Q4_K_S. Excels at lighting, human/animal anatomy, art styles, and image clarity without needing a LoRA. To use it, update Forge to the latest version, or update Comfy and use an appropriate Chroma workflow. It also needs a T5 text encoder and a VAE (no CLIP-L). Flan-T5 and the Flux VAE work best in my experience.
Comments
Another thread is better...
If a LoRA was trained with text encoder training enabled, it will most likely show significantly different results even within the same family of checkpoints.
For example, LoRAs that modify contrast, brightness, colors, or sharpness work the same way across SDXL / Pony / ILL / NAI.
But my LoRAs for rendering low-poly designs with Voronoi patterns work differently on SDXL and Pony.
So I expect that LoRAs made for Chroma will not work as expected on Flux.1 Schnell. And I also expect a Chroma LoRA to act more strongly on the original Chroma checkpoint than on another Chroma checkpoint that wasn't derived from it... ufff, I hope you get my point.

