FP8-quantized Newbie-image for ComfyUI.
All credit belongs to the original model author. The license is the same as that of the original model.
TensorCoreFP8 (tcfp8):
Scaled fp8 + Mixed precision + Hardware fp8 support
On supported GPUs, ComfyUI will automatically run the calculations directly in FP8 instead of dequantizing and computing in BF16. torch.compile is recommended if you can get it up and running.
More info about hardware FP8: https://civarchive.com/models/2172944
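A minimal sketch of what the hardware FP8 path does, assuming PyTorch 2.4+ and an FP8-capable GPU (Ada/Hopper): the scaled FP8 weight is fed straight to an FP8 matmul kernel instead of being dequantized first. The tensor names, shapes, and per-tensor scaling below are illustrative assumptions, not taken from this checkpoint.

```python
# Illustrative sketch of "calculations in FP8 directly": a scaled FP8 matmul
# on the GPU's FP8 tensor cores via torch._scaled_mm.
# Assumes PyTorch 2.4+ and an FP8-capable GPU (SM 8.9+); names/shapes are made up.
import torch

device = "cuda"
x = torch.randn(128, 256, device=device, dtype=torch.bfloat16)   # activations
w = torch.randn(512, 256, device=device, dtype=torch.bfloat16)   # weight, row-major

# Per-tensor scales so values fit the e4m3 range (max ~448).
fp8_max = torch.finfo(torch.float8_e4m3fn).max
scale_x = x.abs().max().float() / fp8_max
scale_w = w.abs().max().float() / fp8_max

x_fp8 = (x / scale_x).to(torch.float8_e4m3fn)
w_fp8 = (w / scale_w).to(torch.float8_e4m3fn)

# Hardware FP8 GEMM; the second operand must be column-major, hence .t().
y = torch._scaled_mm(
    x_fp8, w_fp8.t(),
    scale_a=scale_x, scale_b=scale_w,
    out_dtype=torch.bfloat16,
)
print(y.shape)  # torch.Size([128, 512])
```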
Old:
Scaled fp8 + Mixed precision. Does not support hardware fp8 (no calibration data).
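For comparison, a minimal sketch of the fallback path used by this Old variant (and by GPUs without hardware FP8): the stored FP8 weight is dequantized to BF16 and the matmul then runs in BF16. Again, names, shapes, and the per-tensor scale are illustrative assumptions, not values from the checkpoint.

```python
# Illustrative sketch of the dequantize + BF16 path: weight stored as scaled FP8,
# cast back to BF16 before the matmul. Names/shapes are made up.
import torch

w = torch.randn(512, 256, dtype=torch.bfloat16)

# Quantize once: per-tensor scale so the weight fits the e4m3 range.
fp8_max = torch.finfo(torch.float8_e4m3fn).max
scale = w.abs().max().float() / fp8_max
w_fp8 = (w / scale).to(torch.float8_e4m3fn)      # what gets stored on disk

# At inference time: dequantize, then compute in BF16.
x = torch.randn(4, 256, dtype=torch.bfloat16)
w_deq = w_fp8.to(torch.bfloat16) * scale.to(torch.bfloat16)
y = x @ w_deq.t()
print(y.shape)  # torch.Size([4, 512])
```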
Gemma 3 4b:
Scaled fp8 + Mixed precision.
Jina CLIP (TBD):
Jina CLIP is very small, so quantizing it does not seem necessary.
Description
Scaled fp8 + mixed precision.
Details
Downloads: 90
Platform: CivitAI
Platform Status: Available
Created: 12/16/2025
Updated: 12/27/2025
Deleted: -
Files: newbieImageFp8_gemma34b.safetensors
Mirrors: CivitAI (1 mirror)
