ZIT - FP8 AIO & GGUF One-Click + Upscaler
Fast image generation using the Z-Image-Turbo model in FP8-quantized or GGUF format. Includes integrated LoRA loading and high-quality upscaling.
How does it work?
This workflow is designed for high performance on limited VRAM. It lets you load models directly and generate images, adding LoRAs on the fly.
If any nodes appear in red, use the ComfyUI Manager to install missing dependencies automatically.
Required Models
Depending on your hardware, download the version that best fits your VRAM and place it in the corresponding folder:
FP8 Version: place in \ComfyUI\models\checkpoints
Download: https://huggingface.co/SeeSee21/Z-Image-Turbo-AIO/resolve/main/z-image-turbo-fp8-aio.safetensors?download=true
GGUF Version: place in \ComfyUI\models\unet
Download: https://huggingface.co/vantagewithai/Z-Image-Turbo-GGUF/tree/main
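The folder rule above (FP8 checkpoints go in one folder, GGUF UNet files in another) can be sketched as a small helper. This is an illustrative snippet, not part of the workflow; the function name and the idea of routing by file extension are assumptions, while the folder paths come from the instructions above.

```python
# Sketch: route a downloaded Z-Image-Turbo file to the matching ComfyUI folder.
from pathlib import PureWindowsPath

# Destination folder per model format, as listed above.
MODEL_DIRS = {
    "fp8": r"\ComfyUI\models\checkpoints",
    "gguf": r"\ComfyUI\models\unet",
}

def target_path(filename: str) -> PureWindowsPath:
    """Pick the destination folder from the file extension (assumed convention)."""
    fmt = "gguf" if filename.lower().endswith(".gguf") else "fp8"
    return PureWindowsPath(MODEL_DIRS[fmt]) / filename

print(target_path("z-image-turbo-fp8-aio.safetensors"))
# → \ComfyUI\models\checkpoints\z-image-turbo-fp8-aio.safetensors
```

After downloading, restart ComfyUI (or refresh the node list) so the new model appears in the loader dropdown.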
Note on Upscaling: If you don't want to upscale your image, simply bypass the ImageScaleToTotalPixels node before running the workflow.
Credits
Original workflow by VantageWithAI, modified by RIO.
Description
This is an edit of the original workflow, adapted to FP8.