This is an easy-to-use, all-in-one checkpoint for Z-Image Turbo. It can do both SFW and NSFW. The demo images use nothing other than this checkpoint: no LoRA, no upscaler.
NOTE: the GGUF Q8_0 version doesn't include the CLIP or the VAE; you have to bring your own. It was made for those of you who are low on RAM. If possible, always use the fp16 version. Workflows are included in the images.
Works best with my simple ComfyUI workflow GeZITx.
Sampler: Euler
Scheduler: beta
Steps: 6-10
CFG: 1.0
VAE: built-in (not included in GGUF Q8_0)
CLIP: built-in (not included in GGUF Q8_0)
Description
Version 1. Euler, beta, CFG 1.0, steps 6-10. VAE and CLIP built-in.
Comments (33)
fp8 please
I'll see, yes. I work on a Mac, and MPS isn't really good with fp8 natively.
@veridian78 I work on a Mac too. I only want fp8 for the smaller size; on my mobile internet it's impossible to download big files.
@Unicom Alright! I made a GGUF Q8_0 version. It doesn't include the CLIP or the VAE. I won't be making an fp8 though; I had too many issues trying to get decent quality out of it.
Impressive. It's basically an enhanced version of ZIT.
Yeah... that's what I use instead of the base model. Try it with an upscaler, and it's near perfect 😊
I'm surprised at the file size. Why is it so big compared to other ZIT checkpoints?
Because it has the CLIP and VAE built in.
It's an all-in-one containing the CLIP and the VAE. It's a full checkpoint, not just the model, so there's less setup to do 😊
Can you do one that gives a 32 GB VRAM 5090 card a nice workout?
@pursuit_of_beauty I have 128 GB of unified memory on my Mac Studio. I can definitely make a bigger model (I actually use a much bigger one but haven't gotten around to publishing it; it's not ready yet).
WOW!!!! This is an amazing model! Thanks for the efforts...looking forward to future updates.
I'll try making a fp8 version of it.
by far the best Z Turbo for prompt adherence I've seen!
Looks good, but it's too big for my setup. Please release a Q8 GGUF version (not fp8).
You can actually convert it yourself. Just look for the Python script to do it; ask Claude, for example.
@veridian78 It's not that simple; you need enough RAM to load the model in order to convert it. But if the OP has 6 GB, the fp8 works.
@ferrrett33 I see. I have 128 GB of RAM, so I'll try to convert it this weekend and upload it as an option.
@veridian78 Yeah, it's a RAM issue for me; my setup is an 8 GB RAM laptop... thanks for doing it.
@TaiLong Alright! I made a GGUF Q8_0 version. It doesn't include the CLIP nor the VAE.
@ferrrett33 I made an open-source tool, https://github.com/qskousen/ggufy, that makes it easy to convert models to GGUF and uses very little RAM.
@ferretduck And thanks for making it, it's exactly what I used!
Why is it that when I want to upload images generated with this model, I can't find the model when adding resources?
Hi, which VAE and CLIP did you use? Is it possible to not include those, or will that mess up the checkpoint?
It's the qwen3_4b CLIP and the regular ae.safetensors VAE. The idea of including them was to produce a checkpoint that even beginners can use without fuss, so I'll keep them baked in.
@Melodic_Possible_582589 Alright! I made a GGUF Q8_0 version. It doesn't include the CLIP nor the VAE.
@veridian78 Thank you for the GGUF version. I'm sorry, I should've been more detailed about my regular usage. I usually use fp16 checkpoints of roughly around 12 GB, then play around with the few CLIPs and VAEs I find online. Is it possible to make an fp16 without the CLIP and VAE? Thank you.
@Melodic_Possible_582589 Unfortunately not, you'll need to extract it 🤷♂️
@veridian78 How should it be extracted?
@xiwsgg232 Easy: connect the Load Checkpoint node to a Save Model node in ComfyUI. Don't connect the VAE and the CLIP, hit run, and it will save just the model as a new checkpoint. I just don't want to maintain three different versions myself.
@veridian78 Well, this is a good approach, but it's still a bit inconvenient. I need to be able to switch between different models in the model loader at any time, but with your method I can only switch between different model loader nodes.
Could it work with 4 GB VRAM?
I'll be honest, I don't know how that works. I have no idea how VRAM <-> RAM swapping works on Windows. I'm on a Mac, which uses unified memory; I have 128 GB that gets split between RAM and VRAM. That's all I can say, sorry! The best way to know is to try it and report back!
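For anyone who wants a standalone model file instead of juggling Load Checkpoint nodes, here is a minimal Python sketch using the safetensors library. The filenames and the key prefixes ("text_encoders." and "vae.") are assumptions, not confirmed details of this checkpoint; print the keys first to see which ones actually belong to the CLIP and the VAE. You'll also need enough RAM to hold the full checkpoint while it runs.

```python
# Minimal sketch: strip the baked-in CLIP and VAE out of an all-in-one
# checkpoint so only the diffusion model remains.
from safetensors.torch import load_file, save_file

# Hypothetical filename; point this at the downloaded fp16 checkpoint.
state = load_file("zimage_turbo_allinone_fp16.safetensors")

# Assumed key prefixes for the baked-in text encoder and VAE; verify by
# inspecting state.keys() before trusting the output.
skip_prefixes = ("text_encoders.", "vae.")
model_only = {k: v for k, v in state.items() if not k.startswith(skip_prefixes)}

save_file(model_only, "zimage_turbo_model_only_fp16.safetensors")
print(f"kept {len(model_only)} of {len(state)} tensors")
```

The resulting file should load in a regular model loader node, so you can swap models without switching node types.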