
    🚀 Z-Image AIO Collection

    ⚡ Base & Turbo • All-in-One • Bilingual Text • Qwen3-4B


    ⚠️ IMPORTANT: Requires ComfyUI v0.11.0+

    📥 Download ComfyUI


    ✨ What is Z-Image AIO?

    Z-Image AIO is an All-in-One repackage of Alibaba Tongyi Lab's 6B-parameter image generation models.

    Everything is integrated:

    • ✅ VAE already built in

    • ✅ Qwen3-4B Text Encoder included

    • ✅ Just download and generate!


    🎯 Available Versions


    🔥 Z-Image-Turbo-AIO (8 Steps • CFG 1.0)

    Ultra-fast generation for production & daily use


    ⚫ NVFP4-AIO (7.8 GB) 🆕

    🎯 ONLY for NVIDIA Blackwell GPUs (RTX 50xx)!
    ⚡ Optimized for maximum speed
    💾 Smallest file size
    🚀 FP4 precision, blazing fast
    

    Perfect for: RTX 5070, 5080, 5090 owners who want maximum speed


    🟡 FP8-AIO (10 GB) ⭐ Recommended

    ✅ Best balance of size & quality
    ✅ Works on 8GB VRAM
    ✅ Fast downloads
    ✅ Ideal for most users
    

    Perfect for: Daily use, testing, RTX 3060/4060/4070


    🔵 FP16-AIO (20 GB)

    💾 Same file size as BF16
    🔄 ComfyUI auto-casts to BF16 for compute
    ⚠️ Does NOT enable FP16 compute mode
    📦 Alternative download option

    Note: Z-Image does not support FP16 compute - activation values exceed FP16's max range, causing NaN/black images. Weights are cast to BF16 during inference regardless of file format.

    Perfect for: Alternative to BF16 download (identical inference behavior)


    🌟 BF16-AIO (20 GB) ⭐

    ✅ BFloat16 full precision
    ✅ Absolute best quality
    ✅ Professional projects
    ✅ Also works on 8GB VRAM
    

    Perfect for: Professional work, maximum quality


    🎨 Z-Image-Base-AIO (28-50 Steps • CFG 3-5)

    Full creative control for pros & LoRA training


    🟡 FP8-AIO (10 GB)

    ✅ Efficient for daily use
    ✅ Full CFG control
    ✅ Negative prompts supported
    ✅ 8GB VRAM compatible
    

    Perfect for: Daily work with full control


    🔵 FP16-AIO (20 GB)

    💾 Same file size as BF16
    🔄 ComfyUI auto-casts to BF16 for compute
    ⚠️ Does NOT enable FP16 compute mode
    📦 Alternative download option
    

    Note: See technical explanation in FAQ below.

    Perfect for: Alternative to BF16 download (identical inference behavior)


    🌟 BF16-AIO (20 GB) ⭐

    ✅ Maximum quality
    ✅ Ideal for LoRA training
    ✅ Professional projects
    ✅ Highest precision
    

    Perfect for: LoRA training, professional work


    🆚 Turbo vs Base: When to Use Which?


    ⚡ Use TURBO when:

    ⚡ Speed is the priority → 8 steps = 3-10 seconds
    📸 Production workflows → Consistent high quality
    💾 Quick iterations → Rapid prototyping
    🎯 Simple prompts → Less complex scenes
    

    🎨 Use BASE when:

    🎨 Creative exploration → Higher diversity
    🔧 LoRA/ControlNet development → Undistilled foundation
    📝 Complex prompting → Full CFG control
    🚫 Negative prompts needed → Remove unwanted elements
    

    โš™๏ธ Recommended Settings


    โšก Turbo Settings (incl. NVFP4)

    ๐Ÿ“Š Steps: 8
    ๐ŸŽš๏ธ CFG: 1.0 (don't change!)
    ๐ŸŽฒ Sampler: res_multistep OR euler_ancestral
    ๐Ÿ“ˆ Scheduler: simple OR beta
    ๐Ÿ“ Resolution: 1920ร—1088 (recommended)
    ๐Ÿšซ Negative Prompt: โŒ Not used!
    

    ๐ŸŽจ Base Settings

    ๐Ÿ“Š Steps: 28-50
    ๐ŸŽš๏ธ CFG: 3.0-5.0 (start with 4.0)
    ๐ŸŽฒ Sampler: euler โญ OR dpmpp_2m
    ๐Ÿ“ˆ Scheduler: normal โญ OR karras
    ๐Ÿ“ Resolution: 512ร—512 to 2048ร—2048
    ๐Ÿšซ Negative Prompt: โœ… Fully supported!
    
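For scripting or batch jobs, the two presets above can be captured as plain dictionaries. This is a minimal sketch; the key names and the `settings_for` helper are my own, not ComfyUI node fields:

```python
# Recommended sampler presets from this page, as plain dicts.
# Key names are illustrative; map them onto your own workflow tooling.
PRESETS = {
    "turbo": {  # incl. NVFP4
        "steps": 8,
        "cfg": 1.0,                    # don't change!
        "sampler": "res_multistep",    # or "euler_ancestral"
        "scheduler": "simple",         # or "beta"
        "resolution": (1920, 1088),    # recommended
        "negative_prompt": False,      # not used by Turbo
    },
    "base": {
        "steps": 28,                   # 28-50
        "cfg": 4.0,                    # 3.0-5.0, start at 4.0
        "sampler": "euler",            # or "dpmpp_2m"
        "scheduler": "normal",         # or "karras"
        "resolution": (1024, 1024),    # anywhere from 512x512 to 2048x2048
        "negative_prompt": True,       # fully supported
    },
}


def settings_for(model_file: str) -> dict:
    """Pick a preset based on the checkpoint filename."""
    return PRESETS["turbo" if "Turbo" in model_file else "base"]
```

Turbo's CFG of 1.0 effectively disables classifier-free guidance, which is also why negative prompts have no effect there.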

    📊 Quick Overview


    Turbo Versions

    ⚫ NVFP4 │ 7.8 GB  │ RTX 50xx only  │ Max Speed 🆕
    🟡 FP8   │ 10 GB   │ 8GB VRAM       │ Recommended ⭐
    🔵 FP16  │ 20 GB   │ → BF16 compute │ See FAQ ⚠️
    🌟 BF16  │ 20 GB   │ 8GB VRAM       │ Max Quality ⭐
    

    Base Versions

    🟡 FP8   │ 10 GB   │ 8GB VRAM       │ Efficient
    🔵 FP16  │ 20 GB   │ → BF16 compute │ See FAQ ⚠️
    🌟 BF16  │ 20 GB   │ 8GB VRAM       │ LoRA Training ⭐
    

    💡 Prompting Guide


    ✅ Good Example:

    Professional food photography of artisan breakfast plate. 
    Golden poached eggs on sourdough toast, crispy bacon, fresh 
    avocado slices. Morning sunlight creating warm glow. Shallow 
    depth of field, magazine-quality presentation.
    

    โŒ Bad Example:

    breakfast, eggs, bacon, toast, food, morning, plate
    

    ๐Ÿ“ Tips

    DO:

    • โœ… Use natural language

    • โœ… Be detailed (100-300 words)

    • โœ… Describe lighting & mood

    • โœ… Specify camera angle

    • โœ… English OR Chinese (or both!)

    DON'T:

    • โŒ Tag-style prompts (tag1, tag2, tag3)

    • โŒ Very short prompts (under 50 words)

    • โŒ Negative prompts with Turbo


    ๐ŸŒ Bilingual Text Rendering


    English:

    Neon sign reading "OPEN 24/7" in bright blue letters 
    above entrance. Modern sans-serif font, glowing effect.
    

    Chinese:

    Traditional tea house entrance with sign reading 
    "古韵茶坊" in elegant gold Chinese calligraphy.
    

    Both:

    Modern cafe with bilingual sign. "Morning Brew" in 
    white script above, "晨曦咖啡" in Chinese below.
    

    📥 Installation


    Step 1: Download

    Choose your version based on:

    • GPU: RTX 50xx → NVFP4 possible

    • VRAM: 8GB → FP8 recommended

    • Purpose: LoRA training → Base BF16
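The decision list above can be walked through as a tiny helper. The function and its arguments are purely illustrative; only the returned file names come from this page:

```python
def pick_version(gpu_series: int, vram_gb: int, lora_training: bool = False) -> str:
    """Suggest an AIO build following the decision list above.

    gpu_series: NVIDIA RTX series number, e.g. 30, 40, 50.
    """
    if lora_training:
        return "Z-Image-Base-BF16-AIO"       # undistilled, highest precision
    if gpu_series >= 50:
        return "Z-Image-Turbo-NVFP4-AIO"     # Blackwell-only FP4 build
    if vram_gb <= 8:
        return "Z-Image-Turbo-FP8-AIO"       # recommended default
    return "Z-Image-Turbo-BF16-AIO"          # headroom for maximum quality
```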


    Step 2: Place File

    ComfyUI/models/checkpoints/
    └── Z-Image-Turbo-FP8-AIO.safetensors
    

    Step 3: Load & Generate

    1. Open ComfyUI (v0.11.0+!)

    2. Use "Load Checkpoint" node

    3. Select your AIO version

    4. Generate!

    No separate VAE or Text Encoder needed!
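If you want to confirm that the VAE and text encoder really are bundled, you can list the tensor names without loading the model: a .safetensors file begins with an 8-byte little-endian header length followed by a JSON header. The exact key prefixes inside the AIO file are an assumption here:

```python
import json
import struct
from collections import Counter


def safetensors_prefixes(path: str) -> Counter:
    """Count tensor names by top-level prefix, reading only the JSON header."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    names = [k for k in header if k != "__metadata__"]
    return Counter(name.split(".")[0] for name in names)


# Example (prefixes such as "vae" or "text_encoders" are assumptions):
# safetensors_prefixes("ComfyUI/models/checkpoints/Z-Image-Turbo-FP8-AIO.safetensors")
```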


    ๐Ÿ™ Credits


    Original Model

    ๐Ÿ‘จโ€๐Ÿ’ป Developer: Tongyi Lab (Alibaba Group)
    ๐Ÿ—๏ธ Architecture: Single-Stream DiT (6B parameters)
    ๐Ÿ“œ License: Apache 2.0
    

    ๐Ÿ”— Z-Image Base: https://huggingface.co/Tongyi-MAI/Z-Image

    ๐Ÿ”— Z-Image Turbo: https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

    ๐Ÿง  Text Encoder: https://huggingface.co/Qwen/Qwen3-4B


    📈 Version History


    v2.2 - FP16 Clarification

    📝 Updated FP16 descriptions for technical accuracy
    ⚠️ Clarified: FP16 weights ≠ FP16 compute
    🔄 FP16 files are cast to BF16 during inference
    

    v2.1 - NVFP4 Release 🆕

    ➕ Z-Image-Turbo-NVFP4-AIO (7.8GB)
    ⚡ Optimized for NVIDIA Blackwell (RTX 50xx)
    🚀 Maximum speed generation
    

    v2.0 - Base AIO Release

    ➕ Z-Image-Base-BF16-AIO
    ➕ Z-Image-Base-FP16-AIO
    ➕ Z-Image-Base-FP8-AIO
    🔄 ComfyUI v0.11.0+ support
    📝 Qwen3-4B Text Encoder
    

    v1.1 - FP16 Added

    ➕ Z-Image-Turbo-FP16-AIO
    🔧 Wider GPU compatibility
    

    v1.0 - Initial Release

    ✅ Z-Image-Turbo-FP8-AIO
    ✅ Z-Image-Turbo-BF16-AIO
    ✅ Integrated VAE + Text Encoder

    โ“ FAQ


    Q: Which version should I choose?

    RTX 50xx + Speed โ†’ NVFP4 ๐Ÿ†•
    Most users       โ†’ Turbo FP8 โญ
    Full precision   โ†’ BF16 โญ
    LoRA Training    โ†’ Base BF16
    

    Q: Turbo or Base?

    Fast & simple    โ†’ Turbo โšก
    Full control     โ†’ Base ๐ŸŽจ
    

    Q: Will NVFP4 work on my RTX 4090?

    โŒ No! NVFP4 is only for RTX 50xx (Blackwell architecture).

    Use FP8 instead for RTX 40xx and older.


    Q: Do I need separate VAE/Text Encoder?

    โŒ No! Everything is already integrated.

    Just Load Checkpoint and go!


    Q: Does it work on 8GB VRAM?

    ✅ Yes! All versions work on 8GB VRAM.

    (NVFP4 requires an RTX 50xx regardless of VRAM)


    โš ๏ธ Q: What about FP16 for older GPUs (RTX 2000/3000)?

    Important technical clarification:

    Z-Image does NOT support FP16 compute type. Here's why:

    ๐Ÿ“Š Technical reason:
    - FP16 max value: ~65,504
    - BF16 max value: ~3.39e+38 (same as FP32)
    - Z-Image's activation values exceed FP16's range
    - Result: Overflow โ†’ NaN โ†’ Black images
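The overflow behaviour is easy to reproduce. A small NumPy sketch (NumPy has no bfloat16 type, so only the FP16 half is shown; the values are illustrative, not real Z-Image activations):

```python
import numpy as np

with np.errstate(over="ignore", invalid="ignore"):
    # float16 tops out around 65,504; a large activation doubled overflows.
    act = np.float16(60000.0)
    y = act * np.float16(2.0)      # exceeds the float16 range -> inf
    nan_val = y - y                # inf - inf propagates as NaN

print(np.isinf(y))                 # True: the value has overflowed
print(np.isnan(nan_val))           # True: this is how a frame goes black

# The same math is safe in float32; bfloat16 shares float32's
# exponent range, which is why ComfyUI casts to BF16 instead.
z = np.float32(60000.0) * np.float32(2.0)
print(np.isfinite(z))              # True
```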
    

    What actually happens:

    • ComfyUI automatically casts weights to BF16 for computation

    • You can see this in logs: "model weight dtype X, manual cast: torch.bfloat16"

    • "Weight dtype" (file format) ≠ "Compute dtype" (actual calculation)

    For RTX 20xx users (no native BF16):

    • BF16 is emulated via FP32: slower, but it works

    • There is no way to run Z-Image in true FP16 compute

    • FP8 with CPU offload may be a better option for limited VRAM

    TL;DR: FP16 and BF16 files behave identically during inference. Choose based on download preference, not GPU compatibility.


    🚀 Get Started Now!

    Download → Load Checkpoint → Generate!

    Recommended versions:

    • 🟡 FP8 for most users (best size/quality balance)

    • 🌟 BF16 for maximum quality

    • ⚫ NVFP4 for RTX 50xx speed

    All versions work on 8GB VRAM.


    Happy generating! 🎨

    Description

    ! Update ! FP16 version released

    Why an FP16 version?

    FP16 is natively supported by most older GPUs, especially NVIDIA GPUs below the 4000 series. This makes the FP16 version the most compatible choice for users running older hardware.

    NVIDIA GTX 1000 / RTX 2000 / RTX 3000 / RTX 4000

    The other two versions, BF16 and FP8, are recommended if you are using an NVIDIA 4000 series GPU or newer, as these architectures are optimized for those formats and can take better advantage of them.

    In short:

    • FP16 → best compatibility for older GPUs

    • BF16 / FP8 → best choice for NVIDIA 4000 series and newer

    This update simply gives everyone the option to use the version that fits their hardware best.


    Comments (22)

    Miscc · Dec 31, 2025 · 2 reactions
    CivitAI

    Thank you for the update! <3

    Unhearing3490274 · Jan 4, 2026 · 3 reactions
    CivitAI

    One of the only AIOs I've tried - I am quite impressed. I will mess around with the FP8 for a bit.

    SeeSeeLP
    Author
    Jan 4, 2026 · 1 reaction

    Thank you so much for the feedback :-)

    cynic2010 · Jan 10, 2026 · 3 reactions
    CivitAI

    @SeeSeeLP

    the best AIO Pack PERIOD!!!!

    much love and keep it going 🌞❤️👍

    SeeSeeLP
    Author
    Jan 26, 2026 · 1 reaction

    Thank you so much, cynic2010! 🥰
    Your kind words really made my day. I'm super happy you're enjoying the AIO pack — much love back to you! 🌞❤️

    kengdie · Jan 13, 2026 · 5 reactions
    CivitAI

    On Civitai, your fp8 is in my opinion the best right now: clean faces, and the characters' limbs come out well. Thank you!

    SeeSeeLP
    Author
    Jan 26, 2026

    Thank you! I'm really glad you like the FP8 model, hope you have fun with it! 😄✨

    LuminalArt · Jan 24, 2026 · 3 reactions
    CivitAI

    Anyone else getting a BSOD when trying to run any of these? Using a 4060

    SeeSeeLP
    Author
    Jan 26, 2026

    Hi @TeKniKo Iโ€™m actually using an RTX 4060 myself and havenโ€™t run into any BSOD issues. Itโ€™s likely something with your settings. You might want to check temperatures, your power supply, or try running a fresh ComfyUI installation in a separate folder to see if that helps.

    LuminalArt · Jan 26, 2026

    @SeeSeeLP I figured out the issue. I was using base ForgeUI, which doesn't support ZIT. I had to use a different branch. Now I'm addicted to the quality lol.

    SeeSeeLP
    Author
    Jan 26, 2026 · 1 reaction

    @TeKniKo Nice, glad you figured it out! 😄
    Yeah, base ForgeUI can be a bit tricky with ZIT support. Happy to hear it's working now — and welcome to the addiction, the quality really hits hard 😈🔥
    Have fun generating!

    rivdemon1221554 · Jan 26, 2026 · 3 reactions
    CivitAI

    Does the Qwen 8B text encoder work with z-image, or only the smaller one?

    SeeSeeLP
    Author
    Jan 26, 2026

    Hi @rivdemon1221554 , I havenโ€™t tested this myself, but I donโ€™t think it will work. Z-Image uses Qwen3-4B as the text encoder, not the Qwen3-8B version, so the 8B encoder is likely not supported there. That said, this is just my assumption since I havenโ€™t tested it directly.

    rivdemon1221554 · Jan 26, 2026

    @SeeSeeLP Yeah, it doesn't work for me. It's a shame, Z-Image is fast but it just doesn't understand the level of detail that FLUX Klein 9b and especially Qwen 2512 do. I get people hate slow stuff, but I've tested multiple character images with all 3, and I see now there's definitely a reason Z-Image is faster and cheaper.

    SeeSeeLP
    Author
    Jan 28, 2026 · 6 reactions
    CivitAI

    !!!Update!!!

    The AIO Base version is here!

    To Do:

    Upload more FP8/FP16/BF16 images

    Upload BF16 and FP16 versions

    Workflow for the base versions will be uploaded soon

    SeeSeeLP
    Author
    Jan 28, 2026

    I haven't tested everything extensively yet, but all the AIO versions are running stably.

    lonecatone23 · Jan 28, 2026 · 1 reaction

    So this is a checkpoint? I've got a workflow for that. Be advised it will give you "weight" errors when it tries to load the model for detailers (assuming you aren't using this to detail).

    x648 · Jan 28, 2026 · 1 reaction

    Waiting online and looking forward to the BF16 Base-AIO 😀

    alucardnoir941 · Jan 28, 2026 · 1 reaction

    Thank you. These AIOs are really useful for Stability Matrix.

    syphonfilterargAI · Jan 28, 2026 · 5 reactions
    CivitAI

    Hi! You can raise it to Tensor 😁

    SeeSeeLP
    Author
    Jan 28, 2026

    Yes, that's also on my agenda. 😉