CivArchive
    Z-Image Base + Turbo Segmented Rendering Workflow - v1.0
    Preview 130175807

    This ComfyUI workflow is designed for segmented rendering with Z-Image Base and Z-Image Turbo. Instead of using a single model for the entire generation, it splits sampling into stages and lets Z-Image Base and Z-Image Turbo handle different parts of the render. The goal is to combine the stronger global structure and composition of Z-Image Base with the faster, sharper, more detail-oriented finishing behavior of Z-Image Turbo.

    The core idea is simple: use Z-Image Base to build the image foundation during the earlier high-noise stage, then hand the latent over to Z-Image Turbo for the later low-noise stage. This helps when a single model is not enough on its own: Turbo alone is fast but not always strong enough for complex composition, while Base alone is more stable but slower and less efficient for final iteration. By separating the render into stages, the workflow gives creators more control over composition, detail, speed, and final polish.

    The workflow is built around two Z-Image models. The first model route uses z_image_bf16.safetensors as the Base model. This route is responsible for the main structure, subject placement, scene logic, atmosphere, and broad visual composition. The second model route uses z_image_turbo_bf16.safetensors as the Turbo model. This route is used for continuation, refinement, and detail strengthening after the Base model has already established the image direction.

    The workflow uses qwen_3_4b.safetensors as the text encoder and ae.safetensors as the VAE. The prompt is encoded through CLIPTextEncode, then passed into CFGGuider. The sampling process is handled through RandomNoise, BasicScheduler, SplitSigmas, DetailDaemonSamplerNode, SamplerEulerAncestral, and SamplerCustomAdvanced. This structure gives the workflow a more technical and controllable sampling chain than a normal KSampler-only setup.

    A key part of this workflow is SplitSigmas. The workflow generates a sigma schedule, then splits it into a high-sigma section and a low-sigma section. The high-sigma section represents the earlier generation stage, where the model is still deciding major image structure and composition. The low-sigma section represents the later refinement stage, where the image is already formed and the model mainly improves detail, texture, edge quality, lighting, and surface finish.
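    The split can be sketched in plain Python. The schedule below is an illustrative Karras-style curve with made-up default constants, not Z-Image's real values; the split mirrors SplitSigmas' behavior, where the boundary sigma is shared by both halves.

```python
import numpy as np

def make_sigma_schedule(n_steps, sigma_max=14.6, sigma_min=0.03, rho=7.0):
    """Karras-style sigma schedule, high noise to low noise, ending in 0.
    The constants are illustrative defaults, not Z-Image's real values."""
    ramp = np.linspace(0, 1, n_steps)
    sigmas = (sigma_max**(1/rho) + ramp * (sigma_min**(1/rho) - sigma_max**(1/rho)))**rho
    return np.append(sigmas, 0.0)

def split_sigmas(sigmas, step):
    """Mirror of SplitSigmas: the boundary sigma appears in both halves,
    so the second stage resumes exactly where the first stage stopped."""
    return sigmas[:step + 1], sigmas[step:]

sigmas = make_sigma_schedule(20)
high, low = split_sigmas(sigmas, 12)   # Base runs `high`, Turbo runs `low`
```

    Because the boundary value is duplicated, Turbo's first step starts from exactly the noise level at which Base stopped.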

    In this workflow, Z-Image Base handles the earlier stage. This is useful because the Base model is better suited for establishing the image’s overall logic. It can help create more stable subject placement, stronger scene coherence, better spatial layout, and a more complete starting image. For complex prompts, surreal scenes, character-based illustrations, fashion photography concepts, product-like compositions, or cinematic visuals, this Base stage acts like the foundation pass.

    After the Base stage, the workflow passes the denoised latent into the Turbo stage. Z-Image Turbo then works on the later stage using the low-sigma portion. This stage can improve the final visual quality without rebuilding the image from scratch. It can sharpen local texture, strengthen lighting, improve object edges, add surface detail, and make the result cleaner and more finished. This is the part that makes the workflow feel like a segmented render rather than a normal one-pass generation.

    The workflow also uses DetailDaemonSamplerNode in the sampling chain. DetailDaemon is useful for controlling detail emphasis during generation. In the Base stage, it can help the model produce richer structure and stronger mid-frequency details. In the Turbo stage, it can help refine texture and micro-details without requiring a full redraw. This makes the workflow useful for images that need a more polished and high-density look.
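    DetailDaemon's exact adjustment is configurable, but the idea can be sketched roughly: nudge the sigma the model is shown at each step while leaving the actual step sizes untouched, so the model denoises a little less aggressively mid-schedule and preserves more fine detail. The triangular ramp and parameter names below are assumptions for illustration, not the extension's real implementation.

```python
import numpy as np

def detail_daemon_adjustment(n_steps, amount=0.1, start=0.2, end=0.8):
    """Per-step multiplier for the sigma the model is *shown* (actual step
    sizes stay unchanged). Values below 1.0 make denoising less aggressive,
    leaving more fine detail. Triangular ramp = illustrative assumption."""
    t = np.linspace(0.0, 1.0, n_steps)
    mid = (start + end) / 2
    ramp = np.clip(1.0 - np.abs(t - mid) / max(mid - start, 1e-8), 0.0, 1.0)
    return 1.0 - amount * ramp

adj = detail_daemon_adjustment(11)  # strongest effect mid-schedule, none at the ends
```

    Higher `amount` values correspond to the "higher detail setting" mentioned above: more mid-schedule detail retention, at the risk of noise or harsh texture.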

    The first stage uses Z-Image Base with a stronger guidance value and a higher detail setting. This helps the image form with enough visual weight. The workflow also uses FlowMatchEulerDiscreteScheduler and Euler Ancestral-style sampling, which gives the render a more controlled progressive structure. The Base stage produces a denoised latent output that is then reused instead of being discarded.
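    For readers unfamiliar with ancestral sampling, a single Euler-ancestral step in the k-diffusion convention can be sketched as below. `denoised` stands in for the model's prediction, and the eta = 1 noise split is the standard ancestral formula; Z-Image's flow-matching parameterization may differ, so this is a toy sketch of the idea, not the node's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_ancestral_step(x, sigma, sigma_next, denoised):
    """One Euler-ancestral step in the k-diffusion convention (eta = 1).
    `denoised` stands in for the model's prediction of the clean image."""
    # Split the target noise level into a deterministic part (sigma_down)
    # and a freshly injected part (sigma_up).
    sigma_up = min(sigma_next,
                   np.sqrt(sigma_next**2 * (sigma**2 - sigma_next**2) / sigma**2))
    sigma_down = np.sqrt(sigma_next**2 - sigma_up**2)
    d = (x - denoised) / sigma            # derivative estimate
    x = x + d * (sigma_down - sigma)      # deterministic Euler move
    if sigma_next > 0:
        # Re-inject noise: this is what makes the sampler "ancestral".
        x = x + rng.standard_normal(x.shape) * sigma_up
    return x
```

    The noise re-injection at every step is why ancestral samplers keep exploring variations deep into the schedule, which pairs naturally with a detail-focused finishing stage.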

    The second stage uses Z-Image Turbo with its own CFGGuider and DetailDaemon settings. This stage is more focused on speed and detail. Because the latent already contains the major image structure, the Turbo model does not need to solve the entire composition from noise. Instead, it can concentrate on making the result sharper, cleaner, and more visually complete.

    The workflow also includes an optional image scaling and second refinement section. After decoding the intermediate output, the workflow uses ImageScaleBy with a 1.5x scale setting. This allows the generated result to be enlarged and then refined again. This is useful when the image looks good but needs more resolution, cleaner detail, and stronger final output quality. The scaled image can be encoded back into latent space and passed through another refinement stage.
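    A minimal sketch of what the 1.5x pass implies for resolution, assuming dimensions are snapped to a multiple of 8 so the scaled image re-encodes cleanly into latent space (latents are 1/8 resolution). The snapping rule is an assumption for illustration, not necessarily what ImageScaleBy itself does.

```python
def scaled_size(width, height, factor=1.5, multiple=8):
    """Target dimensions for the upscale pass, snapped to a multiple of 8 so
    the scaled image re-encodes cleanly into latent space. The snapping rule
    is an illustrative assumption."""
    def snap(v):
        return max(multiple, round(v * factor / multiple) * multiple)
    return snap(width), snap(height)

print(scaled_size(1024, 1024))  # (1536, 1536)
```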

    This makes the workflow suitable not only for normal image generation but also for high-quality render finishing. It suits creators who want a result that feels more complete than a fast Turbo-only output, produced faster and more flexibly than running Base for every stage. It is especially useful for testing how Base and Turbo behave when chained together in one controlled render pipeline.

    The workflow also includes preview and save nodes so users can inspect both intermediate and final outputs. This is important for segmented rendering because each stage may produce a different visual result. Users can compare the Base-stage result, the Turbo-refined result, and the upscaled or final refined result to decide which one is best for publishing.

    This workflow is useful for creators who want to explore model cooperation rather than simple model switching. Base and Turbo do not need to be treated as separate isolated workflows. In this graph, they are used as different rendering engines inside one pipeline. Base focuses on early structure. Turbo focuses on later refinement. The split-sigma design makes the handoff more intentional and more controllable.

    Main features:

    - Z-Image Base + Z-Image Turbo segmented rendering workflow

    - Uses z_image_bf16.safetensors for the Base stage

    - Uses z_image_turbo_bf16.safetensors for the Turbo stage

    - Qwen 3 4B text encoder support

    - AE VAE support

    - SplitSigmas for high-noise and low-noise stage separation

    - Base model used for global structure and composition

    - Turbo model used for continuation and final detail refinement

    - SamplerCustomAdvanced multi-stage sampling

    - CFGGuider control for each model stage

    - RandomNoise seed control

    - FlowMatchEulerDiscreteScheduler support

    - SamplerEulerAncestral route

    - DetailDaemonSamplerNode for detail control

    - Optional 1.5x upscale and second refinement pass

    - Preview and SaveImage output for stage comparison

    Recommended use cases:

    Z-Image Base and Turbo comparison, segmented image rendering, high-quality text-to-image generation, complex composition testing, fantasy illustration, surreal concept art, cinematic scene generation, character image creation, fashion visual generation, product-style render testing, high-detail artwork finishing, prompt research, Base-to-Turbo pipeline testing, and Civitai workflow showcase examples.

    Suggested workflow:

    Start by writing a clear and detailed prompt. Because the Base stage is responsible for the main image structure, the prompt should describe the subject, scene, composition, lighting, color palette, atmosphere, and style clearly. Avoid writing only short vague prompts if you want to test the strength of the segmented pipeline.

    Use the Base stage to establish the image foundation. This is where the model decides the main structure, subject position, background layout, and visual direction. If the Base result already has weak composition, the Turbo stage may improve details but will not always fix the overall image logic, so tune the prompt and seed until the Base stage gives a good foundation.

    Use the SplitSigmas setting to control where the handoff happens. If the split happens too early, Turbo may take over before the image structure is stable. If the split happens too late, Base does most of the work and Turbo only has limited influence. The included split value gives a practical starting point, but users can adjust it depending on whether they want stronger Base structure or stronger Turbo finishing.
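    One way to reason about the handoff numerically: the sigma remaining at the split point, relative to the starting sigma, is a rough proxy for how much Turbo can still change the image. The helper below and its heuristic are illustrative assumptions, not part of the workflow itself.

```python
import numpy as np

def handoff_report(sigmas, split_step):
    """Summarize a split point. `sigmas` is the full descending schedule
    ending in 0; the ratio below is a rough heuristic, not an exact measure."""
    n = len(sigmas) - 1  # number of sampling steps
    return {
        "base_steps": split_step,
        "turbo_steps": n - split_step,
        # Noise remaining at handoff, relative to the start: roughly how much
        # freedom Turbo still has to change the image.
        "turbo_influence": float(sigmas[split_step] / sigmas[0]),
    }

report = handoff_report(np.linspace(10, 0, 21), 12)
```

    Moving the split earlier raises `turbo_influence` (Turbo reshapes more of the image); moving it later lowers it (Turbo becomes a light polish pass).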

    Use the Turbo stage for final polish. This stage is useful for sharpening details, improving local texture, strengthening edges, and making the image feel more complete. If the Turbo output changes the image too much, lower the influence or adjust the low-sigma section. If the Turbo output is too weak, increase its detail settings or allow it more room in the later sampling stage.

    Use the DetailDaemon settings carefully. Higher detail values can create a richer image, but too much detail may introduce noise, harsh texture, or over-rendered surfaces. For clean fashion photography or product-style images, keep detail moderate. For fantasy, surreal, or illustrative images, stronger detail can create a more dramatic result.

    Use the optional upscale and refinement stage when the generated image is good but needs more resolution. The workflow includes a 1.5x image scaling section, followed by another latent refinement stage. This is useful for generating a stronger final image for Civitai examples, social media covers, thumbnails, posters, or RunningHub showcases.

    If you are testing speed, run only the Base + Turbo segmented generation first. If the result is good, enable or continue with the upscale/refinement section. This saves time during prompt testing and avoids wasting resources on high-resolution refinement before the core image is stable.

    When evaluating the results, do not only look at sharpness. Check whether the image follows the prompt, whether the composition is stable, whether the subject is coherent, whether textures are improved, and whether the Turbo stage preserved the Base structure. The best result should keep the Base model’s structural strength while gaining Turbo’s cleaner final detail.

    For character images, check facial stability, clothing detail, hands, body structure, and background consistency. For scene images, check depth, atmosphere, perspective, and object placement. For product or poster-style images, check whether the final output looks clean, controlled, and ready for publishing.

    This workflow is designed for creators who want a more advanced Z-Image rendering strategy inside ComfyUI. It is not just a simple Base workflow or a simple Turbo workflow. It uses staged sampling, sigma splitting, model handoff, detail control, and optional final refinement to create a more flexible image-generation pipeline. It is especially useful for users who want to study how Z-Image Base and Z-Image Turbo can cooperate in one workflow rather than being used separately.

    🎥 YouTube Video Tutorial

    Want to know what this workflow actually does and how to start fast?

    This video explains what the tool is, how to launch the workflow instantly, and my core design logic. No local setup or complicated environment is required.

    Everything starts directly on RunningHub, so you can experience it in action first.

    👉 YouTube Tutorial: https://youtu.be/Y6L5qkA8ZYs

    Before you begin, I recommend watching the video in full; having the complete context helps you understand the tool faster and avoid common pitfalls.

    ⚙️ RunningHub Workflow

    Try the workflow online right now — no installation required.

    👉 Workflow: https://www.runninghub.ai/post/2017271994244403202/?inviteCode=rh-v1111

    If the results meet your expectations, you can later deploy it locally for customization.

    🎁 Fan Benefits: Register to get 1000 points + daily login 100 points — enjoy 4090 performance and 48 GB super power!

    📺 Bilibili Updates (Mainland China & Asia-Pacific)

    If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.

    📺 Bilibili Video: https://www.bilibili.com/video/BV1DQ61B1Eix/

    ☕ Support Me on Ko-fi

    If you find my content helpful and want to support future creations, you can buy me a coffee ☕.

    Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.

    👉 Ko-fi: https://ko-fi.com/aiksk

    💼 Business Contact

    For collaboration or inquiries, please contact aiksk95 on WeChat.


    📦 Quark Netdisk Resources

    I will keep updating model resources on Quark Netdisk:

    👉 https://pan.quark.cn/s/20c6f6f8d87b

    These resources are mainly intended for local users, for creation and learning.

    Workflows: ZImageTurbo

    Details

    Downloads: 71
    Platform: CivitAI
    Platform Status: Available
    Created: 5/9/2026
    Updated: 5/14/2026
    Deleted: -

    Files

    zImageBaseTurbo_v10.zip

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)