This workflow is designed for Z-Image Turbo 2602 ControlNet generation with a high-noise / low-noise two-stage rendering structure. Compared with a normal one-pass ControlNet workflow, this version gives creators more control over how the image is formed, refined, and polished. The main goal is to use a reference image as a structural guide, let Z-Image Turbo build the main composition in the early high-noise stage, then use a second low-noise pass to improve detail, texture, and final visual stability.
The workflow uses Z-Image Turbo as the main generation backbone, with qwen_3_4b as the text encoder and ae.safetensors as the VAE. It also uses the Z-Image Fun Distill 4-step 2602 LoRA route, making the pipeline faster and more suitable for online generation. The ControlNet section is driven by a reference image, which is processed and converted into a control signal. This gives the model a stronger structural anchor for pose, layout, silhouette, and composition.
The most important part of this workflow is the two-stage sampling logic. A BasicScheduler creates the sigma schedule, then SplitSigmas divides the process into high-sigma and low-sigma sections. The high-noise stage is used first. This is where the image is still forming from noise, so it has the strongest influence on global structure, subject placement, rough lighting, and composition. The ControlNet guidance is especially important here because it helps prevent the output from drifting away from the reference image.
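The split can be illustrated with a short sketch. The Karras-style schedule below is only one common choice of noise schedule, not necessarily what Z-Image Turbo's BasicScheduler actually produces; `split_sigmas` mimics the behavior of ComfyUI's SplitSigmas node, where the boundary sigma appears in both halves so the low-noise sampler resumes exactly where the high-noise one stopped.

```python
import torch

def karras_sigmas(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    # Karras-style noise schedule (illustrative values only; the real
    # sigmas come from the workflow's BasicScheduler node).
    ramp = torch.linspace(0, 1, n)
    min_inv, max_inv = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    sigmas = (max_inv + ramp * (min_inv - max_inv)) ** rho
    return torch.cat([sigmas, torch.zeros(1)])  # trailing 0 ends sampling

def split_sigmas(sigmas, step):
    # Like SplitSigmas: everything before `step` is the high-noise stage;
    # the boundary sigma lands in both halves so stage 2 picks up exactly
    # where stage 1 stopped.
    return sigmas[: step + 1], sigmas[step:]

sigmas = karras_sigmas(8)           # 8 steps -> 9 sigma values
high, low = split_sigmas(sigmas, 4)
```

With an 8-step schedule split at step 4, the high-noise sampler covers the large sigmas where composition is decided, and the low-noise sampler covers the small sigmas where only detail changes.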
After the first stage, the workflow continues into a low-noise refinement stage instead of stopping immediately. The second sampler receives the already-formed latent and focuses more on polishing details rather than rebuilding the whole image. This helps improve surface texture, edge quality, local detail density, and final image clarity. The workflow also uses DetailDaemonSamplerNode in the sampling chain, allowing the creator to push more detail into the render while still controlling where and how that detail is applied.
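To make the handoff concrete, here is a minimal, hypothetical Euler-style denoising loop. It is not Z-Image Turbo's actual sampler, and the `control` argument is only a stand-in for ControlNet conditioning: the point is that the second call receives the latent produced by the first call unchanged, so it can only refine what is already there.

```python
import torch

def denoise_stage(latent, sigmas, model, control=None):
    # Simplified Euler loop: each step predicts the denoised latent and
    # moves along the noise direction toward the next (smaller) sigma.
    x = latent
    for i in range(len(sigmas) - 1):
        denoised = model(x, sigmas[i], control)
        d = (x - denoised) / sigmas[i]           # current noise direction
        x = x + d * (sigmas[i + 1] - sigmas[i])  # step to the next sigma
    return x

# Toy model that just shrinks toward zero; enough to show the handoff.
toy = lambda x, sigma, control: x * 0.5

noise = torch.randn(1, 4, 8, 8)
high = torch.tensor([14.6, 7.0, 3.0, 1.2])
low = torch.tensor([1.2, 0.4, 0.1, 0.0])

# Stage 1 builds structure under ControlNet guidance...
formed = denoise_stage(noise * high[0], high, toy, control="pose_map")
# ...stage 2 receives that latent as-is and only walks the low-sigma tail.
final = denoise_stage(formed, low, toy)
```

Because the low-sigma steps only make small corrections, the second stage polishes texture and edges without being able to rebuild the composition the first stage locked in.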
This structure is useful because Z-Image Turbo is fast, but fast generation can sometimes feel too direct or under-refined. By splitting the process into two stages, the workflow keeps Turbo speed while giving the image more room to stabilize. The first stage creates the controlled foundation, and the second stage strengthens the final result. This makes the workflow suitable for character pose control, fashion photography concepts, anime-style portraits, fantasy characters, product-style visual tests, Civitai previews, RunningHub demos, and fast controlled image generation.
The workflow also includes a final enhancement route. After the main result is decoded, the image can be scaled up with a 1.5x Lanczos pass, encoded back into latent space, and refined again with a low-denoise pass. This makes the workflow more complete than a basic ControlNet test, because it can produce a cleaner final output suitable for publishing.
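The scaling step itself is simple; here is a PIL equivalent of the 1.5x Lanczos pass (inside ComfyUI this is done by an image-scale node, not by this code). The re-encode and low-denoise refinement would then run on the upscaled result.

```python
from PIL import Image

def upscale_lanczos(img, factor=1.5):
    # 1.5x Lanczos resample, matching the workflow's final enhancement
    # route before the image is encoded back to latent space.
    w, h = img.size
    return img.resize((round(w * factor), round(h * factor)), Image.LANCZOS)

img = Image.new("RGB", (1024, 1024))
up = upscale_lanczos(img)
# `up` would next be VAE-encoded and re-sampled at a low denoise strength
# so new detail is added without changing the composition.
```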
In short, this is a Z-Image Turbo 2602 ControlNet workflow built around controlled structure, high-noise composition building, low-noise detail refinement, and optional final upscale polishing. If you want to see how the ControlNet image, SplitSigmas, DetailDaemon, and two-stage sampling route are connected, watch the full tutorial from the YouTube link above.
⚙️ Try the Workflow Online
👉 Workflow: https://www.runninghub.ai/post/2029507218332196866/?inviteCode=rh-v1111
Open the link above to run the workflow directly online and view the generation results in real time.
If the results meet your expectations, you can also deploy it locally for further customization.
🎁 Fan Benefits: Register now to get 1000 points, plus 100 more for each daily login, and enjoy 4090-level performance with 48 GB of GPU memory!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV141Pkz4E95/
I will continue updating model resources on Quark Drive:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly prepared for local users, making creation and learning more convenient.
