CivArchive
    Z-Image Finetuned Models in ComfyUI | Multi-Style Image Workflow - v1.0

    Create stunning, detailed images across multiple styles and moods easily.

    Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.

    Open preloaded workflow on RunComfy (browser)

    Why RunComfy first
    - Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
    - Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
    - Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.

    When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.

    How to use (local ComfyUI)
    1. Load inputs (images/video/audio) in the marked loader nodes.
    2. Set prompts, resolution, and seeds; start with a short test run.
    3. Export from the Save / Write nodes shown in the graph.
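    Once the graph is exported in API format (ComfyUI's "Save (API Format)" option), runs can also be queued programmatically for batch scripting. A minimal sketch against ComfyUI's default local endpoint (`http://127.0.0.1:8188/prompt`); the helper names here are our own, not part of the workflow:

    ```python
    import json
    import urllib.request

    # Build the POST body ComfyUI's /prompt endpoint expects: an API-format
    # workflow dict wrapped under the "prompt" key. (Helper name is ours.)
    def build_prompt_payload(workflow: dict) -> bytes:
        return json.dumps({"prompt": workflow}).encode("utf-8")

    # Queue the workflow on a local ComfyUI server (default port 8188).
    def queue_workflow(workflow: dict, server: str = "http://127.0.0.1:8188") -> bytes:
        req = urllib.request.Request(
            f"{server}/prompt",
            data=build_prompt_payload(workflow),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read()
    ```

    Queueing the same exported JSON repeatedly with different seeds is the simplest way to script the short test runs suggested in step 2.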

    Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.


    Overview

    This workflow bundles a collection of Z-Image checkpoints finetuned for different visual themes and artistic styles: realistic portraits, cinematic shots, and anime-inspired imagery, each with fine control over detail and tone. Parallel lanes make it straightforward to test and compare the finetuned checkpoints side by side. Optimized UNet loaders and CFG normalization keep output consistent across models, and optional LoRA adapters allow precise style blending. Suited to artists and AI explorers who want reliable, consistently detailed results across multiple finetuned checkpoints.

    Important nodes:

    Key nodes in the ComfyUI Z-Image Finetuned Models workflow

    • ModelSamplingAuraFlow (#76, #84)

    • Purpose: patches the model to use an AuraFlow‑compatible sampling path that is stable at very low step counts. The shift control subtly adjusts sampling trajectories; treat it as a finesse dial that interacts with your sampler choice and step budget. For best comparability across lanes, keep the same sampler and adjust only one variable (e.g., shift or LoRA weight) per test. Reference: AuraFlow pipeline background and scheduling notes.
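    In API-format JSON this node is a small entry whose `shift` input is the dial described above. A hedged sketch (node id #76 comes from the graph; the shift value and the upstream loader id "75" are illustrative assumptions):

    ```python
    # API-format entry for ModelSamplingAuraFlow (#76). The upstream node id
    # "75" and the shift value are illustrative assumptions, not from the graph.
    model_sampling_entry = {
        "76": {
            "class_type": "ModelSamplingAuraFlow",
            "inputs": {
                "shift": 3.0,        # the "finesse dial"; change one variable per test
                "model": ["75", 0],  # output 0 of an assumed upstream UNet loader
            },
        }
    }
    ```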

    • CFGNorm (#64, #65, #66, #67)

    • Purpose: normalizes classifier‑free guidance so contrast and detail do not swing wildly when you change models, steps, or schedulers. Increase its strength if highlights wash out or textures feel inconsistent between lanes; reduce it if images start to look overly compressed. Keep it similar across branches when you want a clean A/B of Z-Image Finetuned Models.
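    The idea can be sketched numerically. This is an illustrative norm-matching scheme, not ComfyUI's exact CFGNorm implementation: after standard classifier-free guidance, the guided prediction is rescaled toward the magnitude of the conditional prediction, with `strength` controlling how strongly.

    ```python
    import numpy as np

    def cfg_norm(cond, uncond, guidance_scale, strength=1.0):
        # Standard classifier-free guidance.
        guided = uncond + guidance_scale * (cond - uncond)
        # Pull the guided prediction's norm toward the conditional prediction's
        # norm; strength=0 is plain CFG, strength=1 matches the norms exactly.
        norm_cond = np.linalg.norm(cond)
        norm_guided = np.linalg.norm(guided) + 1e-8
        scale = 1.0 + strength * (norm_cond / norm_guided - 1.0)
        return guided * scale
    ```

    At strength 1 the output keeps the conditional prediction's magnitude regardless of guidance scale, which is why contrast stops swinging when you change models, steps, or schedulers.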

    • LoraLoaderModelOnly (#106)

    • Purpose: injects a LoRA adapter directly into the loaded UNet without altering the base checkpoint. The strength parameter controls stylistic impact; lower values preserve base realism while higher values impose the LoRA’s look. If a LoRA overpowers faces or typography, reduce its weight first, then fine‑tune prompt phrasing.
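    Conceptually, the adapter adds a low-rank update to each targeted weight matrix, and `strength` scales that update. A toy sketch (the names and the omission of the usual alpha/rank scaling are simplifying assumptions):

    ```python
    import numpy as np

    def apply_lora(W, A, B, strength):
        # Adapted weight = base + strength * low-rank update (B @ A).
        # strength=0 reproduces the base checkpoint exactly; higher values
        # impose more of the LoRA's learned style.
        return W + strength * (B @ A)

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 4))   # base UNet weight (toy size)
    A = rng.standard_normal((1, 4))   # rank-1 down-projection
    B = rng.standard_normal((4, 1))   # rank-1 up-projection
    ```

    Because the update is additive, dialing strength down smoothly recovers the base model's look, which is why lowering the weight is the first fix when a LoRA overpowers faces or typography.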

    • KSampler (#78, #85, #89, #93)

    • Purpose: runs the actual diffusion loop. Choose a sampler and scheduler that pair well with few‑step distillations; many users prefer Euler‑style samplers with uniform or multistep schedulers for Turbo‑class models. Keep seeds fixed when comparing lanes, and change only one variable at a time to understand how each finetune behaves.
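    For a clean A/B across lanes, the sampler entries should differ only in their model link. A hedged API-format sketch for one lane (node id #78 comes from the graph; the seed, step count, and linked node ids are illustrative assumptions):

    ```python
    ksampler_entry = {
        "78": {
            "class_type": "KSampler",
            "inputs": {
                "seed": 1234,              # fix this across lanes for clean A/Bs
                "steps": 8,                # few-step budget for Turbo-class finetunes
                "cfg": 1.0,                # low CFG is typical for distilled models
                "sampler_name": "euler",   # Euler-style samplers pair well here
                "scheduler": "simple",     # uniform/multistep schedulers also work
                "denoise": 1.0,
                "model": ["76", 0],        # from ModelSamplingAuraFlow (#76)
                "positive": ["6", 0],      # assumed positive-conditioning node
                "negative": ["7", 0],      # assumed negative-conditioning node
                "latent_image": ["5", 0],  # assumed empty-latent node
            },
        }
    }
    ```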

    Notes

    Z-Image Finetuned Models in ComfyUI | Multi-Style Image Workflow — see RunComfy page for the latest node requirements.

    Description

    Initial release — Z-Image-Finetuned-Models.

    Workflows
    Other

    Details

    Downloads
    25
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/1/2026
    Updated
    4/3/2026
    Deleted
    -

    Files

    zImageFinetunedModelsIn_v10.zip

    Mirrors

    Hugging Face (1 mirror)