Blend and style multiple images with next-level control.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
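If you plan to batch-script local runs, ComfyUI exposes an HTTP endpoint for queuing a graph that was exported in API format (Save (API Format) in the UI). A minimal sketch, assuming a default local server at 127.0.0.1:8188 and a file named workflow_api.json (both the host and the filename are assumptions for illustration):

```python
import json
import urllib.request

def build_prompt_payload(workflow: dict, client_id: str = "local-script") -> bytes:
    """Wrap an API-format workflow graph in the JSON body ComfyUI's /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """POST the graph to a running local ComfyUI server and return its JSON reply."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Typical usage: load your exported graph with `json.load(open("workflow_api.json"))`, edit the loader-node paths and seed fields in the dict, then call `queue_workflow(graph)` once per variation.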
Overview
With this workflow, you can merge, transform, and enhance multiple reference images to achieve cohesive, high-quality compositions. It uses the Qwen image model to deliver advanced multi-image blending, background control, and prompt-driven style adjustments. Ideal for designers and digital artists seeking refined creative control, it streamlines complex edits while preserving natural detail, whether you are crafting dynamic visuals or experimenting with multiple sources.
Key nodes in the ComfyUI Nunchaku Qwen Image workflow
NunchakuQwenImageDiTLoader (#115)
Loads the Qwen image weights and variant used by the branch. Select the edit model for photo‑guided edits or the base model for text‑to‑image. When VRAM allows, higher‑precision or higher‑resolution variants can yield more detail; lighter variants prioritize speed.
TextEncodeQwenImageEditPlus (#111)
Drives multi‑image edits by parsing your instruction and binding it to up to three references. Keep directives explicit about which image contributes which attribute. Use concise phrasing and avoid conflicting goals to keep edits focused.
TextEncodeQwenImageEditPlus (#110)
Acts as the paired negative or constraint encoder for the multi‑image branch. Use it to exclude objects, styles, or artifacts you do not want to appear. This often helps preserve composition while removing UI overlays or unwanted props.
TextEncodeQwenImageEdit (#121)
Positive instruction for the single‑image edit branch. Describe the desired outcome, surface qualities, and composition in clear terms. Aim for one to three sentences that specify the scene and changes.
TextEncodeQwenImageEdit (#122)
Negative or constraint prompt for the single‑image edit branch. List items or traits to avoid, or describe elements to remove from the source image. This is useful for cleaning stray text, logos, or interface elements.
ImageScaleToTotalPixels (#93)
Prevents oversized inputs from destabilizing results by scaling to a target total pixel count. Use it to harmonize disparate source resolutions before compositing. If you notice inconsistent sharpness between sources, bring them closer in effective size here.
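The math behind a total-pixel target is simple: scale both sides by the square root of (target pixels / current pixels), which preserves aspect ratio. A sketch of that calculation (the one-megapixel default and snapping to multiples of 8 are assumptions; the node's exact rounding may differ):

```python
import math

def scale_to_total_pixels(width: int, height: int, megapixels: float = 1.0,
                          multiple: int = 8) -> tuple[int, int]:
    """Scale (width, height) so width*height is close to megapixels * 1e6.

    Aspect ratio is preserved; dimensions are snapped to a multiple of 8,
    which most latent-diffusion pipelines require.
    """
    target = megapixels * 1_000_000
    factor = math.sqrt(target / (width * height))

    def snap(v: float) -> int:
        return max(multiple, round(v * factor / multiple) * multiple)

    return snap(width), snap(height)
```

For example, a 4000x3000 source lands near 1152x864, so two mismatched inputs scaled this way end up with comparable effective resolution before compositing.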
ModelSamplingAuraFlow (#66)
Applies a DiT/flow‑matching sampling schedule tuned for the Qwen image models. If outputs look dark, mushy, or lack structure, increase the schedule’s shift to stabilize global tone; if they look flat, reduce the shift to chase extra detail.
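The "shift" here is the time/sigma remapping common to flow-matching schedules: raising it biases sampling toward the high-noise end, which favors stable global structure and tone over fine detail. A hedged sketch of that standard mapping (whether ModelSamplingAuraFlow uses exactly this form internally is an assumption):

```python
def shift_sigma(sigma: float, shift: float) -> float:
    """Remap a normalized sigma in [0, 1] by the flow-matching time shift.

    shift == 1 leaves the schedule unchanged; shift > 1 pushes intermediate
    steps toward higher noise (global composition), shift < 1 toward lower
    noise (extra detail). Endpoints 0 and 1 are fixed.
    """
    return shift * sigma / (1 + (shift - 1) * sigma)
```

For instance, a mid-schedule step at sigma 0.5 moves to 0.75 with shift 3, so more of the run is spent resolving overall tone and layout.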
…
Notes
Nunchaku Qwen Image in ComfyUI | Multi-Image Merge & Style Edit — see the RunComfy page for the latest node requirements.
Description
Initial release — Nunchaku-Qwen-Image.