Turn 2–3 images into one seamless, edited masterpiece instantly.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — when you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
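For scripted or batch local runs, ComfyUI also accepts workflows over HTTP. A minimal sketch, assuming the default local server address (127.0.0.1:8188) and a workflow exported via ComfyUI's "Save (API Format)" option — adjust both to your setup:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumption)

def build_payload(workflow: dict) -> bytes:
    # ComfyUI's /prompt endpoint expects {"prompt": <api-format workflow>}
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_workflow(path: str) -> dict:
    """Queue an API-format workflow JSON and return the server's response."""
    with open(path) as f:
        workflow = json.load(f)
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the queued prompt_id
```

This is handy for sweeping seeds or prompts without touching the UI; outputs still land in the Save / Write nodes' configured directories.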
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
Overview
This workflow takes 2–3 reference images and blends them into a single output with prompt-driven control. You can replace objects, adjust artistic styles, or merge multiple visuals into one scene, choosing how the sources are composited. It is especially useful for creators working on concept designs, visual enhancements, or reference-based modifications, and it streamlines editing without sacrificing detail or consistency.
Key nodes in the ComfyUI Qwen Image Edit 2509 workflow
TextEncodeQwenImageEditPlus (#104)
This node creates the positive editing condition by combining your prompt with up to three reference images via the Qwen encoder. Use it to specify what should appear, which style to adopt, and how strongly references should influence the result. Start with a clear, single‑sentence goal, then add style descriptors or camera cues as needed. Assets for the encoder are packaged in Comfy-Org/Qwen-Image_ComfyUI.
TextEncodeQwenImageEditPlus (#106)
This node forms the negative condition to prevent unwanted traits. Add short phrases that block artifacts, over‑smoothing, or mismatched styles. Keep it minimal to avoid fighting the positive intent. It uses the same Qwen encoder and VAE stack as the positive path.
UnetLoaderGGUF (#102)
Loads the Qwen Image Edit 2509 checkpoint in GGUF format for VRAM‑friendly inference. More aggressive quantization (lower bit‑width) saves memory but may slightly soften fine detail; if you have VRAM headroom, try a less aggressive quant to maximize fidelity. Implementation reference: city96/ComfyUI-GGUF.
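To pick a quant level, a rough back-of-envelope helps. A sketch of the weight-footprint arithmetic — the ~20B parameter count and the effective bits-per-weight figures are illustrative assumptions, so check the model card and the GGUF file sizes you actually download:

```python
def approx_weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone (disk/VRAM), in GB:
    params * bits / 8 bits-per-byte. Ignores activations and overhead."""
    return params_billions * bits_per_weight / 8

# Illustrative quant levels with assumed effective bits per weight
for name, bits in [("Q4_K_M", 4.5), ("Q5_K_M", 5.5), ("Q8_0", 8.5)]:
    print(f"{name}: ~{approx_weight_gb(20, bits):.1f} GB")
```

Activations, the text encoder, and the VAE add on top of this, so leave a few GB of headroom beyond the weight estimate.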
LoraLoaderModelOnly (#89)
Applies the Qwen‑Image‑Lightning LoRA on top of the base model to accelerate convergence and strengthen edits. Increase strength_model to emphasize this LoRA’s effect or lower it for subtle guidance. Model page: lightx2v/Qwen-Image-Lightning. Core node reference: comfyanonymous/ComfyUI.
ImageScaleToTotalPixels (#93, #108)
Resizes each input to a consistent total pixel count using high‑quality resampling. Raising the megapixel target yields sharper results at the cost of time and memory; lowering it speeds iteration. Keep both references at similar scales to help Qwen Image Edit 2509 blend elements cleanly. Core node reference: comfyanonymous/ComfyUI.
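The resize math behind this node is easy to reason about when budgeting VRAM. A minimal sketch, assuming (as ComfyUI does) that 1 megapixel means 1024×1024 pixels and that aspect ratio is preserved:

```python
import math

def scale_to_total_pixels(w: int, h: int, megapixels: float) -> tuple[int, int]:
    """Scale (w, h) so the total pixel count approximates the megapixel
    budget while preserving aspect ratio. Assumes 1 MP = 1024*1024 px."""
    target = megapixels * 1024 * 1024
    scale = math.sqrt(target / (w * h))
    return round(w * scale), round(h * scale)

# e.g. a 1920x1080 input at a 1.0 MP budget
print(scale_to_total_pixels(1920, 1080, 1.0))  # (1365, 768)
```

Doubling the megapixel target scales each side by only √2, so moving from 1.0 MP to 2.0 MP costs roughly 2× the pixels but keeps dimensions manageable.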
…
Notes
Qwen Image Edit 2509 in ComfyUI | Multi-Image Merge & Edit — see RunComfy page for the latest node requirements.
Description
Initial release — Qwen-Image-Edit-2509.