This ComfyUI workflow is designed for Qwen Image Edit 2511 local inpainting, object transfer, reference-based editing, and redraw-then-paste-back image production. Its main goal is to let users modify only a selected area of the main image, transfer visual elements from a reference image, generate a clean edited result, and then paste the edited region back into the original image layout. This makes it far more practical than full-frame editing, especially when the original image is already good and only one object, region, clothing area, product element, background section, or visual style needs to change.
The workflow is built around Qwen Image Edit 2511, using qwen_image_edit_2511_fp8mixed.safetensors as the main editing model, Qwen-Edit-2511-Lightning-4steps-V1.0-bf16.safetensors as the acceleration LoRA, qwen_2.5_vl_7b_fp8_scaled.safetensors as the vision-language text encoder, and qwen_image_vae.safetensors as the VAE. This combination gives the workflow strong image understanding, instruction-following ability, reference-image support, and fast local editing performance.
The core concept is “redraw and paste back.” In many image editing workflows, a model edits the whole image directly. That can be convenient, but it also creates a common problem: the face changes, the background shifts, the original composition drifts, the camera angle changes, or unrelated parts of the image are modified. This workflow is designed to reduce that problem. It crops or isolates the target region, sends that region into Qwen 2511 for local editing, then restores the edited result back into the original image using crop-box restoration logic.
This makes the workflow suitable for "万物迁移" (all-things migration) style editing. A reference image can provide an object, style, texture, material, design, color, or visual identity, while the main image provides the target scene and composition. The workflow can then transfer the reference element into the masked region of the main image. For example, it can transfer a product, clothing style, accessory, logo, material texture, decorative pattern, prop, or object identity into another image while keeping the original scene more stable.
The first stage of the workflow prepares the input images and mask. The main image is loaded, resized, and prepared for editing. The mask defines the region that should be changed. The workflow includes mask processing tools such as MaskGrow, mask preview, and mask-to-image conversion. These tools help users check whether the selected area is correct before sending it into the model. A good mask is critical for local editing: if the mask is too small, the new element may not blend naturally; if it is too large, the edit may affect too much of the image.
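The effect of a mask-grow step can be sketched with a plain binary dilation. This is a minimal stand-in for what a MaskGrow node does conceptually, not its actual implementation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def grow_mask(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Expand a binary mask outward by roughly `pixels` in every direction."""
    structure = np.ones((3, 3), dtype=bool)  # 8-connected neighborhood
    return binary_dilation(mask.astype(bool), structure=structure, iterations=pixels)

mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 30:34] = True    # a tight 4x4 selection
grown = grow_mask(mask, 4)   # give the model extra room to blend edges
```

Growing the selection by a few pixels is usually enough for the model to blend edges naturally without letting the edit spill into unrelated areas.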
The workflow also includes background removal and object isolation logic through Image Rembg. This is useful when the reference object needs to be separated from its original background before being transferred. For example, if you want to migrate a product or subject from one image into another, background removal can help isolate the key object and reduce unwanted reference contamination. This makes the reference cleaner and easier for Qwen 2511 to interpret.
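Conceptually, background removal produces an alpha matte that isolates the subject. Applying such a matte looks like the sketch below; note that the Rembg node computes the matte with a segmentation model, while here a hypothetical matte is simply given:

```python
import numpy as np

def apply_matte(img: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Attach an alpha matte to an RGB image as an RGBA cutout,
    zeroing background RGB so the reference carries no background noise."""
    a = alpha.astype(np.float32)[..., None]
    fg = (img.astype(np.float32) * a).round().astype(np.uint8)
    alpha_u8 = (a[..., 0] * 255).round().astype(np.uint8)
    return np.dstack([fg, alpha_u8])

img = np.full((8, 8, 3), 120, dtype=np.uint8)   # toy "reference photo"
alpha = np.zeros((8, 8), dtype=np.float32)
alpha[2:6, 2:6] = 1.0                           # toy subject region
cutout = apply_matte(img, alpha)                # RGBA, background removed
```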
TTP_Expand_And_Mask is used to expand and prepare the object region. This is useful when the edited area needs extra surrounding space for blending or when a transparent object/reference needs to be placed into a larger canvas. It helps create a more controlled input for local editing and avoids harsh boundary problems.
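The expand-and-mask idea can be illustrated with a hypothetical helper (a simplification, not the TTP_Expand_And_Mask implementation): pad the image onto a larger canvas and record where the original pixels sit, so later stages know which region to keep and which is blending space.

```python
import numpy as np

def expand_with_mask(img: np.ndarray, pad: int):
    """Place the image on a larger edge-padded canvas and return a mask
    marking the original pixel region."""
    h, w = img.shape[:2]
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    region = np.zeros((h + 2 * pad, w + 2 * pad), dtype=bool)
    region[pad:pad + h, pad:pad + w] = True
    return padded, region

img = np.zeros((32, 48, 3), dtype=np.uint8)
padded, region = expand_with_mask(img, 16)
```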
The workflow uses QwenEditConfigPreparer to prepare reference configuration. This node manages how images are passed into Qwen Image Edit as reference images and vision-language images. It supports reference resizing, longest-edge control, crop behavior, and visual-language resizing. This is important because Qwen Image Edit 2511 needs to understand both the main image and the reference image. If the reference size or crop is poorly prepared, the model may misunderstand what should be transferred.
The QwenEditAdaptiveLongestEdge node helps adapt the reference or main image size according to the image dimensions. This makes the workflow more flexible across different input resolutions. Instead of forcing every image into the same static size, it can adapt the longest edge to a controlled value, which is useful for high-resolution edits and different aspect ratios.
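The longest-edge arithmetic is easy to reason about in isolation. A sketch follows (the real node may round or clamp differently); snapping both sides to a multiple of 8 keeps the result latent-friendly:

```python
def fit_longest_edge(w: int, h: int, target: int, multiple: int = 8):
    """Scale (w, h) so the longest edge becomes `target`, snapping both
    sides to `multiple` while approximately preserving the aspect ratio."""
    scale = target / max(w, h)
    def snap(v: int) -> int:
        return max(multiple, round(v * scale / multiple) * multiple)
    return snap(w), snap(h)

print(fit_longest_edge(1920, 1080, 1024))  # → (1024, 576)
```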
The central text-conditioning node is TextEncodeQwenImageEditPlusCustom_lrzjason. It receives the Qwen text encoder, VAE, prepared image configs, and the user instruction. The instruction describes how the image should be edited. In this workflow, the instruction logic is designed to analyze the input image, describe key visual features such as color, shape, size, texture, objects, and background, then apply the user’s modification instruction while keeping consistency with the original image where appropriate. This is very suitable for reference-based migration and local transformation tasks.
The workflow also uses ConditioningZeroOut for the negative conditioning: it zeroes out the conditioning tensors, which acts as an empty negative prompt and keeps the Qwen edit conditioning structure clean. The model route then passes through ModelSamplingAuraFlow and CFGNorm. CFGNorm stabilizes the guidance behavior, which is especially useful in image editing workflows where too much guidance can cause over-editing or identity drift.
The sampling stage uses Qwen Image Edit 2511 with Lightning 4-step acceleration. The workflow is designed for fast local editing rather than long slow sampling. The included KSampler route uses a short-step setup with Euler-style sampling and a low CFG-style setting. This makes the workflow suitable for quick iteration. Users can test masks, references, and instructions quickly before finalizing the best result.
After the Qwen edit result is generated, QwenEditOutputExtractor extracts the main image, mask, reference outputs, VAE images, and related edit outputs. QwenEditListExtractor and QwenEditAny2Image are used to extract reference-related images from the output list and convert them back into image format for preview or comparison. This makes the workflow transparent: users can inspect the main image, reference, mask, and generated result rather than only seeing the final output.
The paste-back stage is one of the most important parts of the workflow. CropWithPadInfo and RestoreCropBox allow the workflow to return the edited crop back to the original image position. This means the model can work on a focused local area, while the final output still preserves the original full image. This is the practical advantage of a redraw-and-paste-back pipeline: it gives the model enough room to edit locally, but avoids unnecessary changes to the whole picture.
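The crop-and-restore round trip can be shown with plain array slicing. This is a simplified analogue of what CropWithPadInfo / RestoreCropBox do; the real nodes also carry resize metadata between the two stages:

```python
import numpy as np

def crop_with_pad(img, box, pad):
    """Crop `box` = (x0, y0, x1, y1) plus `pad` pixels of context,
    clamped to the image; return the crop and its final coordinates."""
    h, w = img.shape[:2]
    x0, y0, x1, y1 = box
    x0, y0 = max(0, x0 - pad), max(0, y0 - pad)
    x1, y1 = min(w, x1 + pad), min(h, y1 + pad)
    return img[y0:y1, x0:x1].copy(), (x0, y0, x1, y1)

def restore_crop(original, edited, info):
    """Paste the edited crop back at the recorded coordinates."""
    x0, y0, x1, y1 = info
    out = original.copy()
    out[y0:y1, x0:x1] = edited
    return out

img = np.arange(100 * 100 * 3, dtype=np.uint8).reshape(100, 100, 3)
crop, info = crop_with_pad(img, (40, 40, 60, 60), pad=10)
restored = restore_crop(img, crop, info)   # identical when crop is unchanged
```

The padding gives the model surrounding context to blend against, while the recorded coordinates guarantee the edit lands back exactly where it was taken from.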
The workflow also includes ImageReel and ImageReelComposit nodes. These create a visual comparison layout showing the main image, reference image, main image mask, and final result. This is very useful for Civitai, RunningHub, Bilibili, and YouTube demonstrations because the viewer can immediately understand what the workflow is doing. Instead of only showing a final result, the workflow can show the full logic: source image, reference, mask, and output.
Image Comparer is also included for before-and-after checking. This helps users inspect whether the paste-back result is aligned, whether the edited region blends well, whether the object migration succeeded, and whether unrelated areas were preserved. This is important for practical editing because a visually impressive edit is not enough if it damages the original image.
This workflow is especially useful for commercial and creator workflows. It can be used for product replacement, clothing transfer, accessory transfer, hairstyle modification, object migration, logo placement, prop transfer, material replacement, local style conversion, character detail correction, and social media cover editing. It is also useful for AI image post-production, where a generated image is mostly good but one section needs to be repaired or redesigned.
Main features:
- Qwen Image Edit 2511 local inpainting workflow
- Redraw-and-paste-back editing structure
- Uses qwen_image_edit_2511_fp8mixed.safetensors
- Uses Qwen-Edit-2511 Lightning 4-step LoRA
- Qwen 2.5 VL 7B FP8 text encoder support
- Qwen image VAE support
- Reference-based object and style transfer
- Mask-based local editing
- Image Rembg background removal for object isolation
- MaskGrow for stronger local selection control
- TTP_Expand_And_Mask for expanded object preparation
- QwenEditConfigPreparer for reference and VL image configuration
- TextEncodeQwenImageEditPlusCustom for instruction-based editing
- QwenEditOutputExtractor for output management
- CropWithPadInfo and RestoreCropBox for paste-back restoration
- ImageReel comparison layout
- Image Comparer before/after inspection
- Suitable for "万物迁移" (all-things migration), object migration, and local image post-production
Recommended use cases:
Local inpainting, object transfer, product replacement, clothing transfer, accessory migration, logo transfer, material transfer, reference-based redesign, masked region editing, AI image repair, background object replacement, character detail correction, product mockup creation, fashion image editing, commercial visual testing, social media cover correction, Civitai workflow demonstration, RunningHub online workflow publishing, and before/after editing showcases.
Suggested workflow:
Start by preparing the main image. This is the image you want to edit. Choose a source image with a clear composition and a visible target region. The workflow works best when the original image is already acceptable and only a local part needs to be changed.
Next, prepare the reference image. The reference image should clearly show the object, texture, material, style, logo, or visual element that you want to transfer. If the reference contains too much background noise, the workflow can use background removal to isolate the subject. A clean reference usually gives better migration results.
Create or load the mask on the main image. The mask should cover the area where the new element should appear. For object transfer, the mask should be slightly larger than the object area so the model has enough space to blend edges naturally. For precise edits, keep the mask tight but not too narrow.
Use the mask preview and mask grow tools to check the selected area. If the mask edge is too hard, increase blur or grow the mask slightly. If the mask covers unrelated parts, reduce it. Mask quality strongly affects final quality.
Write a clear editing instruction. A good instruction should explain what should be transferred and what should be preserved. For example: “Transfer the object style from the reference image into the masked area of the main image, while preserving the original background, camera angle, lighting, and subject identity.” For product edits, include material, placement, color, and scale. For clothing edits, include garment type, fabric, fit, and style.
Keep preservation language in the prompt. Since this is a local editing workflow, it is important to tell the model what should not change. Mention that the original composition, face, pose, background, lighting direction, and non-masked regions should remain unchanged where appropriate.
Run the Qwen 2511 edit stage. The Lightning 4-step route is designed for fast iteration, so it is practical to test several prompts and masks quickly. If the result is too weak, make the instruction more direct or expand the mask. If the result changes too much, simplify the instruction, improve the reference image, or reduce the edited region.
After generation, inspect the output before paste-back. Check whether the transferred object or style matches the reference, whether the object fits the target region, and whether lighting and perspective are believable. If the local crop looks good, continue to the paste-back stage.
Use the RestoreCropBox stage to paste the edited crop back into the original image. Check alignment carefully. The final image should preserve the original full-frame structure while replacing only the target region. If the pasted area has a visible seam, adjust the mask, blur, crop padding, or reference preparation.
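When a hard paste leaves a visible seam, a feathered alpha blend along the mask edge usually fixes it. A minimal sketch, assuming the edited crop and the original region are already aligned and the same size:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def feathered_paste(original, edited, mask, sigma=3.0):
    """Blend `edited` over `original` through a Gaussian-softened mask,
    so the transition spans a few pixels instead of a hard edge."""
    alpha = gaussian_filter(mask.astype(np.float32), sigma)[..., None]
    alpha = np.clip(alpha, 0.0, 1.0)
    out = alpha * edited.astype(np.float32) + (1 - alpha) * original.astype(np.float32)
    return out.round().astype(np.uint8)

original = np.full((64, 64, 3), 10, dtype=np.uint8)
edited = np.full((64, 64, 3), 200, dtype=np.uint8)
mask = np.zeros((64, 64), dtype=np.float32)
mask[16:48, 16:48] = 1.0
result = feathered_paste(original, edited, mask)
```

Increasing `sigma` widens the transition band; keep it small enough that the feather does not bleed the edit into areas that must stay untouched.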
Use ImageReel and Image Comparer to evaluate the workflow result. The best demonstration should show the main image, reference image, mask, and final result together. This makes the workflow easier to understand for viewers on Civitai, RunningHub, YouTube, and Bilibili.
For commercial use, test several references and prompts before choosing the final output. Some objects transfer better than others depending on shape, lighting, texture, and reference clarity. For logos or text-heavy objects, check readability carefully. For clothing, check body shape and fabric continuity. For product images, check material realism, reflection, and perspective.
This workflow is designed for creators who need precise local editing rather than full-image regeneration. It is especially useful when the goal is not to create a completely new image, but to modify one important part while preserving everything else. With Qwen Image Edit 2511, Lightning LoRA acceleration, reference configuration, mask control, object isolation, crop editing, and paste-back restoration, this workflow provides a practical pipeline for local repainting and “all-things migration” style image editing.
🎥 YouTube Video Tutorial
Want to know what this workflow actually does and how to start fast?
This video explains what the tool is, how to launch the workflow instantly, and shares my core design logic — no local setup, no complicated environment.
Everything starts directly on RunningHub, so you can experience it in action first.
👉 YouTube Tutorial: https://youtu.be/nlkrfEaScM0
Before you begin, I recommend watching the video in full — getting the whole picture helps you understand the tool faster and avoid common pitfalls.
⚙️ RunningHub Workflow
Try the workflow online right now — no installation required.
👉 Workflow: https://www.runninghub.ai/post/2017885601479004162/?inviteCode=rh-v1111
If the results meet your expectations, you can later deploy it locally for customization.
🎁 Fan Benefits: Register to get 1000 points, plus 100 points per daily login — enjoy 4090-level performance and 48 GB of super power!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1SK68BwE33/
☕ Support Me on Ko-fi
If you find my content helpful and want to support future creations, you can buy me a coffee ☕.
Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.
👉 Ko-fi: https://ko-fi.com/aiksk
💼 Business Contact
For collaboration or inquiries, please contact aiksk95 on WeChat.
I keep model resources updated on Quark Netdisk (夸克网盘):
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly intended for local users, to support creation and learning.
