This workflow is built for FireRed-Image-Edit-1.0 single-image editing with stronger anti-drift control. Its main purpose is to let users modify a specific visual element in an image while keeping the original composition, object scale, lighting direction, texture logic, and overall image structure as stable as possible. It is especially useful when you want to edit one part of an image without the rest of the picture being regenerated from scratch.
The workflow is based on FireRed-Image-Edit-1.0_fp8_e4m3fn.safetensors as the main editing model, combined with qwen_2.5_vl_7b_fp8_scaled.safetensors as the Qwen image text encoder and qwen_image_vae.safetensors as the VAE. It also uses a Qwen-Image Lightning 8-step LoRA route, which makes the editing process faster and more practical for repeated testing. The workflow note shows that users can compare different sampling settings, but the included configuration uses a fast 8-step route for efficient image editing.
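The model files the workflow loads can be summarized as a simple map. This is just plain Python for reference, not ComfyUI node code; the filenames come from the workflow itself, while the dictionary keys are illustrative labels:

```python
# Model files used by the workflow (filenames taken from the workflow itself).
# The keys are descriptive labels, not ComfyUI loader-node names.
WORKFLOW_MODELS = {
    "diffusion_model": "FireRed-Image-Edit-1.0_fp8_e4m3fn.safetensors",
    "text_encoder": "qwen_2.5_vl_7b_fp8_scaled.safetensors",  # Qwen image text encoder
    "vae": "qwen_image_vae.safetensors",
    "lora": "Qwen-Image Lightning 8-step LoRA",  # enables the fast 8-step route
}
```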
The key strength of this workflow is anti-drift editing. Many image-editing workflows can follow a prompt, but they often change too much: the background shifts, the object size changes, the texture is replaced incorrectly, the lighting direction breaks, or the original composition drifts. This workflow reduces that problem by combining input-image conditioning, ReferenceLatent, CFGNorm, ModelSamplingAuraFlow, and controlled KSampler settings.
The user loads a main image, then writes a clear editing instruction. Example instructions in the workflow include changing a glass cup into a ceramic cup while keeping the same volume, or changing the clothing color while preserving the original fabric texture, folds, and highlights. This shows the intended use case clearly: not full redesign, but precise image modification.
The TextEncodeQwenImageEditPlus node is used to encode the editing instruction together with image inputs. ImageScaleToTotalPixels helps normalize the image scale before editing, while GetImageSize and EmptySD3LatentImage keep the latent canvas aligned with the source image dimensions. This helps preserve the original image layout and reduces unwanted resizing or framing changes.
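The scale-and-align idea behind this stage can be sketched in plain Python. This is an assumption of how total-pixel normalization with VAE-friendly rounding typically works, not ComfyUI's actual implementation; the 1-megapixel budget and multiple-of-8 rounding are illustrative values:

```python
import math

def fit_to_total_pixels(width: int, height: int,
                        total_pixels: int = 1024 * 1024,
                        multiple: int = 8) -> tuple[int, int]:
    """Scale (width, height) so the area approximates `total_pixels`,
    preserving aspect ratio and rounding each side to a multiple that
    divides cleanly into the VAE's latent grid."""
    scale = math.sqrt(total_pixels / (width * height))
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h
```

Keeping the latent canvas derived from the (normalized) source dimensions, rather than a fixed preset, is what prevents unwanted cropping or reframing of the edit.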
ReferenceLatent is one of the important anti-drift components. It reinforces the source-image structure during generation, helping the model understand that the edit should happen within the existing image context. CFGNorm stabilizes the model behavior, while the low-CFG 8-step KSampler route keeps the editing process direct and efficient.
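For reference, a fast Lightning-style route usually pairs few sampling steps with very low CFG. The values below are illustrative assumptions about what such a KSampler configuration looks like, not the exact settings stored in this workflow:

```python
# Illustrative KSampler settings for an 8-step Lightning LoRA route.
# Exact values in the workflow may differ; treat these as a starting point.
sampler_settings = {
    "steps": 8,           # the Lightning LoRA is distilled for few-step sampling
    "cfg": 1.0,           # low CFG: high guidance over-drives edits and causes drift
    "sampler_name": "euler",
    "scheduler": "simple",
    "denoise": 1.0,
}
```

The low CFG matters for anti-drift: with guidance mostly baked into the LoRA, a strong external CFG push tends to exaggerate the prompt and mutate non-target regions.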
The output stage uses VAEDecodeTiled and Image Comparer. VAEDecodeTiled helps decode the result in a memory-friendly way, and Image Comparer lets users inspect the before-and-after result directly. This is very useful for checking whether the edit succeeded without damaging non-target areas.
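The memory saving of tiled decoding comes from processing the latent in overlapping patches, so peak memory scales with the tile size rather than the full image. A rough sketch of the tiling idea (not VAEDecodeTiled's actual code; tile and overlap sizes are assumptions, and seam blending is omitted):

```python
def tile_origins(width: int, height: int, tile: int = 512, overlap: int = 64):
    """Return (x, y) origins of overlapping tiles that cover a canvas.

    Each tile is decoded independently; the overlap region is where a
    real implementation blends neighboring tiles to hide seams.
    """
    stride = tile - overlap
    xs = sorted({min(x, max(width - tile, 0)) for x in range(0, width, stride)})
    ys = sorted({min(y, max(height - tile, 0)) for y in range(0, height, stride)})
    return [(x, y) for y in ys for x in xs]
```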
This workflow is suitable for product replacement, material replacement, color editing, object transformation, clothing color changes, texture-preserving edits, commercial image correction, social media image cleanup, and Civitai workflow demonstrations. If you want to see how the anti-drift structure is built, how the prompt should be written, and how the before/after comparison works, watch the full tutorial from the YouTube link at the top.
⚙️ Try the Workflow Online
👉 Workflow: https://www.runninghub.ai/post/2024514356171509762?inviteCode=rh-v1111
Open the link above to run the workflow directly online and view the generation results in real time.
If the results meet your expectations, you can also deploy it locally for further customization.
🎁 Fan Benefits: Register now to get 1000 points, plus 100 daily login points — enjoy RTX 4090-level performance with 48 GB of compute power!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1R3ZBBrEik/
I will continue updating model resources on Quark Drive:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly prepared for local users, making creation and learning more convenient.
