This workflow is designed for FireRed-Image-Edit-1.0 single-image editing with a more divergent and creative transformation style. Compared with the anti-drift version, this workflow is more suitable when you want to keep the core subject recognizable while allowing the scene, clothing, environment, atmosphere, and visual concept to change more aggressively. It is useful for turning one source image into a new cinematic concept, fantasy scene, sci-fi poster, commercial visual, character redesign, or creative image-editing result.
The workflow uses FireRed-Image-Edit-1.0_fp8_e4m3fn.safetensors as the main image-editing model, with qwen_2.5_vl_7b_fp8_scaled.safetensors as the Qwen image text encoder and qwen_image_vae.safetensors as the VAE. It also applies the Qwen-Image-Lightning 8-step LoRA route, making generation faster and more practical for repeated testing. The sampling configuration uses an 8-step, low-CFG setup, which suits fast iteration when testing larger creative changes.
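As a rough sketch, the model and sampler settings described above might be collected as follows. The dictionary keys, the LoRA name, and the sampler/scheduler values are illustrative assumptions, not the workflow's literal JSON; check the actual workflow file for the real settings:

```python
# Illustrative configuration for the 8-step, low-CFG Lightning route.
# Keys, the LoRA name, and sampler/scheduler values are assumptions for
# illustration; the model filenames come from the workflow description.
sampler_config = {
    "model": "FireRed-Image-Edit-1.0_fp8_e4m3fn.safetensors",
    "text_encoder": "qwen_2.5_vl_7b_fp8_scaled.safetensors",
    "vae": "qwen_image_vae.safetensors",
    "lora": "Qwen-Image-Lightning-8steps",  # hypothetical name for the 8-step LoRA
    "steps": 8,    # Lightning LoRAs trade step count for speed
    "cfg": 1.0,    # low CFG; exact value is an assumption
    "sampler_name": "euler",   # assumption
    "scheduler": "simple",     # assumption
}
```

The point of the low CFG is that Lightning-distilled routes rely less on classifier-free guidance, so high CFG values tend to oversaturate rather than improve adherence.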
The core of the workflow is built around TextEncodeQwenImageEditPlus, multi-reference image input, FluxKontextMultiReferenceLatentMethod, ReferenceLatent-style conditioning, CFGNorm, ModelSamplingAuraFlow, VAEEncode, KSampler, and VAEDecodeTiled. These nodes work together to let the model understand the input image, read the editing instruction, preserve key image information where needed, and still allow strong visual transformation.
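The wiring between these nodes can be sketched as a plain data flow. Every function below is a hypothetical stub standing in for the ComfyUI node of the same name; the real nodes take different inputs and pass tensors, not dicts, so this only illustrates the connection order:

```python
# Hypothetical stubs illustrating node order only; not the ComfyUI API.

def text_encode_qwen_image_edit_plus(instruction, ref_images):
    # Encodes the edit instruction together with the reference images.
    return {"cond": instruction, "refs": list(ref_images)}

def vae_encode(image):
    # Turns the source image into a latent the sampler can reference.
    return {"latent": image}

def attach_reference_latent(cond, latent):
    # ReferenceLatent-style conditioning: anchors the edit to the source.
    return {**cond, "ref_latent": latent}

def ksampler(cond, latent, steps=8, cfg=1.0):
    # Denoises the latent under the (CFGNorm-stabilized) guidance.
    return {"sampled": True, "steps": steps, "cfg": cfg}

def vae_decode_tiled(latent):
    # Decodes the result tile by tile to limit peak VRAM use.
    return {"image": "decoded"}

# Wiring, mirroring the order described above:
cond = text_encode_qwen_image_edit_plus("astronaut EVA edit", ["source.png"])
latent = vae_encode("source.png")
cond = attach_reference_latent(cond, latent["latent"])
result = vae_decode_tiled(ksampler(cond, latent))
```

The key structural idea is that the same source image feeds the pipeline twice: once through the text encoder as a visual reference, and once through VAEEncode as the latent the sampler edits.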
The included prompt example shows the intended use clearly. It asks the workflow to transform a half-body portrait into an astronaut performing an EVA spacewalk outside a spacecraft. The instruction keeps the person’s identity, facial structure, gaze, expression, head direction, and pose stable, while changing the clothing into a realistic white EVA spacesuit, adding a transparent helmet visor, replacing the background with outer space, Earth’s curve, spacecraft surfaces, and solar panels, and rebuilding the lighting with strong sunlight plus blue Earth-reflected fill light.
This makes the workflow suitable for “controlled divergence”: it is not a simple color edit, and it is not a fully random regeneration. It sits between the two. The user can ask for major creative changes while still writing preservation rules for identity, posture, texture, lighting consistency, and edge blending. This is especially useful for AI portrait transformation, product concept editing, cosplay-style redesign, sci-fi conversion, fashion restyling, cinematic poster creation, and social media cover generation.
ImageScaleToTotalPixels and GetImageSize align the image scale and latent canvas with the source image. VAEEncode prepares the source image as a latent reference, while FluxKontextMultiReferenceLatentMethod controls how the reference images affect the edit. CFGNorm stabilizes guidance, and VAEDecodeTiled decodes the final result in tiles, reducing peak VRAM use on large outputs. Image Comparer is included for before-and-after inspection, making it easier to check whether the edit changed the intended areas without completely losing the original subject.
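ImageScaleToTotalPixels resizes the source so its total pixel count matches a megapixel budget while preserving aspect ratio. A minimal sketch of that math follows; the rounding behavior and the 1024×1024-pixels-per-megapixel convention are assumptions about the real node:

```python
import math

def scale_to_total_pixels(width, height, megapixels=1.0):
    """Return dimensions with width * height ~= megapixels * 1024 * 1024,
    preserving aspect ratio. Rounding to the nearest integer is an
    assumption; the actual ComfyUI node may round differently."""
    target = megapixels * 1024 * 1024
    scale = math.sqrt(target / (width * height))
    return round(width * scale), round(height * scale)
```

For example, a 2048×2048 source at a 1.0-megapixel budget comes out as 1024×1024, and a 2048×1536 source keeps its 4:3 aspect ratio while shrinking to roughly one megapixel.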
In short, this workflow is for creators who want stronger creative editing than a conservative anti-drift workflow, but still need enough control to keep the original subject readable. If you want to see how the prompt is structured, how the reference images are connected, and how the final divergent edit is produced, watch the full video tutorial linked below.
⚙️ Try the Workflow Online
👉 Workflow: https://www.runninghub.ai/post/2024514345513787393?inviteCode=rh-v1111
Open the link above to run the workflow directly online and view the generation results in real time.
If the results meet your expectations, you can also deploy it locally for further customization.
🎁 Fan Benefits: Register now to get 1000 points, plus 100 daily login points — enjoy 4090-level performance with 48 GB of VRAM!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1R3ZBBrEik/
I will continue updating model resources on Quark Drive:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly prepared for local users, making creation and learning more convenient.
