CivArchive
    Qwen 2511 AnyLight: One-Click Lighting Transfer from Image 2 to Image 1 Workflow - v1.0
    NSFW

    Qwen 2511 AnyLight is a ComfyUI workflow designed for image-based lighting transfer, relighting, light-and-shadow enhancement, and reference-driven visual reconstruction. The core idea is simple: use image 1 as the main target image, use image 2 as the lighting reference, and let Qwen Image Edit 2511 transfer the lighting style, shadow direction, highlight structure, and overall atmosphere from the reference image onto the target image.

    This workflow is especially useful when you already have a good image but the lighting is flat, weak, inconsistent, or not cinematic enough. Instead of regenerating the whole image from scratch, the workflow focuses on applying a new lighting condition while preserving the main subject, composition, identity, pose, camera framing, and scene structure as much as possible. It can be used for portraits, product images, character renders, fashion images, interior scenes, cinematic stills, social media covers, and AI-generated images that need stronger light direction and depth.

    The workflow is built around Qwen Image Edit 2511 and the AnyLight LoRA route. It uses qwen_image_edit_2511_bf16 as the main image editing model and QIE-2511-AnyLight as the specialized lighting-transfer LoRA. The workflow also uses the Qwen image VAE, Qwen vision-language text encoder, TextEncodeQwenImageEditPlus, KSampler, CFGNorm, FluxKontextMultiReferenceLatentMethod, and related reference-latent processing nodes. This combination makes the workflow suitable for multi-image editing, where the target image and lighting reference image are both used inside the conditioning process.

    A key part of this workflow is the reference-image structure. Image 1 is the target image that should be preserved. Image 2 provides the lighting style. Depending on the setup, image 2 can be a real photo, a lighting reference render, a Blender lighting pass, a material preview, a studio lighting example, or a cinematic scene with the desired light direction. The workflow then asks Qwen 2511 to apply the lighting from image 2 to image 1. This makes it much more controllable than writing only text prompts like “cinematic lighting” or “dramatic shadows,” because the model receives a direct visual reference for the lighting condition.

    The workflow can also work well with Blender-generated lighting images. This is useful for creators who want more precise control over lighting direction, highlight placement, shadow shape, rim light, fill light, and mood. You can create or render a simple lighting reference in Blender, then use it as image 2. The workflow will use that lighting image as a visual guide and transfer the light-and-shadow feeling to the target image. This is why the workflow is useful for AnyLight-style relighting: it gives users a way to control image lighting through a visual reference rather than relying only on prompt words.

    The TextEncodeQwenImageEditPlus node is central to the workflow. It accepts the prompt, VAE, image 1, image 2, and optional image 3 inputs. This allows the model to understand both the target image and the lighting reference. A simple prompt such as “Apply the lighting from image 2 to image 1” can already describe the task clearly. For more advanced use, the prompt can also describe the desired scene, subject preservation, lighting direction, color temperature, contrast level, shadow softness, and final photographic style.
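    As an illustration of how these inputs fit together, here is a minimal sketch of what the conditioning node might look like in ComfyUI's API-format graph. This is not the workflow's actual JSON: the node ids (`load_target_image`, `load_lighting_reference`, etc.) and exact input names are assumptions for illustration, based only on the inputs described above.

```python
# Hypothetical API-format fragment for the multi-image conditioning node.
# Each input is [source_node_id, output_index], as in ComfyUI graph JSON.
conditioning_node = {
    "class_type": "TextEncodeQwenImageEditPlus",
    "inputs": {
        "prompt": "Apply the lighting from image 2 to image 1",
        "clip": ["qwen_text_encoder_node", 0],    # Qwen vision-language encoder
        "vae": ["qwen_vae_node", 0],              # Qwen image VAE
        "image1": ["load_target_image", 0],       # image 1: target to preserve
        "image2": ["load_lighting_reference", 0], # image 2: lighting reference
        # "image3" is optional and left unconnected in this sketch
    },
}
```

    The key point is the role split: image 1 is the content the edit must preserve, while image 2 only contributes the light direction, shadow pattern, and color temperature that the prompt asks the model to transfer.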

    The workflow also includes FluxKontextMultiReferenceLatentMethod nodes with reference-latent method settings. These nodes help manage how reference images are injected into the conditioning pipeline. In practical terms, they support stronger reference control and help the workflow understand that the second image is not a new subject to insert, but a lighting and shadow reference to transfer.

    CFGNorm is used to stabilize the model behavior during the editing process. This can help reduce over-guidance problems and keep the output more balanced. In relighting tasks, excessive guidance can easily damage identity, change facial structure, alter clothing, or over-transform the background. CFGNorm helps keep the edit more controlled.

    The workflow also includes KSampler settings and Lightning LoRA logic. The note inside the workflow suggests that the standard Qwen route may use higher step counts, while the Lightning LoRA route can work with far fewer steps. This is useful for fast testing: preview the lighting transfer quickly, then increase the steps or refine the prompt for a more polished final result.
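    The two routes can be thought of as two sampler presets. The sketch below illustrates the trade-off; the specific step and CFG values are assumptions typical of Lightning-style LoRA setups, not the workflow's actual numbers.

```python
# Illustrative presets for the two sampling routes described above.
def sampler_settings(use_lightning_lora):
    """Return KSampler-style settings for the standard vs Lightning route."""
    if use_lightning_lora:
        # Lightning LoRA route: very few steps and low CFG for fast previews
        return {"steps": 4, "cfg": 1.0, "sampler_name": "euler", "denoise": 1.0}
    # Standard Qwen route: more steps and guidance for a refined final pass
    return {"steps": 20, "cfg": 2.5, "sampler_name": "euler", "denoise": 1.0}

preview = sampler_settings(use_lightning_lora=True)
final = sampler_settings(use_lightning_lora=False)
```

    A practical pattern is to iterate on image 2 and the prompt with the fast preset, then switch to the standard preset once the lighting direction looks right.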

    This workflow is useful for both practical image editing and creative testing. It can turn a flat portrait into a more cinematic portrait, add studio-style light and shadow to an AI-generated image, transfer warm café lighting to a character scene, apply dramatic rim light to a product shot, or use a Blender light setup to drive the mood of a final image. It is also useful for before-and-after comparison posts, Civitai workflow demonstrations, YouTube tutorials, and RunningHub online workflow showcases.

    Main features:

    - Qwen Image Edit 2511 relighting workflow

    - AnyLight LoRA support

    - One-click lighting transfer from image 2 to image 1

    - Supports Blender lighting reference images

    - Image 1 as target image, image 2 as lighting reference

    - TextEncodeQwenImageEditPlus multi-image conditioning

    - Qwen image VAE and Qwen visual text encoder support

    - FluxKontextMultiReferenceLatentMethod reference control

    - CFGNorm stabilization

    - KSampler-based image editing pipeline

    - Supports lighting, shadow, highlight, and mood transfer

    - Useful for portraits, product images, interiors, characters, and cinematic images

    - Suitable for fast online testing on RunningHub

    - Practical for Civitai showcases and AIGC workflow publishing

    Recommended use cases:

    Lighting transfer, image relighting, portrait light enhancement, cinematic shadow generation, Blender light pass transfer, product lighting adjustment, studio lighting simulation, character render relighting, fashion image enhancement, café lighting transfer, warm light and cool shadow editing, dramatic rim light generation, AI image post-production, flat image improvement, social media cover polishing, Civitai example image preparation, and RunningHub workflow demonstration.

    Suggested workflow:

    Prepare image 1 as the target image. This should be the image you want to relight. A clear portrait, product image, character scene, or clean render works best. The subject should be visible, the composition should already be acceptable, and the image should not be too blurry or heavily compressed.

    Prepare image 2 as the lighting reference. This image should clearly show the lighting direction, shadow pattern, contrast level, color temperature, and atmosphere you want to transfer. It can be a real photo, a Blender lighting render, a material preview, a cinematic frame, or a generated lighting reference. The clearer the lighting information in image 2, the easier it is for the workflow to transfer the desired effect.

    Use a direct prompt first. A simple instruction such as “Apply the lighting from image 2 to image 1” is often enough for testing. If you need stronger control, add preservation rules. For example: “Apply the lighting from image 2 to image 1 while preserving the subject identity, pose, clothing, composition, camera angle, background structure, and facial features.” This helps reduce unwanted changes.
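    The two prompt levels above can be captured in a small template. The helper name and structure are our own illustration; the preservation attributes come directly from the example prompt in the text.

```python
# Minimal prompt-builder sketch for the simple and strict prompt variants.
def build_relight_prompt(preserve=None):
    """Return a lighting-transfer prompt, optionally with preservation rules."""
    base = "Apply the lighting from image 2 to image 1"
    if not preserve:
        return base
    return base + " while preserving the " + ", ".join(preserve) + "."

simple = build_relight_prompt()
strict = build_relight_prompt([
    "subject identity", "pose", "clothing", "composition",
    "camera angle", "background structure", "facial features",
])
```

    Start with `simple` for testing, and switch to `strict` only when the edit drifts too far from the target image.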

    For portrait relighting, describe the lighting quality. You can mention soft side light, warm sunlight, cool rim light, cinematic contrast, natural skin texture, realistic shadows, catchlights in the eyes, or professional studio photography. For product images, describe material realism, reflection control, highlight placement, clean shadow, and premium advertising style. For character images, describe dramatic light direction, background atmosphere, and subject preservation.

    If the output changes the subject too much, simplify the prompt and add stronger preservation language. If the lighting transfer is too weak, make image 2 more visually clear or use a more direct lighting prompt. If the result becomes over-contrasted, ask for softer shadows and natural light. If the face or product details are damaged, reduce the strength of the edit or use a cleaner target image.

    For Blender-driven lighting, create a simple reference image with the intended light direction and shadow pattern. You do not need a complex final render. The lighting reference mainly needs to communicate where the key light, fill light, rim light, and shadows should appear. Then use that reference as image 2 and let the workflow transfer the lighting logic to image 1.

    This workflow is designed for creators who want stronger control over lighting without rebuilding the image from zero. It is especially useful when the image composition is already good, but the light direction and shadow depth are not strong enough. With Qwen Image Edit 2511, AnyLight LoRA, multi-reference conditioning, and visual lighting guidance, this workflow provides a practical way to turn ordinary images into more polished, cinematic, and production-ready visuals.

    🎥 YouTube Video Tutorial

    Want to know what this workflow actually does and how to start fast?

    This video explains what the tool is, how to launch the workflow instantly, and shares my core design logic — no local setup, no complicated environment.

    Everything starts directly on RunningHub, so you can experience it in action first.

    👉 YouTube Tutorial: https://youtu.be/zeD5FLeon9U

    Before you begin, I recommend watching the video in full; the context helps you understand the tool faster and avoid common pitfalls.

    ⚙️ RunningHub Workflow

    Try the workflow online right now — no installation required.

    👉 Workflow: https://www.runninghub.ai/post/2014246399210164226?inviteCode=rh-v1111

    If the results meet your expectations, you can later deploy it locally for customization.

    🎁 Fan Benefits: Register to get 1,000 points, plus 100 points per daily login, and enjoy RTX 4090 performance with 48 GB of VRAM!

    📺 Bilibili Updates (Mainland China & Asia-Pacific)

    If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.

    📺 Bilibili Video: https://www.bilibili.com/video/BV1drzNBTE8M/

    ☕ Support Me on Ko-fi

    If you find my content helpful and want to support future creations, you can buy me a coffee ☕.

    Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.

    👉 Ko-fi: https://ko-fi.com/aiksk

    💼 Business Contact

    For collaboration or inquiries, please contact aiksk95 on WeChat.


    📦 Quark Netdisk Resources

    I will keep updating model resources on Quark Netdisk (夸克网盘):

    👉 https://pan.quark.cn/s/20c6f6f8d87b

    These resources are mainly intended for local users, to support creation and learning.

    Details

    Type: Workflows
    Base Model: Qwen 2
    Downloads: 31
    Platform: CivitAI
    Platform Status: Available
    Created: 5/9/2026
    Updated: 5/14/2026
    Deleted: -

    Files

    qwen2511AnylightOneClick_v10.zip
