Photoreal face replacement with prompt-guided control and natural blending
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: anyone expecting one-click results with zero tuning; you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense: you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
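Once a short test run looks right, the same graph can be driven headlessly. A minimal sketch, assuming a workflow exported in ComfyUI's API format (a dict of node id to `class_type`/`inputs`): the helper name `patch_workflow` and the toy two-node graph are illustrative, not part of the published workflow. The patched dict is what you would POST to a running ComfyUI server's `/prompt` endpoint wrapped as `{"prompt": ...}`.

```python
import json

def patch_workflow(workflow: dict, prompt_text: str, seed: int) -> dict:
    """Set the prompt text and sampler seed in an API-format ComfyUI
    workflow (node_id -> {"class_type", "inputs"}) without mutating it."""
    patched = json.loads(json.dumps(workflow))  # cheap deep copy
    for node in patched.values():
        if node.get("class_type") == "CLIPTextEncode":
            node["inputs"]["text"] = prompt_text
        if "seed" in node.get("inputs", {}):
            node["inputs"]["seed"] = seed
    return patched

# Toy graph standing in for the exported workflow JSON.
graph = {
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
    "3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}},
}
patched = patch_workflow(graph, "natural skin texture, soft light", seed=42)
```

For batch scripting, loop over seeds or prompt variants and submit each patched copy; the original export is never modified in place.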
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
Overview
FLUX Kontext Face Swap delivers accurate, natural face replacement through automatic alignment, prompt-guided control, and localized regeneration.
Key nodes in the ComfyUI FLUX Kontext Face Swap workflow
AutoCropFaces (#119 and #122)
Detects faces and produces crop metadata for alignment and pasting. If the face is partially missed or includes hair you do not want, increase crop size slightly or lower detection confidence to pick up more context.
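The "increase crop size slightly" advice amounts to growing the detected box by a relative margin before cropping. A minimal sketch of that idea (the helper `expand_box` and the 25% default are illustrative, not the node's actual parameters):

```python
def expand_box(x0, y0, x1, y1, img_w, img_h, margin=0.25):
    """Grow a detected face box by a relative margin on every side,
    clamped to the image bounds, so the crop keeps some context."""
    w, h = x1 - x0, y1 - y0
    dx, dy = w * margin, h * margin
    return (max(0, x0 - dx), max(0, y0 - dy),
            min(img_w, x1 + dx), min(img_h, y1 + dy))

box = expand_box(100, 120, 200, 240, img_w=512, img_h=512, margin=0.25)
```

A larger margin gives the paste step more surrounding skin to blend into; too large and you start regenerating hair and background you wanted untouched.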
FaceAlign (#121)
Uses InsightFace landmarks to warp the source face onto the base face geometry before merging. Switch the analysis device in FaceAnalysisModels (#120) to GPU when available for faster alignment.
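Landmark-based alignment of this kind typically means fitting a similarity transform (scale, rotation, translation) that maps the source landmarks onto the base face's landmarks, then warping with it. A numpy-only sketch of the least-squares fit, under the assumption that both landmark sets are (N, 2) arrays in pixel coordinates (the function name is illustrative; the node's internals may differ):

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform mapping src landmarks (N,2)
    onto dst landmarks (N,2). A point (x, y) maps to
    (a*x - b*y + tx, b*x + a*y + ty); returns the 2x3 matrix
    [[a, -b, tx], [b, a, ty]], usable with cv2.warpAffine."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1],  src[:, 0], 1.0
    a, b, tx, ty = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])

# Pure translation: three landmarks shifted by (+2, +3).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([2.0, 3.0])
M = similarity_transform(src, dst)
```

Restricting to a similarity (rather than a full affine) keeps the face from shearing, which is usually what you want when only a handful of landmarks are available.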
Image Paste Face (#125)
Blends the aligned face into the base image using the crop data. If edges look sharp or color is off, try a slightly larger crop box, or tone down the prompt so FLUX does less overpainting around the border.
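Seam hiding at the crop border is commonly done with a feathered alpha mask: full opacity in the face interior, ramping to zero at the crop edge. A numpy-only sketch of that blend (the helper name, linear ramp, and feather width are assumptions, not the node's actual method):

```python
import numpy as np

def feathered_paste(base, face, x0, y0, feather=8):
    """Paste `face` (H,W,3 float arrays in 0..1) into `base` at (x0, y0)
    with a linear alpha ramp over `feather` pixels at the crop border."""
    h, w = face.shape[:2]
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    # Distance (in px) to the nearest crop edge drives the alpha ramp.
    edge = np.minimum(np.minimum(ys, h - 1 - ys), np.minimum(xs, w - 1 - xs))
    alpha = np.clip((edge + 1) / feather, 0.0, 1.0)[..., None]
    out = base.copy()
    region = out[y0:y0 + h, x0:x0 + w]
    out[y0:y0 + h, x0:x0 + w] = alpha * face + (1 - alpha) * region
    return out

base = np.zeros((32, 32, 3))
face = np.ones((16, 16, 3))
out = feathered_paste(base, face, 8, 8, feather=4)
```

A wider feather hides color mismatches better but lets more of the base image bleed into the face; the later Kontext refinement pass then cleans up what the blend alone cannot.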
FluxKontextImageScale (#134)
Rescales the composite to the native shape expected by Kontext so the VAE can encode without distortion. Leave this in place to prevent stretching or drift in the refined output.
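As a rough sketch of what such a rescale involves (the exact resolutions Kontext accepts are defined by the node; the ~1-megapixel target and multiple-of-16 rounding here are assumptions for illustration):

```python
import math

def kontext_scale(w, h, target_pixels=1024 * 1024, multiple=16):
    """Scale (w, h) toward `target_pixels` total area, preserving the
    aspect ratio and rounding each side to the nearest `multiple`."""
    s = math.sqrt(target_pixels / (w * h))
    snap = lambda v: max(multiple, round(v * s / multiple) * multiple)
    return snap(w), snap(h)

dims = kontext_scale(1920, 1080)  # -> (1360, 768)
```

Because both sides are scaled by the same factor before rounding, the aspect ratio is preserved up to the snap granularity, which is what prevents the stretching the note above warns about.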
UNETLoader (#140)
Loads the Kontext-tuned FLUX UNet. Use this together with the LoRA for the intended behavior of FLUX Kontext Face Swap. Changing the checkpoint will noticeably alter skin texture and overall fidelity.
LoraLoaderModelOnly (#141)
Applies the workflow's LoRA to localize reconstruction. If the swap drifts or edits spill outside the face, increase the LoRA influence slightly. If the look feels locked, reduce it for more creative freedom.
DualCLIPLoader (#8) and CLIPTextEncode (#6)
Provide text conditioning. Keep prompts short and targeted to the face region and expression. Avoid global style cues if you want to preserve the base image background and clothing.
FluxGuidance (#5)
Balances how much the sampler trusts the reference composite. Raise it to preserve the base composition more tightly, lower it for stronger prompt-driven edits within the face area.
Notes
FLUX Kontext Face Swap | ComfyUI workflow — see the RunComfy page for the latest node requirements.
Description
Initial release — FLUX-Kontext-Face-Swap.