Part 3 of my Lab series. Check out my other workflows on my profile; they can all be used in conjunction!
Sorry for the clutter! Subgraphs seem to break during upload.
Unpacked: all subgraphs removed since they sometimes glitched. Everything is displayed on the first-level canvas.
⚡ swapLab Flux 2 Klein
The Ultimate Dual-Image Characteristic Transfer Workflow for ComfyUI
Load two images. Pick what to transfer. Watch the magic happen.
swapLab is a ComfyUI workflow that takes any two photos and seamlessly transfers faces, hair, clothes, and more from one person onto another — producing composites so clean they look like they were shot that way. No Photoshop. No manual masking. No fuss.
🚀 Why swapLab is so effective
⚡ Powered by Flux 2 Klein 9B — Fast Doesn't Mean Cheap
Forget waiting minutes for mediocre results. swapLab runs on Flux 2 Klein 9B — one of the most advanced open-source image editing models on the planet. Klein's reference-guided architecture was built for exactly this kind of task, delivering cinema-quality swaps at speeds that will make your jaw drop. Iterate faster. Create more. Wait less.
📐 Any Image. Any Size. Zero Prep.
Got images from different cameras, different aspect ratios, different resolutions? Don't touch them. swapLab's intelligent autoscaling system reads your input dimensions and automatically calibrates every part of the pipeline — latent canvas, preprocessor resolution, generation size — all of it. Just drop your images in and go.
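The core idea behind this kind of autoscaling can be sketched in a few lines of Python. This is an illustrative sketch, not the workflow's actual node code: the function name, the 1-megapixel budget, and the rounding multiple are assumptions chosen to mirror how latent-space pipelines typically constrain input size.

```python
import math

def autoscale(width, height, max_megapixels=1.0, multiple=8):
    """Shrink dimensions (never enlarge) so the total pixel count
    stays under a budget, rounding each side to a multiple that
    the latent canvas accepts. Illustrative only."""
    budget = max_megapixels * 1_000_000
    # Uniform scale factor that preserves aspect ratio.
    scale = min(1.0, math.sqrt(budget / (width * height)))
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h
```

For example, a 2000x1500 input would be scaled down until its area fits under the budget, while an 800x600 input would pass through untouched.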
🎯 You Stay In Control
swapLab works beautifully with zero prompt input — but when you want to get specific, the guided prompting system lets you steer the output with natural language. Want to preserve a particular detail? Emphasize something? Tweak the vibe? Just tell it. Your creative vision, turbocharged by AI.
🔒 ReActor Face Lock — Identity That Actually Sticks
Generative models drift. Faces shift. Not here. After every swap, swapLab runs a ReActor face reinforcement pass using your original source image — mathematically anchoring the final face to your reference identity. The person in the output is the person in your photo. No blending artifacts. No uncanny valley. Just a face that holds.
🎛️ Swap Modes
Mode: what transfers
👤 Face & Hair: facial features, expression, hairstyle
👗 Outfit: full clothing transfer, accessories
🧍 Full Body: complete person into new context
✨ Custom: you pick any combo of face, hair, body, clothes, accessories, background
🛠️ Under the Hood
🧠 Flux 2 Klein 9B — reference-guided generation engine
🎭 PersonMaskUltra V2 + SAM3 — surgical region detection and masking
📏 Intelligent Autoscaling — universal image compatibility
💬 Guided Prompting — natural language creative control
🔐 ReActor — identity-locking face reinforcement pass
Description
Workflow without subgraphs; everything is unpacked and visible on the first-level canvas.
Comments (8)
Thank you!
But the subgraph is broken. Maybe a bug in my ComfyUI version.
Oh, possibly. I am running v0.16 if that makes a difference. I also had some issues with the subgraphs being buggy. Does unpacking them help? Otherwise I can remake the workflow with only explicit connections.
An unpacked version has been added with fixed wiring.
Not working. There is no output. The nodes "Marked for Removal" and "Preview Image" are not even connected. Masking works but no result.
Did you use the unpacked workflow? The first workflow was corrupted by subgraph usage on first upload. The unpacked version has all subgraphs expanded and the wiring fixed.
I just downloaded the unpacked version and re-imported it into my local ComfyUI. All Preview Image nodes were connected. By default the swap settings are all set to False; make sure to set the things you want marked for removal and insertion to True. Let me know if it works for you.
Doesn't the upscale factor need to be connected to anything?
Sorry for the confusion, that was left over after the subgraph corruption. The workflow automatically limits the input images to a maximum of 1 megapixel in node #768 ("Change the size"), below the Load SAM3 Model node. The masked pieces are then upscaled to approximately 1.5 megapixels in node #781 ("ImageScaleToTotalPixels").
That upscale factor node was used to extract the smallest dimension in the image and let the user specify their own resolution limit instead of the hard-coded 1 megapixel. You can still use this node if you would like; it exposes the dimensional-analysis variables to you.
TL;DR: it's left over from something else. To avoid frying your PC, the images are already limited to 1 megapixel. It's probably easier to upscale the final image. I should have a workflow with some good upscale methods on my profile page, or just use Lanczos / 4x-ESRGAN.
