This workflow is the twin of my "Clothes Swap that Really Works". You should check it out too! 😉
Each version uses a different technique. Try all of them:
Ancient: inpaint the target face, using ControlNets, IPAdapter, and ReActor.
Overlay: crop the source face, paste it over the target face, and repair the edges with an inpaint. This technique does not "reproduce" the source face; it uses exactly the same face.
Square: make a big square Inpaint on the target face.
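As a toy illustration of the Overlay idea (not the workflow's actual code), here is a minimal numpy sketch of the crop-and-paste step; the function name and the box format are hypothetical, and the edge repair would be a separate inpaint pass:

```python
import numpy as np

def overlay_face(target, source_face, top_left):
    """Paste a cropped source face over the target image at top_left.
    The seams left around the pasted region are what the workflow's
    inpaint step would later repair."""
    y, x = top_left
    out = target.copy()
    h, w = source_face.shape[:2]
    out[y:y + h, x:x + w] = source_face
    return out

# Toy 8x8 "images" instead of real photos
target = np.zeros((8, 8, 3), dtype=np.uint8)
face = np.full((4, 4, 3), 255, dtype=np.uint8)
result = overlay_face(target, face, (2, 2))
print(result[3, 3])  # inside the pasted region -> [255 255 255]
```

Because the pixels are copied verbatim, the face is exactly the source face, which is the whole point of this variant.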
If you’ve found this workflow, chances are you just want to download it, run it, and get good results. Perfect! I did the same when I first started using ComfyUI. But life is full of unfortunate events, so if after doing that you run into any issues or want to know how things work, come back here — I’ll try to help and tell you a good story.
This workflow was built with SD15, but that's just because I have low VRAM. It should work with any checkpoint: SD15, SDXL, Pony, Flux, or any other.
If you enjoy my work, give me a like, consider leaving a comment or supporting me. I’d really appreciate it! Buy me a Coffee.
Note: Example images are AI-generated and do not represent real people.
Important tip: ReActor occasionally outputs a completely black image. It's a really annoying bug. Fortunately, I found a way to fix it: go to the file ComfyUI/custom_nodes/comfyui-reactor/scripts/reactor_sfw.py and change the line SCORE = 0.96 to SCORE = 5 — the error will never happen again! 😇
Comments (31)
Hello! I'm trying to use the v0 workflow. I downloaded the same safetensors that you show in the example, but I keep getting this error: "mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)". I think it means there's an issue between SD15 and SDXL models? I can't see what could be SDXL. Any idea? Thanks!
On which node is this error appearing?
Try checking the flag "force_recriate" on two nodes "Caching Conditioning to not Waste". They are next to the "CLIP Text Encode (Prompt)" nodes.
@alastor_666_1933 Thank you for your answer! It was on the KSampler. I don't know exactly what changed, but it works now. I tried checking every "force recreate" flag and then unchecking them, and I may have downloaded other packages as well. Anyway, I can start using the workflow now. Thank you for your work!
@Raisock520 Great news!
I know the reason for that. The "Caching Conditioning to not Waste" node was built to speed up the process when you use the same prompt many times. It saves a tensor file with the "CLIP Text Encode" result, so when you run the same prompt again the node skips "CLIP Text Encode".
However, when you change the checkpoint type (e.g., SD15 to SDXL), this file becomes incompatible, so you need to check the "force_recriate" flag or change the prompt.
I hadn't thought of that when I developed this node. Thanks for letting me know; I'll look for a way to fix it.
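From that description, the caching logic can be sketched roughly like this (a hypothetical reconstruction, not the node's actual code; the folder name comes from a later comment, and keying the cache on the prompt text alone is exactly why switching checkpoint types breaks it):

```python
import hashlib
import os
import pickle

CACHE_DIR = "output/caching_to_not_waste"  # folder name mentioned in the thread

def cached_encode(prompt, encode_fn, force_recreate=False):
    """Cache the CLIP-encode result on disk, keyed by prompt text only.
    Because the checkpoint type is NOT part of the key, a cache written
    with SD15 gets reused (shape-incompatible) after switching to SDXL,
    producing the mat1/mat2 multiplication error from the comment above."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".pkl")
    if not force_recreate and os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)  # cache hit: skip the encoder entirely
    result = encode_fn(prompt)
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result
```

A robust fix would be to include the checkpoint name (or the conditioning tensor's shape) in the cache key, so an SD15 cache can never be served to an SDXL run.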
Thanks a lot for this :)), it just worked and it's awesome.
Question, I would like to add a LoRA that makes the face-skin more realistic in the end result. It should only be applied to the face, nowhere else. Where in the workflow would I put that? And how?
I've experimented with different places to load in a LoRA for realistic skin texture. And even tried putting the finished result through FaceDetailer. Nothing works for me.
I would appreciate a little help with this!
This workflow is absolutely awesome on high-def inputs once you get it up and running, though.
I would need to test it. Can you tell me which LoRA it is, and which workflow you are using (Ancient, Overlay, or Square)?
@alastor_666_1933 I've had the best results with the Square workflow. I managed a workaround shortly after writing my comment by following the workflow shown in this video: https://www.youtube.com/watch?v=IwbgOj_Y6dY&t=133s&ab_channel=Aiconomist
However, I don't feel it's the best or most efficient approach, since it takes the completed image and processes that instead of being integrated seamlessly into the workflow. Unfortunately, it changes the facial features of the completed image, not just the skin. For me, the best place for it would be right before the actual ReActor FastFaceSwap module, on the image it previews before the "merge".
Btw, the solution doesn't have to be a LoRA, but from what I've understood, that's generally how people go about fixing fake-looking skin. I'm all ears for other solutions, of course!
I'll see if I can make it happen while waiting for you to come up with some genius method to enhance the skin :D
EDIT: OK, so typing out my problem here kind of made me understand it better, and I might have solved it. I set up the same workflow shown in the YouTube video, but used the outgoing image from "Set Face Swap Weight" in your workflow as the input image for PersonMaskUltraV2, and then the result image as the incoming input for FastFaceSwap! That worked much better. Now I just need to find the perfect LoRA stack :D:D:D
How do I exclude the hair from the swap? (I want the hair from the original target to stay the same.) Checking/unchecking hair_mask in the "A Person Mask Generator" node doesn't affect anything one way or the other for me.
It's probably because of the "Caching" nodes. You need to locate the "Caching" node after the hair mask segment and check the "force recreate". Alternatively, you can delete all the content of the folder ComfyUi/output/caching_to_not_waste to force the recreation of all caches.
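If you prefer to script that cleanup, here is a small sketch (the folder path comes from the comment above; the helper name is made up):

```python
from pathlib import Path

def clear_cache(folder):
    """Delete every cached file so all "Caching" nodes recreate
    their results on the next run. Returns how many files were removed."""
    p = Path(folder)
    if not p.exists():
        return 0
    removed = 0
    for f in p.iterdir():
        if f.is_file():
            f.unlink()
            removed += 1
    return removed

# Path used by the workflow (adjust to your ComfyUI install):
# clear_cache("ComfyUI/output/caching_to_not_waste")
```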
@alastor_666_1933 I tried this, and it still takes the hair from the source image. I even set "force_recreate" to true on ALL caching nodes and deleted the entire contents of the caching_to_not_waste folder. What am I doing wrong? I'm working with the "Square" workflow, btw.
Also, thank you so much for responding and helping out! Is there any way I can send you a donation? This workflow is quite genius.
@musikpojken Based on your suggestions, I made some improvements to the Square workflow and added a flag to remove the hair from the mask (check the green lemon node below the image input). Download the new version of the workflow and see if it's what you want.
About the donation, I think it's here: https://ko-fi.com/alastor_666_1933. I've never received a donation, so I don't know if it works 😂
Which value should I modify to increase the resolution of the final output image?
Take a look at the Image Resize nodes.
It does not work for me. I end up with 24 files of the whole process and the same source image. I downloaded all the nodes correctly, I suppose.
Just to be sure: how many faces are in the input images you are using?
The workflow was designed to process only one face on each input image.
I just can't get the ReActor nodes to work.
Try to install ReActor from this source: https://codeberg.org/Gourieff/comfyui-reactor-node
It doesn't work for me. Some custom nodes failed to import, and when I install them manually, ComfyUI doesn't say a node is missing. Another node says "install" while it's already installed. It's not the fault of the workflow author, simply custom nodes that don't work. I have no idea how to solve these problems.
Could you tell me which nodes had the problem?
@alastor_666_1933 Sorry for the late reply, but it's the ReActor nodes.
@panteraleo555491 Try to install ReActor from this source: https://codeberg.org/Gourieff/comfyui-reactor-node
@alastor_666_1933 Thanks. Quick update: it was successful. I've used ReActor with no issues now.
@panteraleo555491 Hi, I can't. Even if I install manually, I get the error "Install RequiredReActorSetWeight / Install RequiredReActorFaceSwap".
I got a black screen output, even after applying the SCORE = 5 fix.
Try to install ReActor from this source: https://codeberg.org/Gourieff/comfyui-reactor-node; it's the uncensored version.
ipadapter_plus ClipVision model not found. How do I fix it?
Go find CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
There's no widget to load or point to it; it just has to be there.
How did you get the unified loader to work without an IPAdapter? Mine throws an error, but no IPAdapter I've downloaded has worked yet.
operands could not be broadcast together with shapes (683,512,1,4) (683,512,4) (683,512,4)
When running it, I was constantly getting a broadcast error, and it looked like the "A Person Mask Generator" node was adding an alpha channel, which is what that error is saying. I had to edit the code of a_person_mask_generator_comfyui.py around line 190. Edit the code to this:
if len(masks) == 0:
    mask_arrays.append(mask_background_array)
else:
    for i, mask in enumerate(masks):
        mask_data = np.squeeze(mask.numpy_view())
        condition = (
            np.stack((mask_data,) * image_shape[-1], axis=-1)
            > confidence
        )
        mask_array = np.where(
            condition, mask_foreground_array, mask_background_array
        )
        mask_arrays.append(mask_array)
I added the line mask_data = np.squeeze(mask.numpy_view()), and in the condition = (... block I changed it to use np.stack((mask_data,).
This should solve the problem, as it removes the extra alpha channel that this node was adding.
You will need to restart the ComfyUI server for the changes to take effect.
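To see why the squeeze matters, here is a small numpy demonstration of the shape mismatch from the error message (the 683x512 dimensions and the 4-channel assumption are taken straight from the reported error; the variable names mirror the snippet above):

```python
import numpy as np

# The mask comes back with a stray singleton channel: (H, W, 1)
mask_data = np.ones((683, 512, 1), dtype=np.float32)
image_shape = (683, 512, 4)  # 4 channels, matching the error message

# Without the squeeze, stacking yields (683, 512, 1, 4), which cannot
# broadcast against the (683, 512, 4) foreground/background arrays --
# exactly the "operands could not be broadcast together" error above.
bad = np.stack((mask_data,) * image_shape[-1], axis=-1)
print(bad.shape)  # (683, 512, 1, 4)

# np.squeeze drops the singleton axis, so the stack lines up correctly.
good = np.stack((np.squeeze(mask_data),) * image_shape[-1], axis=-1)
print(good.shape)  # (683, 512, 4)
```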

