This workflow shows how to do video-to-video in ComfyUI while keeping a consistent face in the final output.
We keep the motion of the original video by using ControlNet depth and OpenPose.
We use AnimateDiff to keep the animation stable.
Finally, ReActor and a face upscaler keep the face we want. Optionally, we also apply IPAdapter during generation to help keep the face closer to what we want even before the swap. (Note: IPAdapter can greatly slow down generation depending on your machine.)
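As a rough illustration of the order the stages run in, here is a plain-Python sketch. Every function name below is a hypothetical placeholder standing in for a ComfyUI node, not a real API; each stub just tags the frame so the data flow is visible.

```python
# Hypothetical placeholder stages (not real ComfyUI APIs).
def estimate_depth(frame): return ("depth", frame)
def estimate_openpose(frame): return ("pose", frame)
def animatediff_sample(frames, depths, poses): return [("gen", f) for f in frames]
def reactor_swap(frame, face_ref): return ("swap", frame, face_ref)
def upscale_face(frame): return ("upscaled", frame)

def run_vid2vid(frames, face_ref):
    # 1. Extract motion guidance from the source video (the two ControlNets).
    depth_maps = [estimate_depth(f) for f in frames]
    poses = [estimate_openpose(f) for f in frames]
    # 2. Sample with AnimateDiff conditioned on both ControlNets
    #    (optionally with IPAdapter pushing toward the reference face).
    generated = animatediff_sample(frames, depth_maps, poses)
    # 3. Swap the target face in every frame, then upscale the face region.
    swapped = [reactor_swap(f, face_ref) for f in generated]
    return [upscale_face(f) for f in swapped]
```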
Comments (12)
Hello, would you happen to know why I'm getting "list index out of range" when trying to run the ReActor swap node?
Hmm, is your ReActor node up to date? Additionally, it could be that there's either zero or more than one face.
If you look at the ReActor node you should see source_faces_index & input_faces_index. These tell the node which face in the image to swap. Either you picked an index out of range, or it could not detect a face in one or more of your images.
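A minimal sketch of why this surfaces as "list index out of range": the detector returns a list of faces, and source_faces_index / input_faces_index are just positions in that list. The function below is hypothetical, not ReActor's actual code.

```python
# Hypothetical helper mirroring what a swap node does with a face index.
def pick_face(detected_faces, index):
    if not detected_faces:
        # No face found in this frame at all.
        raise ValueError("no face detected in this frame")
    if index >= len(detected_faces):
        # This is the situation that shows up as "list index out of range".
        raise IndexError(f"face index {index} out of range "
                         f"(only {len(detected_faces)} face(s) detected)")
    return detected_faces[index]
```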
Your approach of combining depth and OpenPose is exactly how I would do it. Respect for your workflow. After reinstalling and searching for the missing components, I now have the following problem, which I cannot solve:
"Error occurred when executing IPAdapterApply: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]). "
Do you know where my mistake is? I can't find any setting in your workflow that could cause this. Thanks.
Hmm, check the Load IPAdapter Model node to see which model you have there. I'm using an SD 1.5 base model, so I'm using the IPAdapter model for that. If you're using SDXL, you'll need the other model. You can also try updating your ComfyUI, as sometimes a new update might have broken something.
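A sketch of the mismatch behind that error: the checkpoint was built for one image-encoder width and the model for another, so `load_state_dict` finds tensors with different shapes under the same key. The shapes below are taken from the error message itself; the check is generic plain Python, not the real loader.

```python
# Shapes lifted from the reported error: the checkpoint expects 1280-dim
# encoder features, the current model was built for 1664-dim features.
checkpoint_shapes = {"proj_in.weight": (768, 1280)}  # from the loaded adapter file
model_shapes = {"proj_in.weight": (768, 1664)}       # current Resampler module

def find_mismatches(ckpt, model):
    # Keys present in both but with different tensor shapes -> load fails.
    return [k for k in ckpt if k in model and ckpt[k] != model[k]]
```

Swapping in the adapter/encoder pair that matches your base model (SD 1.5 vs SDXL) makes the shapes agree and the error disappear.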
I found this thread https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/108#issuecomment-1848423757
And it worked when I used this model:
https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors
I couldn't help but notice that the swap model is retinaface_resnet50. Isn't that just a detection model?
I got error messages about ReActor and IPAdapter. I have installed those custom nodes but they do not work for me.
This workflow uses the old IPAdapter.
You could replace the node with the IPAdapter Advanced node from this:
https://github.com/cubiq/ComfyUI_IPAdapter_plus
@Catz any example, please?
@spaycker Oufff, this is some old tech now. Try using Wan VACE instead. IPAdapter Advanced is a specific node: just delete the IPAdapter node, add the (Advanced) version, and reconnect the inputs/outputs.
I am having the following issue: Error occurred when executing ImpactKSamplerBasicPipe: mat1 and mat2 shapes cannot be multiplied (1232x2048 and 768x320)
Any help is appreciated!
Most likely you're using a model that's not compatible. Happens to me all the time.
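For what it's worth, a sketch of what "mat1 and mat2 shapes cannot be multiplied" means here: a matrix multiply only works when the inner dimensions agree, and (1232 x 2048) @ (768 x 320) fails because 2048 != 768. 2048 is the SDXL text-embedding width and 768 the SD 1.5 one, which is why mixing an SDXL checkpoint with SD 1.5 components (or vice versa) trips this.

```python
# Generic inner-dimension check; the shapes come from the error message.
def can_matmul(a_shape, b_shape):
    # (m x k) @ (k x n) is valid only when the k dimensions match.
    return a_shape[1] == b_shape[0]
```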
