This workflow uses the Wan Animate 2.2 model to replace a person in a video with someone else while preserving fine-grained facial expression details. Even in a two-person clip, you can choose to swap only one subject.
Fed up?
Models too big and filling up your storage, nodes a nightmare to install with constant errors, GPU RAM always running out and crashing halfway through? RunningHub solves it all! With an RTX 4090 (24GB VRAM) in the cloud and all models and nodes pre-integrated, just sign up with Gmail to get 1000 free credits and run your workflow instantly: smooth, fast, and hassle-free!
Workflow Link: https://www.runninghub.ai/workflow/1972942209085476866?inviteCode=rh-v1153
Comments (11)
An annoying thing is that on the 4090, every output pushes the GPU over 70 degrees...
and NSFW motion can't be restored perfectly
Indeed, GPU usage is still quite heavy. The base model of Wan Animate 2.2 is essentially Wan 2.1, so certain motions still don’t look as smooth or ideal as expected. Another limitation of the Wan models is their handling of NSFW content, which remains a weak spot. It might be worth experimenting with some NSFW-focused LoRAs to see if they can improve the results.
@VictorJWK dunno, has anyone tested these LoRAs?
What happened to the good ol' days when we used to get an image of the workflow and multiple samples? I miss those days.
You’re absolutely right, that’s my oversight. The workflow part will probably still need to be provided as a JSON file, but I should definitely be able to share more sample outputs. Thanks for the suggestion!
Where can I get vitpose_h_wholebody_model? The one I have isn't working properly
literally googling it will get you your answer
Please refer to the node author’s website: https://github.com/kijai/ComfyUI-WanAnimatePreprocess?tab=readme-ov-file
is this the one those Chinese creators used to inpaint a 3-minute-long nudify kpop dance video?
How can I switch between the subjects in a two-person clip? It always takes the first one.