Run this Workflow on the Cloud (Avoid OOM Errors!):
https://www.runninghub.ai/post/2043861564398247937/?inviteCode=rh-v1497
Motion transfer is extremely VRAM-heavy, so I highly recommend running this workflow on RunningHub instead of risking a crash on your local PC. (Register now for 1,000 free credits and start cloning movements today!)
Watch the Motion Test Video:
Check out my side-by-side dance test and setup guide:
[ ]
Workflow Overview:
The "Speed King" of motion cloning! Built on LTX 2.3, this workflow allows you to project any character's look onto a specific motion (like dancing) significantly faster than other models.
Key Features:
Dual ControlNet guidance: uses both Depth and DW-Pose ControlNets to keep the motion consistent and jitter-free.
Entertainment Optimized: Perfect for "Character-Swap" challenges and viral social media trends.
Comments (12)
Could you possibly share a link to the original video you copied the motion transfer from as a way to help test it when used?
The reference image character's face and body changed. Any fix?
It could be better with lip sync and upscaling.
nice workflow. thank you.
If the pose skeletons are overlaid and visible in the final result, what value should I change?
Thanks a lot! 300 frames for one shot! Nice!
Dude, what to do with this? If the frame shakes or the video zooms in, the video's consistency is compromised. Sometimes even the human anatomy breaks down.
Thank you very much for sharing. It works very well. Where do we change the number of steps for the LTX model?
RuntimeError: mat1 and mat2 shapes cannot be multiplied (7260x4096 and 2048x4096)
The width and height of images and videos must be divisible by 16.
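If you hit a shape-mismatch error like the one above, one common cause is input dimensions that are not divisible by 16. A minimal sketch (a hypothetical helper, not part of the workflow itself) that snaps dimensions down to the nearest valid size before loading them into the workflow:

```python
# Hypothetical helper: round width/height down to the nearest multiple
# of 16, since this workflow requires dimensions divisible by 16.
def snap_to_16(width: int, height: int) -> tuple[int, int]:
    return (width // 16) * 16, (height // 16) * 16

# Example: a 1080x1920 portrait video is trimmed to 1072x1920.
print(snap_to_16(1080, 1920))  # -> (1072, 1920)
```

Resizing (or center-cropping) your reference image and driving video to the snapped dimensions before running the workflow avoids this class of error.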
The video rapidly faded to white, eventually becoming a black and white video.
First of all, thank you so much for this workflow! I'm really enjoying using it. I have one quick request: I recall seeing ControlNet being used to apply facial expressions as well. Could you please update the workflow to include facial expression support? Thank you!