Hi everyone!
I trained the following motion LoRAs from drone footage.
Please keep the following in mind when using these LoRAs:
These LoRAs are trained on the v2 motion model, so they won't work with v3
However, the v3 adapter LoRA can be used (try with and without it)
They also work with the AnimateLCM motion model (as it is based on v2)
Use a 3:2 aspect ratio, e.g. (w×h) 576×384, 768×512, etc.
The LoRAs are trained on 16 frames, so they might repeat on longer batches
If you want a drone view, keywords like "drone view", "drone shot", or "drone footage" might help with the motion
If you want more motion, try increasing the scale multival (e.g. to 1.2-1.5)
For the back-view LoRA, I recommend using words like "back view" or "from behind" in your prompts
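As a quick sketch of the aspect-ratio tip above (my own helper, not part of the original release): these are the 3:2 resolutions whose width and height are both multiples of 64, which is the grid that latent diffusion models like AnimateDiff generally expect.

```python
# Hypothetical helper: list 3:2 (w, h) pairs where both sides are
# multiples of 64, e.g. 576x384 and 768x512 from the tips above.
def suggest_3_2_resolutions(max_width=1024):
    sizes = []
    for w in range(64, max_width + 1, 64):
        # height for an exact 3:2 ratio
        if (w * 2) % 3 != 0:
            continue
        h = w * 2 // 3
        if h % 64 == 0:
            sizes.append((w, h))
    return sizes

print(suggest_3_2_resolutions())
# includes (576, 384) and (768, 512)
```

Any of these should work; the two sizes named in the tips are just the mid-range options.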
Description
Version trained with 3 input videos instead of one.
FAQ
Comments (4)
hey, can you say what footage you trained these on?
Sure, I simply filmed a water tower in my hometown with a drone. For the 3IV version I also used a clip I recorded with my phone while orbiting a glass of water :)
Hello, I really love it! Could I ask which tool you used for training the motion lora? And did you use an existing lora as starting point?
Hi, thanks! I use these ComfyUI nodes to train motion LoRAs: https://github.com/kijai/ComfyUI-ADMotionDirector
You can also find some example workflows there. There is no need to use existing mLoRAs as a base.