AnimateLCM-I2V
LoRAs and Motion weights for fast Image-to-Video generation.
We support high-quality image-conditioned video generation in 4–8 steps.
Important Notes:
Use LCMScheduler for sampling.
CFG should be kept between 1 and 2; it is set to 1 by default.
The LoRA weight is set to 0.8 by default.
The motion scale factor is set to 0.8 by default.
16 frames is the preferred length; videos that are too long or too short tend to cause generation failure.
Image size greatly influences the generation quality.
Generation failure might occur due to the limited model size and training resources.
For more details, please refer to https://github.com/G-U-N/AnimateLCM
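The notes above can be collected into a short sketch. This is a hypothetical setup, not the official pipeline: the Hub repo id `wangfuyun/AnimateLCM`, the LoRA filename, and the `emilianJR/epiCRealism` base model are assumptions based on the public AnimateLCM release and the diffusers AnimateDiff integration, and this path covers text-to-video sampling; the image-conditioning (I2V) code itself lives in the G-U-N/AnimateLCM repo linked above.

```python
# Sketch of the recommended sampling settings from the notes above.
# Repo ids and the LoRA filename are assumptions, not confirmed by this page.

RECOMMENDED = {
    "num_inference_steps": 4,  # 4-8 steps are supported
    "guidance_scale": 1.0,     # keep CFG between 1 and 2 (1 by default)
    "lora_scale": 0.8,         # LoRA weight, 0.8 by default
    "motion_scale": 0.8,       # motion scale factor, 0.8 by default
    "num_frames": 16,          # 16 frames preferred
}

def build_pipeline():
    """Assemble the pipeline (downloads several GB of weights)."""
    import torch
    from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter

    adapter = MotionAdapter.from_pretrained(
        "wangfuyun/AnimateLCM", torch_dtype=torch.float16
    )
    pipe = AnimateDiffPipeline.from_pretrained(
        "emilianJR/epiCRealism",  # assumed SD1.5 base; swap in your own
        motion_adapter=adapter,
        torch_dtype=torch.float16,
    )
    # The notes require LCMScheduler for sampling.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    pipe.load_lora_weights(
        "wangfuyun/AnimateLCM",
        weight_name="AnimateLCM_sd15_t2v_lora.safetensors",
        adapter_name="lcm-lora",
    )
    pipe.set_adapters(["lcm-lora"], [RECOMMENDED["lora_scale"]])
    return pipe

def generate(pipe, prompt):
    """Run sampling with the recommended defaults."""
    return pipe(
        prompt=prompt,
        num_frames=RECOMMENDED["num_frames"],
        guidance_scale=RECOMMENDED["guidance_scale"],
        num_inference_steps=RECOMMENDED["num_inference_steps"],
    ).frames[0]
```

Note that `guidance_scale=1.0` effectively disables classifier-free guidance, which is what allows the 4–8 step budget; raising it toward 2 trades speed headroom for stronger prompt adherence.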
Links
Contact: [email protected]
Comments
Interesting work! Where can I find the image to video workflow? Thanks
Hello, I just put together an I2V workflow; please check it out here: https://civitai.com/models/774080
It's promising and better than t2v :)
Unfortunately, there's no way to test this one (no workflow, HF space, or GitHub app.py).
I just put together a workflow for AnimateLCM I2V here: https://civitai.com/models/774080
Hope it helps anyone who wants to use AnimateDiff I2V.