Originally shared on GitHub by guoyww
Learn how to run this model to create animated images on GitHub.
Choose the motion module version that matches the Stable Diffusion version your base model was trained on.
Comments (45)
Is there any way to force it to use your initial image as the first frame in img2img mode? I'm trying to animate existing images, and the only option is to let it redraw the first frame.
Use ControlNet in txt2img.
If I use more than 75 tokens in the positive prompt, it generates a different composition for every 16 frames.
How can I fix this?
I'm running into this problem even with less than 75 tokens
@lavasplit753 You may be using styles whose tokens are not shown in the token total.
@YukiLaneige Like LoRAs?
@lavasplit753 I'm not sure about this; I haven't tested LoRAs.
@YukiLaneige Yeah, for me it seems to be LoRAs and textual inversions that inflate the token count.
@lavasplit753 On the GitHub page, it tells A1111 users to go to Settings > Optimizations and check the box that says "pad prompt & neg to be the same length". This helped me get it working.
stop using so many words lol
@PromptAddict "realistic ass, small ass, rounded shape ass, beautiful ass, detailed ass, masterpiece ass, closeup ass, focus ass, vivid ass, funny ass, aggressive ass, etc" Is that so much to ask?)
Okay, I just learned to use "negative prompt" adequately, so I always have exactly 75 tokens here and in "positive prompt".
RuntimeError: CUDA error: invalid configuration argument CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
How can I fix this error? Sometimes it works, sometimes I get this error, and sometimes the render is rainbow pixelated noise -_-
GTX 1080 with 8 GB. All other extensions, the UI, etc. work perfectly... only this one doesn't work (already tried downloading it directly from GitHub).
Are you using hires fix? That will trigger it.
You're hitting a VRAM limit from the upscale ratio + # frames to generate.
@UnknownUsers This happens even if I use --lowvram, push to CPU, etc. Something is preventing it from working. I'm able to run the same concept with mov2mov and img2img with a ControlNet batch... no problem... but no matter what I do with this extension, it either locks up or throws a runtime error.
If you use AnimateDiff as an extension of A1111, change the setting to "Optimize attention layers with sdp" instead of "Optimize attention layers with xformers" in the A1111 Settings tab.
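For context, "sdp" here refers to PyTorch's built-in scaled dot-product attention (available in torch >= 2.0), which the webui can use in place of the xformers kernel that some cards choke on. A minimal, illustrative sketch of that call (the tensor shapes are made up, not AnimateDiff internals):

```python
import torch
import torch.nn.functional as F

# Toy tensors shaped (batch, heads, tokens, head_dim); real values come from the UNet.
device = "cuda" if torch.cuda.is_available() else "cpu"
q = torch.randn(1, 8, 77, 64, device=device)
k = torch.randn(1, 8, 77, 64, device=device)
v = torch.randn(1, 8, 77, 64, device=device)

# PyTorch picks a memory-efficient kernel automatically; this is the code path
# the "Optimize attention layers with sdp" setting switches to instead of xformers.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 77, 64])
```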
I am having this same issue. Anyone figure out how to fix it?
Can anyone share a ComfyUI workflow with QR Code Monster and ControlNet?
I am getting the animation running forwards, and then in reverse - how do I stop it playing in reverse?
Every time I need to generate an animation, I have to restart the webui. It can only generate one animation per restart; otherwise it reports an error. Does anyone know how to fix this?
Update Automatic1111, including pip modules and extensions, disable any unneeded extensions, run at 256x256, etc.
@terbo Yes, I've noticed there's no RAM left after generating at a larger size, so it can't generate again. Closing webui-user.bat releases the memory; it seems AnimateDiff can't release memory completely.
@terbo that doesn't solve a memory leak
@PromptAddict I have a 12 GB card; when I need to generate a 512x768 animation, that method works for me: close and then reopen. But I still hope AnimateDiff can free memory by itself.
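If you want to try reclaiming memory without a full restart, here is a minimal sketch using standard Python/PyTorch calls (not an AnimateDiff API); it only clears cached allocations, so any tensors the extension still holds will keep their VRAM, which is why a restart may still be needed:

```python
import gc
import torch

def free_cached_vram():
    # Drop unreferenced Python objects first, then ask PyTorch to release
    # its cached (but unused) CUDA memory back to the driver.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()

free_cached_vram()
```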
How are you guys getting these errors? I'm in auto1111 and never encountered them. Never had to restart the UI. And I'm on a 3060 with 12 GB VRAM.
@jasonrat504529 What resolution of animation do you generate?
@efastcurex The standard 512x768. If I go over that, it just gives me static noise. You should try ComfyUI if auto1111 is giving you errors, as it can generate larger than 512x768. I even did 1024x1024 in ComfyUI.
I think it might be extensions that conflict with animatediff in auto1111 because I uninstalled a lot of them before I installed animatediff.
I found that you can only use ADetailer for one run before you have to restart, so make sure you don't have it enabled here; use it when you upscale afterward.
@nexus5teo273 Yes, that's the reason, but the problem is that regeneration time is much longer than restart time.
@jasonrat504529 Is ComfyUI hard to learn?
Wow, I can't even get a single animation to work. Freshly updated. It's just a grid of different images and nothing else.
@easyfruit5000 same
@easyfruit5000 You're probably using an SDXL checkpoint... that's the result.
@easyfruit5000 Try setting the context batch size to the default value of 16 or higher, or use a more stable motion module: https://huggingface.co/manshoety/AD_Stabilized_Motion/tree/main
This works with Auto1111 using --disable-safe-unpickle on the command line; Continue-Revolution mentioned this in the extension update. Make sure your batch size matches the fps, and that the fps divides evenly into the number of frames you're rendering. If you use prompt travel, your final prompt frame should be a value less than the number of frames you're rendering. Use the new feature Nvidia provided in the driver update: CUDA "prefer system fallback" in the control panel. Lastly, make sure CUDA and cuDNN are installed correctly. This will also use a lot of system RAM, so keep that in mind... Hope this helps!
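To make the frame/fps/prompt-travel constraints above concrete, here is a tiny sanity-check sketch; all the numbers are example values, not anything the extension enforces in exactly this form:

```python
# Example settings to illustrate the advice above (values are hypothetical).
frames = 32             # total frames to render
fps = 8                 # playback fps of the saved animation
batch_size = 8          # context batch size, matched to fps as suggested
last_prompt_frame = 24  # highest keyframe index used in prompt travel

assert frames % fps == 0, "fps should divide evenly into the frame count"
assert batch_size == fps, "keep the batch size equal to the fps"
assert last_prompt_frame < frames, "last prompt-travel keyframe must be below the frame count"

print(f"{frames} frames at {fps} fps -> {frames // fps} second clip")
```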
Hello, can this model be used with SDXL?
I just wrote a tutorial on AnimateDiff to record my experience.
Can anyone guide me on how to install this? I mean this file. Everything written on GitHub went over my head.
Inside stable-diffusion-webui you should have AnimateDiff installed; inside that folder there is a "model" folder. Copy this model into that folder.
@yakadaya Isn't there a way to install this via safetensors, like copy-pasting a link? I use SageMaker Studio Lab for AUTOMATIC1111.
Copy and paste the model into your model folder inside of ComfyUI/StableDiffusion.
You should install ComfyUI; then you can simply drag and drop an image that contains the workflow directly into Comfy, then go to the node manager and install missing nodes. As someone who recently swapped to ComfyUI after using Automatic for the last 18 months, I'd say don't bother with Automatic anymore.
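If you prefer to script the install instead of copying files by hand, here is a rough sketch; it assumes the motion module is mirrored on Hugging Face under guoyww/animatediff and that the A1111 extension lives at its default path, so adjust both to your own setup (for ComfyUI the destination folder differs):

```python
import pathlib
import shutil

from huggingface_hub import hf_hub_download

# Assumed destination for the A1111 extension's motion modules; change for your install.
dest = pathlib.Path("stable-diffusion-webui/extensions/sd-webui-animatediff/model")
dest.mkdir(parents=True, exist_ok=True)

# Assumed repo/filename; verify against the Files list on this page before downloading.
ckpt = hf_hub_download(repo_id="guoyww/animatediff", filename="mm_sd_v15_v2.ckpt")
shutil.copy(ckpt, dest / "mm_sd_v15_v2.ckpt")
print("Copied motion module to", dest)
```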
Details
Files
animatediffMotion_v15V2.ckpt
Mirrors
mm_sd_v15_v2.ckpt
animatediffMotion_v15V2.ckpt
Anime_motionmodel.ckpt
pytorch_model.bin