EDIT 9/30/23: The only real limit is consistency over time, but we think that is solvable; with enough ControlNet refinement, pretty much anything is possible. You just use ControlNets to guide toward what you want and loosen them as you go. I definitely recommend using this model despite it being trained on some NSFW content; as you can see above, it's very easy to put nsfw in the negative prompt to avoid any of it. I'm just trying to be transparent about its origins. I'll still leave it marked as intended for mature themes.
EXPANDING ON CONTROLNETS: you need them for the motion you see in my top examples. The value here is that it's easy to maintain consistency in the world around the characters, unlike base mm14 and even mm15v2.
A couple of the models I have trained have produced results people seem to be interested in, though I'd still call this an unsolved science. There is a noticeable change both in how they handle general motion and in the style of the trained data. They aren't perfect, at least I don't think so; they are still trained on incredibly small datasets.
OH, I SHOULD INCLUDE THIS: it doesn't have to be anime. This is a motion module, so you choose whatever base model you want; I just happen to use it with this model and LoRA. It's trained on actual porn, so it's not limited to this character.
OH, AND DOUBLE ALSO: the motion is really good because I am using animatediff-cli-prompt-travel with ControlNets for assistance. What's notable about this model is that the image stays stable for longer durations than with motion module v14. If you put in the work, I'm not sure what its limits are or what it can't do.
Maybe I'll make a tutorial tomorrow to share. Support isn't required but appreciated :3
Description
I'M CHANGING THE NAME FOR A REASON:
Basically, this model is just really good in general. I like using it most of the time, and it shows a better ability to animate overall. What I'm trying to say is that while it was trained on NSFW material, the motion module doesn't have much power over the normal Stable Diffusion tokens, so if you simply put nsfw in the negatives and use ControlNets, you're highly unlikely to get NSFW out of it. It's pretty safe to use, and with a little patience you can chisel out anything you want.
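The negative-prompt tip above can be sketched as a tiny helper that makes sure the safety tokens are always present before handing the prompt to whatever AnimateDiff frontend you use. The token list here is my own example, not part of the model or any official filter:

```python
def with_safety_negatives(negative_prompt: str) -> str:
    """Append nsfw-related tokens to a negative prompt if missing.

    The `required` list is an illustrative assumption, not exhaustive.
    """
    required = ["nsfw", "nudity"]
    # Split on commas, dropping empty fragments and stray whitespace.
    tokens = [t.strip() for t in negative_prompt.split(",") if t.strip()]
    for tok in required:
        if tok not in tokens:
            tokens.append(tok)
    return ", ".join(tokens)
```

For example, `with_safety_negatives("blurry")` would yield `"blurry, nsfw, nudity"`, which you can then paste into the negative-prompt field as-is.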
Consider sponsoring it, I'd like to devote all of my time to developing this tech and I could really use some support :D
As stated before, try them out and tell me what you think. This one was trained on IRL NSFW videos.
This one is not like the anime model; I'm just putting all the most intriguing models into one post so it's not spam.
FAQ
Comments (36)
Can you make a motion focus on melons?
The idea is that the models are flexible in some ways. It's all in how you prompt it, and with rerolling you might land a good seed.
@CubeyAI It always comes back to rerolling.
@sbirl better than living in a deterministic universe I suppose :p
This is very exciting! Is there some kinda trick to get A1111 to see the model? I unchecked the 'check hash' box in settings.
I could not get it to work at first. Keep 'check hash' unchecked, then rename these models to match the names of the default motion modules; in other words, rename motionModel_v01Nsfw to mm_sd_v15. That appears to have worked. Be careful not to lose track of which is which.
@UncleJert yeah, if you use them with the A1111 extension you will probably need to rename them; if you use them with regular animatediff you can just change the model it uses in the JSON config.
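To illustrate the JSON-config route mentioned above, here is a rough sketch of swapping the motion module in an animatediff-cli prompt config. The `"motion_module"` key name and the file paths are assumptions on my part; check the sample configs that ship with your install:

```python
import json
from pathlib import Path

def set_motion_module(config_path: str, module_name: str) -> dict:
    """Load a prompt config, point it at a different motion module, save it back.

    Assumes the config is a JSON object with a top-level "motion_module" key
    (verify against your version's sample configs).
    """
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["motion_module"] = module_name  # assumed key name
    path.write_text(json.dumps(config, indent=2))
    return config
```

This avoids the rename-to-mm_sd_v15 workaround entirely, since the CLI just loads whatever file the config names.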
@UncleJert thank you - I appreciate your workaround 💪
@CubeyAI May i ask what the extension is?
Bloody hell! You definitely need to share a small tutorial.
Just copy it to the AnimateDiff models folder and you are good to go! As for using AnimateDiff, there are hundreds of tutorials on YouTube and Google. Also, if you need to know something specific, just ask here; we are here to help! :D <3
Waiting for the tutorial
tutorial please
did you ever make that tutorial? if so can we see it? and or have a link?
Can't make it work on the website.
I didn't even know that you could use it on the website, that's news to me!
@CubeyAI it doesn't work. Generation always fails 😭
@kellne does the website allow you to run animatediff?
@CubeyAI sadly no. Would love to create Animations 💔
@kellne just get a 4090 :D
@CubeyAI PS5 is enough for me 💸
@CubeyAI Huh
Thank you for sharing. I would like to ask something about this motion model: what is the maximum number of frames it can support and handle?
With context sliding, you can do as many as you have the patience for. The issue, as you can see, is still consistency, which hampers generations and is still being worked on.
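For anyone curious what context sliding means in practice, here is an illustrative sketch: the motion model only sees a fixed-size window of frames at a time, and long clips are covered by overlapping windows whose results are blended in the overlap. The window and overlap sizes below are assumptions for illustration, not the extension's exact defaults:

```python
def sliding_windows(total_frames: int, context: int = 16, overlap: int = 4):
    """Return (start, end) frame ranges covering total_frames with overlap.

    Each window is exactly `context` frames; consecutive windows share
    `overlap` frames so the frontend can blend them for smoothness.
    """
    stride = context - overlap
    windows = []
    start = 0
    while start + context < total_frames:
        windows.append((start, start + context))
        start += stride
    # Final window is pinned to the end so no frames are left uncovered.
    windows.append((max(total_frames - context, 0), total_frames))
    return windows
```

So a 64-frame clip with a 16-frame context and 4-frame overlap would be generated as windows (0, 16), (12, 28), (24, 40), (36, 52), (48, 64); the frame count is limited by patience, not by the module's native window.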
Hey, can you expand a bit more on how you used ControlNet? Did you only use poses, or entire animations? And if so, from where?
Excuse me, is this model used with 'animatediff'?
Place it here:
D:\Comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\models
or, if you use AnimateDiff-Evolved:
D:\Comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\models
May I ask how large your training set was for finetuning? Just that one video?
For which model? The anime one was 100 clips from a single anime episode; the other model was, I think, about 4-8 videos.
@CubeyAI I saw your notes on the missionary one that the sample size was smaller, thanks! How long on avg are the clips?
@morty_est_morty 16 frames I think, or 24.
Here is a good, very recent workflow that includes this ckpt, if people are interested.
https://www.reddit.com/r/comfyui/comments/198zf1y/creating_better_animation_with_automasking/
I love it. You are very good, a genius.
Please tag this as "motion", "animatediff", and "video" so that people can find it (not "base model", that is for SD checkpoints, not motion modules). I missed this months ago when testing motion modules for anime, and your anime motion module looks quite promising in my initial tests.
Would you be open to sharing your code for training the motion model? I want to train one as well but can't seem to find the right code for it; all I can find is for motion LoRAs.
Hey, can you make a motion module with v3 and anime? It would be better and more consistent.