CivArchive
    Motion Model Experiments - v0.3anime
    NSFW

    EDIT: 9/30/23: The only real limit is consistency over time, but we think that's solvable; with enough ControlNet refinement, pretty much anything is possible. You just use ControlNets to guide toward what you want, and loosen them as you go. I definitely recommend using this despite it being trained on some NSFW content. As you can see above, it's very easy to negative-prompt NSFW to avoid any of it; I'm just trying to be transparent about its origins. I'll still leave it marked as intended for mature themes.

    EXPANDING ON CONTROLNETS: you need them for the motion you see in my top examples. The real value here is that it's easy to maintain consistency in the world around the characters, unlike base mm14 and even mm15v2.

    The models I've attempted to train have led me to some results people seem interested in, though I'd still call this an unsolved science. There's a noticeable change in how they interact with general motion, and also in the style of the trained data. They aren't perfect, at least I don't think so; they're still trained on incredibly small datasets.

    OH, I SHOULD INCLUDE THIS: it doesn't have to be anime. It's a motion module, so you choose whatever checkpoint you want; I just happen to use it with this model and LoRA. It's trained on actual porn, so it's not just limited to this character.

    OH, AND DOUBLE ALSO: the motion is really good because I'm using animatediff-cli-prompt-travel with ControlNets for assistance. What's notable about this model is the stability of the image over a longer duration than the normal motion model v14. If you put in the work, I'm not sure what its limit is or what it isn't capable of doing.
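For reference, here is a rough sketch of what a config for animatediff-cli-prompt-travel can look like. The field names are based on that project's sample configs and may differ between versions, and every path, prompt, and value below is a placeholder; check the examples shipped in the repo for the real schema. Note how `control_guidance_end` below 1.0 is one way to "loosen" a ControlNet partway through sampling, as described above.

```json
{
  "name": "anime-motion-test",
  "path": "models/sd/yourAnimeCheckpoint.safetensors",
  "motion_module": "models/motion-module/motionModel_v03Anime.ckpt",
  "seed": [42],
  "steps": 25,
  "guidance_scale": 8,
  "prompt_map": {
    "0": "1girl, walking, city street, night",
    "32": "1girl, running, city street, rain"
  },
  "n_prompt": ["nsfw, worst quality, lowres"],
  "controlnet_map": {
    "input_image_dir": "controlnet_image/anime-motion-test",
    "controlnet_openpose": {
      "enable": true,
      "controlnet_conditioning_scale": 1.0,
      "control_guidance_start": 0.0,
      "control_guidance_end": 0.6
    }
  }
}
```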

    Maybe I'll make a tutorial tomorrow to share. Support isn't required but appreciated :3

    Description

    This model is trained on a single cut-up anime episode; it's another proof of concept to see whether finetuning the motion module can improve the motion outputs.


    These are still just experimental and will be somewhat chaotic in nature.

    I can't bring myself to monetize this while I still see room for improvement, but consider maybe a Ko-fi to help my efforts :3

    FAQ

    Comments (36)

    FruityMelon · Sep 3, 2023 · 2 reactions
    CivitAI

    Can you make a motion focus on melons?

    CubeyAI
    Author
    Sep 3, 2023

    The idea is that the models are flexible in some ways. It's all in how you prompt it, and with rerolling you might get a good seed.

    sbirl · Sep 3, 2023

    @CubeyAI It always comes back to rerolling.

    CubeyAI
    Author
    Sep 3, 2023

    @sbirl better than living in a deterministic universe I suppose :p

    digitalghosts · Sep 3, 2023 · 2 reactions
    CivitAI

    This is very exciting! Is there some kinda trick to get A1111 to see the model? I unchecked the 'check hash' box in settings.

    UncleJert · Sep 3, 2023 · 1 reaction

    I could not get it to work at first. Keep 'check hash' unchecked, then rename these models to match the names of the default motion modules; in other words, motionModel_v01Nsfw renamed to mm_sd_v15. That appears to have worked. Be careful not to lose track of which is which.
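The rename workaround above can be scripted. The paths here are assumptions for a typical A1111 install with the sd-webui-animatediff extension (adjust them to your setup), and the checkpoint filename is a stand-in; copying rather than moving keeps the original name around so you don't lose track of which module is which.

```shell
# Stand-in for the downloaded motion module (placeholder filename for this demo).
touch motionModel_v03Anime.ckpt

# Typical model folder for the A1111 AnimateDiff extension (path is an assumption).
EXT_MODELS="stable-diffusion-webui/extensions/sd-webui-animatediff/model"
mkdir -p "$EXT_MODELS"

# Copy it under a stock motion-module name so the extension lists it;
# keeping the original file avoids confusing it with the real mm_sd_v15 later.
cp motionModel_v03Anime.ckpt "$EXT_MODELS/mm_sd_v15.ckpt"
ls "$EXT_MODELS"
```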

    CubeyAI
    Author
    Sep 3, 2023· 1 reaction

    @UncleJert yeah, if you use them with the A1111 extension you will probably need to rename them; if you use them with regular animatediff-cli you can just change the model it uses in the JSON config.

    digitalghosts · Sep 3, 2023

    @UncleJert thank you - I appreciate your workaround 💪

    Dazrock · Sep 5, 2023

    @CubeyAI May i ask what the extension is?

    borhanrahi123 · Sep 3, 2023 · 13 reactions
    CivitAI

    Bloody Hell! you definitely need to share a small tuto

    olivetty · Sep 13, 2023 · 1 reaction

    Just copy to the AnimateDiff models folder and you are good to go! As for using AnimateDiff, there are hundreds of them on youtube and google. Also, if you need to know something specific, just ask here - we are here to help! :D <3

    TitoRedDiaries · Sep 6, 2023 · 16 reactions
    CivitAI

    Waiting for the tutorial

    Agent_Smth · Oct 1, 2023
    CivitAI

    tutorial please

    hypnofan · Oct 6, 2023
    CivitAI

    did you ever make that tutorial? if so can we see it? and or have a link?

    kellne · Nov 14, 2023
    CivitAI

    Can't make it work on the website

    CubeyAI
    Author
    Nov 15, 2023· 1 reaction

    I didn't even know that you could use it on the website, that's news to me!

    kellne · Nov 15, 2023

    @CubeyAI it doesnt work. Always fails generation 😭

    CubeyAI
    Author
    Nov 15, 2023

    @kellne does the website allow you to run animatediff?

    kellne · Nov 15, 2023

    @CubeyAI sadly no. Would love to create Animations 💔

    CubeyAI
    Author
    Nov 15, 2023

    @kellne just get a 4090 :D

    kellne · Nov 16, 2023

    @CubeyAI PS5 is enough for me 💸

    Suluman45 · Nov 18, 2023

    @CubeyAI Huh

    FusionDraw9527 · Dec 21, 2023
    CivitAI

    Thank you for sharing. I would like to ask something about this motion model: what is the maximum number of frames it can support and handle?

    CubeyAI
    Author
    Dec 21, 2023

    With context sliding, you can do as many frames as you have the patience for. The issue, as you can see, is still consistency, which hampers generations and is still being worked on.

    Dark_Messiah · Dec 24, 2023
    CivitAI

    Hey, can you expand a bit more on using controlnet? did you only use poses, or get entire animations? and if so, where?

    362400629783 · Dec 25, 2023
    CivitAI

    Excuse me, is this model used with AnimateDiff?

    GK0 · Dec 27, 2023 · 13 reactions
    CivitAI

    Place it in one of these folders, depending on which AnimateDiff node pack you use:
    D:\Comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff\models
    D:\Comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\models

    morty_est_morty · Dec 27, 2023
    CivitAI

    May I ask how large your training set was for finetuning? Just that one video?

    CubeyAI
    Author
    Dec 27, 2023

    For which model? The anime one was 100 clips from a single anime episode; the other model was, I think, about 4-8 videos.

    morty_est_morty · Dec 27, 2023

    @CubeyAI I saw your notes on the missionary one that the sample size was smaller, thanks! How long on avg are the clips?

    CubeyAI
    Author
    Dec 28, 2023

    @morty_est_morty 16 frames I think, or 24.

    savareb · Jan 20, 2024 · 7 reactions
    CivitAI

    This is a good workflow that includes this ckpt if people are interested. It is very recent.
    https://www.reddit.com/r/comfyui/comments/198zf1y/creating_better_animation_with_automasking/

    blenderkrita888 · May 21, 2024 · 1 reaction
    CivitAI

    I love it, you are very good. A genius!

    half_real · May 24, 2024 · 4 reactions
    CivitAI

    Please tag this as "motion", "animatediff", and "video" so that people can find it (not "base model", that is for SD checkpoints, not motion modules). I missed this months ago when testing motion modules for anime, and your anime motion module looks quite promising in my initial tests.

    black_jack_5223 · Jul 12, 2024
    CivitAI

    would you be open to sharing your code for training the motion model? I want to train one as well, but can't seem to find the right code for it. All I can find is for motion loras.

    brandiipvtltd304 · Jul 29, 2024 · 2 reactions
    CivitAI

    Hey, can you make a motion module with v3 and anime? It would be better and more consistent.

    Checkpoint
    SD 1.5

    Details

    Downloads
    2,206
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/3/2023
    Updated
    5/11/2026
    Deleted
    -

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.