
This is a LoRA that enhances walking posture for Wan 2.1/2.2, making generated walking videos smoother and more natural.

This LoRA has been updated to Wan 2.2. Three versions will be released this time, greatly improving on the small-steps problem of the Wan 2.1 version:

Lite: trained with fewer steps; avoids artifacts like holding a phone or clothing wrinkles, but the walking posture isn't quite perfect.

Slow: trained with more steps; occasionally produces phone-holding artifacts, but the walking motion is more sensual. Videos generated at 16 fps will appear in slow motion.

Normal: trained with more steps; occasionally produces phone-holding artifacts, but the walking motion is more sensual. Videos generated at 16 fps play at normal speed.


    i2v version updated to v3.0

Fixed the slow-motion issue when characters walk; walking now plays at normal speed.

    i2v version updated to v2.0

This version improves walking stability, avoiding strange walking movements, and adds some hand movements.

The i2v_v2.0 version also adds some hand movements, which can be prompted with, for example:

     adjusted her hair with hand

    Or

    hands on hips

I tried using Wan 2.1 to generate videos of characters walking away from the camera (seen from behind), but was unsuccessful: there were always instances of people walking backwards or making strange movements. So I trained this LoRA, which improves the success rate of generating walking-from-behind videos.

    Trigger word:

    walking from behind

Increasing the number of sampling steps may also lead to improvement.


    Comments (34)

cheesetriangles · Mar 19, 2025

Noticed a weird behavior with i2v v2 that I didn't see in the first version: with a dark-skinned character, her skin was often treated as cloth by the AI instead of skin, as if it were a dark-colored pair of pants or a shirt.

FX_FeiHou (Author) · Mar 19, 2025

Yes, that's because the V2 version is a little overfitted and the subjects in the training-set videos all wear pants. You can try a lower weight.

ArtistYee · Mar 20, 2025 · 1 reaction

    LMAO, my bare ass got recognized as flesh-colored leggings.

FX_FeiHou (Author) · Mar 20, 2025

Can you try adding prompts such as "naked", "bare hips", "no pants" to see if that improves it?

thefoodmage · Mar 19, 2025 · 1 reaction

Very good, I have to use this on a LoRA of my dad as soon as it's downloaded. Thanks OP!

FX_FeiHou (Author) · Mar 20, 2025 · 3 reactions

    ???

Maddoc412 · Mar 20, 2025

Works perfectly, thanks very much!

Eshinio · Mar 25, 2025

What strength should a Wan LoRA have when I put it into my workflow? Should it be toned down to around 0.5, or left at full 1.0?

FX_FeiHou (Author) · Mar 27, 2025

    0.8~1.0

playtime_ai · Jun 2, 2025

It depends on how many LoRAs you are using at once. One LoRA? 0.8-1.0. Multiple LoRAs? The more you use, the more you will have to tone down each LoRA's strength. Experimentation will be necessary. Some LoRAs are trained harder than others; some combine well with others at 1.0, but most on here are a bit overtrained and will need to be turned down.
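The "tone each one down as you stack more" advice above can be sketched as a tiny helper. This is a purely illustrative heuristic, not a ComfyUI rule or anything shipped with the LoRA; the 1/sqrt(n) falloff is an assumption you should tune by eye.

```python
def suggested_strength(base: float, n_loras: int) -> float:
    """Toy heuristic: start from a single-LoRA strength (e.g. 0.8-1.0)
    and scale it down as more LoRAs are stacked. The 1/sqrt(n) falloff
    is an assumption, not a ComfyUI rule; adjust to taste."""
    if n_loras < 1:
        raise ValueError("need at least one LoRA")
    return round(base / n_loras ** 0.5, 2)

# One LoRA keeps its full strength; three stacked LoRAs get toned down.
print(suggested_strength(1.0, 1))  # 1.0
print(suggested_strength(0.9, 3))  # 0.52
```

Whatever the starting value, the commenter's point stands: the final numbers come from experimentation, not a formula.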

934290311246 · Mar 27, 2025

Could you make one for walking toward the camera?

FX_FeiHou (Author) · Mar 27, 2025 · 1 reaction

Walking forward already generates well without a LoRA.

crazybaby · Apr 9, 2025

@ivever Unfortunately, without a LoRA my generated walking videos are terrible. Neither walking forward nor walking backward produces a normal walking video; they are either weird or twitchy. My setup: 4060 Ti 16 GB, 128 GB RAM, M.2 SSD, using 万能君's build. Could using a workflow avoid this problem? I've been frustrated about this for a long time.

FX_FeiHou (Author) · Apr 12, 2025

The success rate for walking forward is fairly high, about 60-70%. What is 万能君? An all-in-one package?

FX_FeiHou (Author) · Apr 12, 2025

@crazybaby The main purpose of these LoRAs is breast jiggle, but they also improve the walking success rate.

crazybaby · Apr 19, 2025

@ivever 万能君 is a well-known creator on Bilibili. He knows programming and packages workflows, models, and so on into easy-to-use Windows programs; he has made many such programs, all free, with voluntary donations. I have solved the walking problem by switching to a ComfyUI workflow, which is also higher-resolution than 万能君's build. The causes of the incorrect walking were: 1) his build has a bug, which I have reported to him; 2) for i2v (480p), the best size is portrait 480x720. At that size, in the workflow, even without a LoRA, 8 runs out of 10 produce a correct walking posture; I previously used 480x832, which was terrible and needed many rerolls; 3) when editing or composing the input image, pay attention to composition. Although the AI's automatic composition is basically fine, perspective errors in some images keep Wan from judging the perspective relationship between the person and the ground, and the generated video comes out weird, twitchy, and so on.

FX_FeiHou (Author) · Apr 23, 2025

@crazybaby I trained a walking-forward LoRA. It's different from the ones you linked: those gaits all look sourced from fashion runway shows, while mine come entirely from "哈哈小太阳": https://civitai.com/models/1500170/catwalk-wan-i2v-14b?modelVersionId=1697048

crazybaby · Apr 26, 2025

@ivever I've seen it and downloaded it; I'll test it one of these days. Thanks!

2592641 · Apr 2, 2025 · 1 reaction

Hi, @ivever! Can you please share how you trained this? With what software? Did you use images or videos, and at what sizes and durations? Did you train for 720p i2v? How much VRAM was required for training, and how long did it take?
I'm asking because I want LoRAs for animated character walk-cycle animations (front, right, back, front-right, back-right, etc.) to create sprite-sheet assets for a video game from one input image.

FX_FeiHou (Author) · Apr 4, 2025 · 2 reactions

The training script was downloaded from GitHub (sdbds/musubi-tuner-scripts). I downloaded some walking videos from TikTok and took about 20 clips from them, each 3 seconds long. Training takes about 4-6 hours, and it seems the 4090's 24 GB of VRAM is not quite enough.
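The clip-preparation step described above (roughly 20 clips of 3 seconds each) could be scripted. This is a hypothetical helper, not part of musubi-tuner; it only builds ffmpeg command lines, and the file names are made up.

```python
def clip_commands(src: str, n_clips: int, clip_len: int = 3) -> list[list[str]]:
    """Build ffmpeg commands that cut `src` into consecutive clips of
    `clip_len` seconds each (re-encoding, so cuts land on exact times)."""
    cmds = []
    for i in range(n_clips):
        start = i * clip_len
        cmds.append([
            "ffmpeg", "-ss", str(start), "-t", str(clip_len),
            "-i", src, f"clip_{i:02d}.mp4",
        ])
    return cmds

# 20 clips of 3 seconds, matching the training set described above;
# each command would then be run with subprocess.run(cmd, check=True).
cmds = clip_commands("walk.mp4", 20)
print(len(cmds), cmds[0])
```

In practice you would hand-pick the 20 best segments rather than slice one video evenly, but the ffmpeg invocation is the same.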

2592641 · Apr 4, 2025

    Got it, thanks!

Kate_Wett770 · Apr 27, 2025

@morozig I also train LoRAs with video clips in 720p. On a Mac Studio with an M4 Max and 96 GB of unified RAM/VRAM, it takes around 40 GB of VRAM.

darkroast175696 · Jun 2, 2025

I am hoping to find someone here on Civitai who has experience training a motion-focused Wan LoRA using videos in Civitai's LoRA trainer. I have tried a couple of times but haven't had any luck, and I don't know if I have the wrong settings, not enough videos, etc. I have a few Wan character models based on still images and they work very well, but I can't seem to get the motion LoRAs to work, and I burn through Buzz in a hurry with experimentation.
Does anyone reading these comments have experience or advice on this? Thanks!

xXRogerXx · Jun 13, 2025

    @darkroast175696 I hope you've found a solution or workaround since I am in the same situation.

darkroast175696 · Jun 13, 2025 · 1 reaction

@xXRogerXx Sadly I haven't made any progress. I did run an experiment to prove that the videos are really being used in the LoRA generation: I found a random TikTok dance video, cut it down to 512x512, made 10 copies of it, and fed that into the generator to see what would come out. After several repeats, all the output videos started to look like the TikTok video, so that was proof that the videos ARE being used. That means it's down to user error (me), and I haven't figured out how to fix that yet.
Is it my captions? The number of repeats/steps/epochs? Other settings like learning rate?
It's a little frustrating to experiment on Civitai, aside from the Buzz cost, because of two limitations here. First, you get a max of 20 epochs. So if you think you may need a lot of steps, the only way to get them is to increase repeats, which, given the bouncy nature of the learning process, means you can't get a checkpoint for every repeat and you might miss the one or two epochs that are actually good.
Second, there isn't any data provided about the output. There is an outstanding developer request to add TensorBoard log output to the training results, which would help a LOT with testing, but I don't know if they're planning to do that at all, or how soon.
So for the time being I'm just doing things I can make from still images, like characters and clothing and so on. If any of the motion creators out there feel like sharing some of their datasets or training parameters, that would be really helpful.
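The repeats/epochs trade-off described above comes down to simple arithmetic. The batch size of 1 and the clip counts below are assumptions for illustration, not Civitai's actual trainer defaults.

```python
def total_steps(n_clips: int, repeats: int, epochs: int, batch: int = 1) -> int:
    """Optimizer steps for a typical LoRA trainer: each epoch sees
    every clip `repeats` times, processed in batches of `batch`."""
    return n_clips * repeats * epochs // batch

# With a hard cap of 20 epochs, the only way to add steps is more
# repeats, which spreads the (at most 20) saved checkpoints further
# apart in step count, so a good intermediate state is easier to miss.
print(total_steps(20, 5, 20))   # 2000
print(total_steps(20, 10, 20))  # 4000
```

Doubling repeats doubles the total steps but still yields at most 20 checkpoints, which is the limitation the comment describes.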

RemyM · Apr 7, 2025

Can anyone tell me where I can find the missing custom node called PurgeVRAM? Please and thanks.

RemyM · Apr 14, 2025

    @ivever  thank you

mdkb · Apr 28, 2025 · 9 reactions

Does this work for men walking, or will they walk like a woman?

FX_FeiHou (Author) · Apr 28, 2025

    Just walking like a woman

mdkb · Apr 29, 2025 · 3 reactions

    @ivever then my dude is about to become a lady

mdkb · Apr 29, 2025

Actually, set at 0.5 strength it helped a lot. I had a room full of people walking away, and none would move with prompting. Using your LoRA they now walk away as requested; everyone is walking away instead of no one. I think setting it at low strength lets the prompt act on it.

jemoby · May 16, 2025 · 1 reaction

For men I use between 0.35 and 0.5.
Beyond that, the walk is too... languid.
Without this LoRA it is very difficult to make the characters move.