This is a LoRA that enhances walking posture for Wan 2.1/2.2, making generated walking videos smoother and more natural.
This LoRA has been updated for Wan 2.2. Three versions are being released this time, which greatly reduce the small-step problem of the Wan 2.1 version:
Lite: Trained with fewer steps. Avoids artifacts such as the character holding a phone or clothing wrinkles, but the walking posture isn't quite perfect.
Slow: Trained with more steps. Occasionally introduces a held phone, but the walking motion is more sensual. Videos generated at 16 fps will appear in slow motion.
Normal: Trained with more steps. Occasionally introduces a held phone, but the walking motion is more sensual. Videos generated at 16 fps play at normal speed.
i2v version updated to v3.0
Fixed the slow-motion issue when characters walk; walking now plays at a normal speed.
i2v version updated to v2.0
This version improves walking stability, avoiding strange walking movements, and adds some hand movements.
The i2v_v2.0 version also adds some hand movements, triggered by prompts such as:
adjusted her hair with hand
or
hands on hips
I tried using Wan 2.1 to generate videos of characters walking away, seen from behind, but was unsuccessful: there were always cases of people walking backwards or making strange movements. So I trained this LoRA, which improves the success rate of generating walking-from-behind videos.
Trigger word:
walking from behind
Increasing the number of sampling steps may also lead to improvement.
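For reference, a minimal sketch of applying this LoRA with the diffusers Wan 2.1 i2v pipeline (the model id, file path, and all parameter values are assumptions for illustration, not the author's actual workflow):

```python
# Minimal sketch, not the author's workflow: model id, LoRA file name,
# and parameter values are assumptions.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights(
    "P002-The-Walking-Back-i2v-v20-000010_converted.safetensors",
    adapter_name="walking_back",
)
pipe.to("cuda")

image = load_image("input.png")  # portrait 480x720 is reported to work best
video = pipe(
    image=image,
    prompt="a woman walking from behind, full body",  # trigger word included
    height=720,
    width=480,
    num_frames=81,
    num_inference_steps=30,  # more sampling steps may improve the walk
).frames[0]
export_to_video(video, "walking_back.mp4", fps=16)
```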
Comments
Noticed a weird behavior with i2v v2 that I didn't see in the first version: with a dark-skinned character, her skin was often treated by the AI as cloth instead of skin, as if it were a dark-colored pair of pants or a shirt.
Yes, the V2 version is a little overfitted, and all the characters in the training-set videos wear pants. You can try a lower weight.
LMAO, my bare ass got recognized as flesh-colored leggings.
Can you try adding prompts such as: naked, bare hips, no pants, to improve it?
Very good, I have to use this on a LoRA of my dad as soon as it's downloaded. Thanks OP!
???
works perfectly - thx very much!
What strength should a Wan LoRA have when I put it into my workflow? Should it be toned down to around 0.5, or left at full 1.0?
0.8~1.0
It depends on how many LoRAs you are using at once. One LoRA? 0.8-1.0. Multiple LoRAs? The more you use, the more you will have to tone down each LoRA's strength. Experimentation will be necessary. Some LoRAs are trained harder than others: some combine well with others at 1.0, but most on here are a bit overtrained and will need to be turned down.
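Continuing the hedged pipeline sketch from above, stacking LoRAs at reduced strength in diffusers might look like this (file names, adapter names, and weights are illustrative assumptions):

```python
# Illustrative sketch: stacking two LoRAs at reduced strength.
# File names, adapter names, and weights are assumptions.
pipe.load_lora_weights("walking_back.safetensors", adapter_name="walking_back")
pipe.load_lora_weights("my_character.safetensors", adapter_name="character")

# One LoRA: 0.8-1.0 is a common starting point. Several LoRAs: each
# weight is usually lowered so the combination stays stable.
pipe.set_adapters(["walking_back", "character"], adapter_weights=[0.8, 0.6])
```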
Could you make one for walking forward?
Walking forward can be generated well without a LoRA.
@ivever Unfortunately, without a LoRA my walking videos come out badly. I can't generate a normal walk either forward or backward; the results are always weird or twitchy. 4060 Ti 16 GB, 128 GB RAM, M.2 SSD, using 万能君's build. Could it be that using a workflow avoids this problem? It has frustrated me for a long time.
Walking forward has a fairly high success rate for me, around 60-70%. What is 万能君? An all-in-one package?
@crazybaby These LoRAs are mainly for breast jiggle, but they also improve the walking success rate.
@ivever 万能君 is a well-known creator on Bilibili. He knows programming and packages workflows, models, and so on into easy-to-use Windows programs; he has made many of these for free, with optional donations. I have solved the walking problem. I switched to a ComfyUI workflow, which also produces sharper output than 万能君's build. The causes of the broken walking were: 1. His build has a problem, which I have reported to him. 2. For i2v (480p), the best video size is portrait 480x720. With that size in the workflow, even without a LoRA, 8 out of 10 runs give a correct walking posture; the 480x832 I used before was terrible and very hard to get a good roll from. 3. When editing or creating the input image, pay attention to composition. AI composition is usually close to perfect, but some images have perspective errors that prevent Wan from working out the perspective relationship between the person and the ground, producing weird, twitchy videos.
@crazybaby I trained a walking-forward LoRA that is different from the ones you posted; those gaits all seem to be taken from fashion runway walks, while mine are all sourced from "哈哈小太阳": https://civitai.com/models/1500170/catwalk-wan-i2v-14b?modelVersionId=1697048
@ivever Saw it and downloaded it; I'll test it another day, thanks.
Hi, @ivever! Can you please share how you trained this? With what software? Did you use images or videos, and at what sizes and durations? Did you train for 720p i2v? How much VRAM did training require, and how much time?
I'm asking because I want LoRAs for animated character walking-cycle animations (front, right, back, front-right, back-right, etc.) to create sprite-sheet assets for a video game from one input image.
The training script was downloaded from GitHub: sdbds/musubi-tuner-scripts. I downloaded some walking videos from TikTok and cut about 20 clips from them, each 3 seconds long. Training takes about 4-6 hours, and it seems the 4090's 24 GB of VRAM is not enough.
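A minimal sketch of the kind of dataset prep described above, slicing source videos into 3-second clips with ffmpeg (paths, slice counts, and naming are assumptions, not the author's actual script):

```python
# Sketch: cut downloaded walking videos into 3-second training clips.
# Paths, slice counts, and naming are illustrative assumptions.
import subprocess
from pathlib import Path

SRC = Path("raw_videos")    # downloaded source videos
DST = Path("train_clips")   # output folder for training clips
DST.mkdir(exist_ok=True)

CLIP_SECONDS = 3
clips_made = 0
for video in sorted(SRC.glob("*.mp4")):
    # Take a few non-overlapping 3-second slices from each source video.
    for i in range(3):
        out = DST / f"{video.stem}_{i:02d}.mp4"
        subprocess.run(
            [
                "ffmpeg", "-y",
                "-ss", str(i * CLIP_SECONDS),  # seek to slice start
                "-i", str(video),
                "-t", str(CLIP_SECONDS),       # slice duration
                "-an",                         # drop audio
                str(out),
            ],
            check=True,
        )
        clips_made += 1
print(f"wrote {clips_made} clips to {DST}")
```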
Got it, thanks!
@morozig I also train LoRAs with video clips in 720p. On a Mac Studio with an M4 Max and 96 GB of unified RAM/VRAM, it takes around 40 GB of VRAM.
I am hoping to find someone here on Civitai who has experience training a motion-focused Wan LoRA using videos on Civitai's LoRA trainer. I have tried a couple of times but haven't had any luck, and I don't know if I have the wrong settings, not enough videos, etc. I have a few Wan character models based on still images and they work very well, but I can't seem to get the motion LoRAs to work, and I burn through Buzz in a hurry with experimentation.
Does anyone reading these comments have any experience or advice on this? Thanks!
@darkroast175696 I hope you've found a solution or workaround since I am in the same situation.
@xXRogerXx Sadly I haven't made any progress. I did try an experiment to prove that the videos are really being used in the LoRA training. I found a random TikTok dance video, cut it down to 512x512, and made 10 copies of it. I fed that into the trainer to see what would come out. After several repeats, all the outputs started to look like the TikTok video, so that was proof that the videos ARE being used. That means it's down to user error (me), and I haven't figured out how to fix that yet.
Is it my captions? Is it the number of repeats/steps/epochs? Other settings like learning rate and whatnot?
It's a little frustrating to experiment on Civitai, aside from the Buzz cost, because of two limitations here. First, you get a max of 20 epochs, so if you think you may need a lot of steps, the only way to get them is to increase repeats; but with the bouncy nature of the learning rate, you then can't get an epoch checkpoint for every repeat, and you might miss the one or two epochs that are actually good (see the worked example after this comment).
Second, there isn't any data provided about the training output. There is an outstanding developer request to add TensorBoard log output to the training results, which would help a LOT with testing, but I don't know if they're planning to do that at all, or how soon.
So for the time being I'm just doing stuff I can make from still images like characters and clothing and so on. If any of the motion creators out there feel like sharing some of their data set or training parameters, that would be really helpful.
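To make that steps/repeats/epochs tradeoff concrete, here is a small worked calculation (dataset size and repeat count are illustrative assumptions; the step formula is the usual one for sd-scripts-style trainers at batch size 1):

```python
# Worked example of the steps/repeats/epochs tradeoff under a 20-epoch cap.
# Dataset size and repeats are illustrative; assumes batch size 1.
num_videos = 20      # training clips
repeats = 10         # dataset repeats per epoch
max_epochs = 20      # the trainer's hard cap

steps_per_epoch = num_videos * repeats       # 200
total_steps = steps_per_epoch * max_epochs   # 4000

# More total steps require more repeats, but checkpoints come once per
# epoch, so raising repeats also spaces the checkpoints further apart.
print(f"{steps_per_epoch} steps per epoch, {total_steps} steps total")
print(f"checkpoint spacing: every {steps_per_epoch} steps")
```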
Can anyone tell me where I can find the missing custom node called PurgeVRAM? Please and thanks.
@ivever thank you
Does this work for men walking, or will they walk like a woman?
They will just walk like a woman.
@ivever then my dude is about to become a lady
Actually, set at 0.5 strength, it helped a lot. I had a room full of people walking away, and none would move with prompting alone. Using your LoRA they now walk away as requested; everyone is walking away instead of no one. I think setting it to a low strength enables the prompt to act on it.
For men I use between 0.35 and 0.5.
Beyond that, the walk is too... languid.
Without this LoRA it is very difficult to make the characters move.
Details
Files
P002-The-Walking-Back-i2v-v20-000010_converted.safetensors
Mirrors
P002-The-Walking-Back-i2v-v20-000010_converted.safetensors
B64_UDAwMi1UaGUtV2Fsa2luZy1CYWNrLWkydi12MjAtMDAwMDEwX2NvbnZlcnRlZA.safetensors
107_P002-The-Walking-Back.safetensors
16_P002-The-Walking-Back.safetensors
129_P002-The-Walking-Back.safetensors