This is an experimental image-to-image LoRA for Wan. I have also included a workflow, because I probably spent 12+ hours just finding the parameters that make it usable. The expression changer works well, and the up, down, and forward directions work fine, but unfortunately the left and right directions are not very consistent, so you might as well treat them as random. At least generation is very fast, since we only generate 5 frames.
The workflow and prompt structure are included in the ZIP file.
If you're feeling extra brave, you may try changing the parameters and see if you can make it work better.
These are the actual training prompts:
"The {man|woman} is standing in front of a green screen, {he|she} looks {forward|up|down|to the right_30|to the right_60|to the left_30|to the left_60} with a {barely|regular|very} {angry|contempt|disgusted|fear|happy|neutral|sad|surprised} emotion"
If you go this route, take into account that the left and right directions were inverted during training.
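For reference, the full set of training prompts can be expanded from the template above. This is a minimal sketch; the attribute lists are taken verbatim from the training prompt, and the helper names (`build_prompt`, etc.) are my own, not part of the released workflow:

```python
from itertools import product

# Attribute values copied from the training-prompt template.
subjects = ["man", "woman"]
directions = ["forward", "up", "down", "to the right_30", "to the right_60",
              "to the left_30", "to the left_60"]
intensities = ["barely", "regular", "very"]
emotions = ["angry", "contempt", "disgusted", "fear", "happy",
            "neutral", "sad", "surprised"]

def build_prompt(subject, direction, intensity, emotion):
    # The pronoun follows the subject, matching the {man|woman}/{he|she} pairing.
    pronoun = "he" if subject == "man" else "she"
    return (f"The {subject} is standing in front of a green screen, "
            f"{pronoun} looks {direction} with a {intensity} {emotion} emotion")

prompts = [build_prompt(*combo)
           for combo in product(subjects, directions, intensities, emotions)]
print(len(prompts))  # 2 * 7 * 3 * 8 = 336
```

Remember that if you retrain, the `to the left_*` and `to the right_*` labels were swapped relative to the actual head direction.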
Comments (11)
Great work, author. Hope you keep this series updated. With this LoRA there's no longer any need to rely on Google Gemini and 4o.
you're too nice haha, I am no match for that
works like a charm, thanks
thanks for the tip and comment :)
thanks for the tip again!! can you / wanna share any of your generations? :D
@Juampab12 I just shared a video. Your LoRA can make animals laugh, which is hard to do without a LoRA.
@huanggou Thank you! Unfortunately I can't see the generated video anywhere, not even on your profile.
This seems pretty cool. I'm not sure how to get it working with native nodes, though. I'd appreciate it if anyone could help me out, or if you could provide a more native workflow.
Thank you. It should be identical to an image-to-video generation, but with num_frames set to 5 and this LoRA applied. The only other thing I do is extract the last frame from the generated video and set the prompt and parameters as in my workflow.
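That loop can be sketched in a few lines. This is a minimal sketch, not the actual workflow: `generate_video` is a placeholder standing in for whatever Wan image-to-video pipeline you use (ComfyUI native nodes, GGUF, etc.), stubbed here so the sketch runs:

```python
import numpy as np

def generate_video(image, prompt, num_frames=5):
    """Placeholder for the real Wan image-to-video call.

    The actual pipeline would return `num_frames` generated frames;
    here we just repeat the input image so the sketch is runnable.
    """
    return np.stack([image] * num_frames)

def edit_image(image, prompt):
    """One image 'edit': generate a 5-frame video, keep the last frame."""
    frames = generate_video(image, prompt, num_frames=5)
    return frames[-1]

# Chain edits, each starting from the previous result.
image = np.zeros((480, 640, 3), dtype=np.uint8)
for prompt in ["...with a very happy emotion", "...looks up..."]:
    image = edit_image(image, prompt)
print(image.shape)  # (480, 640, 3)
```

The only workflow-specific parts are num_frames=5 and extracting the final frame; everything else is a standard image-to-video setup with the LoRA applied.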
Thanks for the response. I think I tried that and was getting random results, but I'll give it another shot; maybe I missed something, or maybe my resolution was too low. Thanks for clarifying, I'll let you know how it goes when I get a chance.
Welp, I'm an idiot, I forgot to switch my model from text2vid to img2vid. Works well, as you said, with native/gguf setup.