Edit 9-12-2025: Added the LoRA as separate I2V High and Low downloads for people using Runpod, the online generator, or similar.
Baked a slop LoRA as a test a week ago on my 3090 while waiting for Bouncing Boobs to be updated to WAN 2.2 I2V. Trained sloppily on 7 of my old AI generated videos that were made with the WAN 2.1 Bounce LoRA, using PGC's modified version of Kvento's Musubi Tuner GUI with a minor fix for WAN 2.2 I2V training: the --i2v toggle now only applies to latent cache creation instead of training. Trained for 20 epochs, but the uploaded epoch is number 5.
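For anyone curious, the fix is conceptually tiny. Below is just a sketch of the idea, not the actual modified musubi_tuner_gui.py (that's in the training data download); the function names are made up for illustration, and the script/flag names are from memory of Musubi Tuner, so double-check them against your copy:

```python
# Sketch only: the real fix lives in the modified musubi_tuner_gui.py from
# the training data download. The point: --i2v is a latent caching flag,
# so the GUI should route the toggle to the caching command, not training.

def build_cache_command(dataset_config: str, vae: str, i2v: bool) -> list[str]:
    # Latent cache creation: this is where the --i2v toggle belongs.
    cmd = ["python", "wan_cache_latents.py",
           "--dataset_config", dataset_config,
           "--vae", vae]
    if i2v:
        cmd.append("--i2v")  # cache the image-conditioning latents for I2V
    return cmd

def build_train_command(dataset_config: str, task: str) -> list[str]:
    # Training: whether the run is I2V is decided by --task, so the --i2v
    # toggle is deliberately not applied here (that redirection was the fix).
    return ["python", "wan_train_network.py",
            "--dataset_config", dataset_config,
            "--task", task]

print(build_cache_command("dataset.toml", "wan_vae.safetensors", i2v=True))
print(build_train_command("dataset.toml", "i2v-A14B"))  # task name may differ; check the docs
```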
Videos were lazily hand-tagged, like:
A white haired woman wearing a micro bikini bounces causing her breasts to bounce, shake, jiggle and sway.
So prompting "breasts bounce" or "jiggle" will generally work. Not prompting anything specific also sort of works. Prompting something like "her breasts bounce, shake, jiggle and sway" will work.
The training data download includes the modified musubi_tuner_gui.py for PGC's GUI with the fixed toggle for properly caching latents for I2V, as well as the .json and .toml files that contain the dataset settings and training settings. It also has the workflow images for the model page's videos, although the videos themselves should have the ComfyUI workflow embedded for once.
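If you just want a feel for the dataset .toml without grabbing the download, a Musubi Tuner video dataset config is roughly shaped like the sketch below. The paths and numbers are placeholders, not my actual settings, and the keys are from memory of the Musubi Tuner dataset docs, so verify them against the files in the download:

```toml
# Rough sketch of a Musubi Tuner video dataset config, not my real file.
[general]
resolution = [960, 544]     # bucket resolution, placeholder values
caption_extension = ".txt"  # one caption .txt next to each video
batch_size = 1
enable_bucket = true

[[datasets]]
video_directory = "/path/to/videos"  # the training clips go here
cache_directory = "/path/to/cache"   # where the I2V latents get cached
target_frames = [1, 25, 45]          # frame counts sampled per clip
frame_extraction = "head"
num_repeats = 1
```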