The goal of this LoRA is to reproduce a video style similar to live wallpapers. If you play League of Legends, think of the launcher opening videos, that's the goal, but you can also use it to create your lofi videos :D enjoy.
[Wan2.2 TI2V 5B - Motion Optimized Edition] Trained on 51 curated videos (24fps, 96 frames) for 5,000 steps across 100 epochs with rank 48. Optimized specifically for Wan2.2's unified TI2V 5B dense model and high-compression VAE.
My Workflow (it's not organized, but the important thing is that it works hahaha): 🎮 Live Wallpaper LoRA - Wan2.2 5B (Workflow) | Patreon
Loop Workflow: WAN 2.2 5b WhiteRabbit InterpLoop - v1.0 - Hardline | Wan Video Workflows | Civitai
Trigger word: l1v3w4llp4p3r
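If you would rather test it outside ComfyUI, here is a minimal diffusers-style sketch showing where the trigger word goes. This is not my workflow (use the links above for that), and the model id, LoRA file name, resolution, and frame count are assumptions based on typical Wan2.2 TI2V 5B settings, so adjust them to whatever you actually run.

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Assumed diffusers checkpoint id for the TI2V 5B model; swap in what you actually use.
model_id = "Wan-AI/Wan2.2-TI2V-5B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.load_lora_weights("live_wallpaper_wan22_5b.safetensors")  # placeholder LoRA file name
pipe.enable_model_cpu_offload()

# The trigger word leads the prompt, followed by the scene description.
prompt = "l1v3w4llp4p3r, cozy room at night, rain on the window, soft neon glow, gentle looping motion"
video = pipe(prompt=prompt, height=704, width=1280, num_frames=121, guidance_scale=5.0).frames[0]
export_to_video(video, "live_wallpaper.mp4", fps=24)  # the dataset was 24 fps
```

For seamless loops, the WhiteRabbit InterpLoop workflow linked above is still the easier route.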
[Wan2.2 I2V A14B - Full Timestep Edition]
Trained on 301 curated videos (256px, 16fps, 49 frames) for 24 hours using Diffusion Pipe with the Automagic optimizer at rank 64. It uses an extended timestep range (0-1) instead of the standard (0-0.875), which makes it compatible with both the Low and High models despite being trained only on the Low model.
Trigger word: l1v3w4llp4p3r
Works excellently with LightX2V v2 (rank 256) for faster inference.
[Wan I2V 720P Fast Fusion - 4 (or more) steps]
Wan I2V 720P Fast Fusion combines 2 Live Wallpaper LoRAs (1 exclusive) with the Lightx2v, AccVid, MoviiGen, and Pusa LoRAs for ultra-fast 4+ step generation while maintaining cinematic quality (see the sketch after this list).
🚀 Lightx2v LoRA – accelerates generation by 20x through 4-step distillation, enabling sub-2-minute videos on an RTX 4090 with only 8GB of VRAM.
🎬 AccVid LoRA – improves motion accuracy and dynamics for expressive sequences.
🌌 MoviiGen LoRA – adds cinematic depth and flow to animation, enhancing visual storytelling.
🧠 Pusa LoRA – provides fine-grained temporal control with zero-shot multi-task capabilities (start-end frames, video extension) while achieving 87.32% VBench score.
🧠 Wan I2V 720P (14B) base model – provides strong temporal consistency and high-resolution outputs for expressive video scenes.
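For diffusers users, the stacking pattern behind this edition looks roughly like the sketch below. The model id, LoRA file names, adapter weights, frame count, and fps are placeholders rather than the exact Fast Fusion recipe; the point is just the Live Wallpaper LoRA plus a step-distilled accelerator, very few steps, and CFG effectively off.

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Placeholder ids/paths; use your local checkpoints and LoRA files.
pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("live_wallpaper_i2v_720p.safetensors", adapter_name="livewallpaper")
pipe.load_lora_weights("lightx2v_distill.safetensors", adapter_name="lightx2v")
pipe.set_adapters(["livewallpaper", "lightx2v"], adapter_weights=[1.0, 1.0])
pipe.enable_model_cpu_offload()

image = load_image("first_frame.png")
video = pipe(
    image=image,
    prompt="l1v3w4llp4p3r, misty mountain lake at dawn, drifting fog, slow camera pan",
    height=720, width=1280, num_frames=81,
    num_inference_steps=4,  # the distilled LoRA is what makes 4 steps viable
    guidance_scale=1.0,     # CFG is usually turned off with step-distilled LoRAs
).frames[0]
export_to_video(video, "fast_fusion.mp4", fps=16)
```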
[Wan I2V 720P]
The dataset consists of 149 hand-selected videos at 1280x720, 96 frames each, but training was done at 244p and 480p with 64 frames and dim 64 (on an L40S).
A trigger word was used, so it needs to be included in the prompt: l1v3w4llp4p3r
[Hunyuan T2V]
The dataset consists of 529 hand-selected videos at 1280x720, 96 frames each, but training was done at 244p with 72 frames and dim 64 (on multiple RTX 4090s).
No captions or activation words were used; the only control you will need to adjust is the LoRA strength.
Another important note: it was trained on full blocks. I don't know how it will behave when mixing 2 or more LoRAs; if you are mixing and not getting good results, try disabling the single blocks.
I recommend a LoRA strength between 0.2 and 1.2 maximum, a resolution of 1280x720 (or generate at 512 and upscale later), and a minimum of 3 seconds (72 frames + 1).
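If you want to try those settings in diffusers rather than ComfyUI, a minimal sketch could look like the following. The LoRA file name is a placeholder, the strength of 0.8 is just an example inside the 0.2 to 1.2 range, and offloading plus VAE tiling are enabled because HunyuanVideo at 1280x720 is heavy.

```python
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)

pipe.load_lora_weights("live_wallpaper_hunyuan.safetensors", adapter_name="livewallpaper")  # placeholder path
pipe.set_adapters(["livewallpaper"], adapter_weights=[0.8])  # keep within the 0.2-1.2 range

pipe.vae.enable_tiling()
pipe.enable_model_cpu_offload()

video = pipe(
    prompt="a rainy neon city street at night, steam rising from a ramen stall, gentle camera drift",
    height=720, width=1280,
    num_frames=73,            # 72 frames + 1, about 3 seconds
    num_inference_steps=30,
).frames[0]
export_to_video(video, "live_wallpaper.mp4", fps=24)
```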
[LTXV I2V 13b 0.9.7 – Experimental v1]
The model was trained on 140 curated videos (512px, 24fps, 49 frames) for 250 epochs with dim 32 and the AdamW8bit optimizer.
It was trained using Diffusion Pipe with support for LTXV I2V v0.9.7 (13B).
Captions were used and generated with Qwen2.5-VL-7B via a structured prompt format.
This is an experimental first version, so expect some variability depending on seed and prompt detail.
Recommended:
Scheduler: sgm_uniform
Sampler: euler
Steps: 30
⚠️ Long prompts are highly recommended to avoid motion artifacts.
You can generate captions using the Ollama Describer or optionally use the official LTXV Prompt Enhancer.
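Outside of ComfyUI, a rough diffusers equivalent of the settings above would look something like this. Note that sgm_uniform and euler are ComfyUI sampler options; in diffusers the pipeline's default flow-matching Euler scheduler is the closest match. The base repo id and the LoRA file name are assumptions (the 13B 0.9.7 checkpoint may live under a different id), so treat this as a starting point rather than the exact setup.

```python
import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Repo id and LoRA file name are placeholders; point them at the 0.9.7 13B weights you use.
pipe = LTXImageToVideoPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("live_wallpaper_ltxv.safetensors", adapter_name="livewallpaper")
pipe.enable_model_cpu_offload()

image = load_image("first_frame.png")
# LTXV strongly prefers long, detailed captions (see the warning above).
prompt = (
    "A serene mountain lake at dawn seen from a fixed camera. Thin mist drifts slowly across the "
    "water surface, pine trees sway gently in a light breeze, and warm golden light gradually "
    "spreads across the peaks. Soft ripples move outward from the shore in a calm, looping rhythm."
)
video = pipe(
    image=image, prompt=prompt,
    width=704, height=480, num_frames=49,   # 49 frames matches the training clips
    num_inference_steps=30,                 # matches the recommended 30 steps
).frames[0]
export_to_video(video, "ltxv_live_wallpaper.mp4", fps=24)
```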
For more details, see the About this version tab.
------------------------------------------------------------------------------------------------------
For more details, see the version description.
Share your results.
Description
This LoRA was trained on a dataset of 149 videos, 64 frames each, using the Wan 720P I2V 14B model. Training was done with the diffusion-pipe trainer on a 48GB L40S GPU with LR: 5e-5, resolutions: [244, 480], rank: 64, optimizer: adamw8bit.
There were 3 days of training and testing various combinations: with captions, without captions, with a trigger word. In the end I had trained a caption-free version on the I2V 480P model for 50 epochs over 2 days, but I thought it could improve, so I took the epoch-50 LoRA and continued training from it on the I2V 720P model. A very laborious (testing) process, but I think the results are now satisfactory; there is a lot to improve, but it works.
Use the 720P model (1280x720 or 720x1280) for better quality.
Trigger Word: l1v3w4llp4p3r [your description]
Note: if the generated videos are too static, try adding "fast motion, fast movements... more motion" to the prompt, or decrease the strength of the LoRA. The higher the strength, the slower or more static the video tends to be.
Apparently this LoRA also works on the 480P model. I don't know if that's because I first trained a LoRA for 50 epochs on the 480P model and then used it as the basis for the 720P LoRA; do your own tests.
This LoRA is not perfect. If your results have a lot of artifacts, try reducing the LoRA strength to 0.6 or 0.8, or shortening the prompt. Don't use very long prompts; I usually keep prompts to a maximum of 200 characters. Do your own testing.
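To make those tips concrete, here is a tiny hypothetical prompt helper: trigger word first, then a short scene, then motion keywords, kept under roughly 200 characters. The names and values are just illustrations, not part of any workflow.

```python
# Hypothetical prompt builder; values are only examples of the tips above.
trigger = "l1v3w4llp4p3r"
scene = "rainy neon alley at night, lanterns swaying, puddle reflections"
motion = "fast motion, fast movements, more motion"  # add these if the video is too static

prompt = f"{trigger}, {scene}, {motion}"
assert len(prompt) <= 200, "keep prompts compact; very long prompts tend to produce artifacts"

# If artifacts still show up, drop the LoRA strength to about 0.6-0.8 in your workflow.
print(prompt)
```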
Post your results and get some buzz.