
    The goal of this LoRA is to reproduce a video style similar to a live wallpaper. If you play League of Legends, think of the launcher opening videos; that's the goal. But you can also use it to create your lofi videos :D Enjoy.

    [Wan2.2 TI2V 5B - Motion Optimized Edition] Trained on 51 curated videos (24fps, 96 frames) for 5,000 steps across 100 epochs with rank 48. Optimized specifically for Wan2.2's unified TI2V 5B dense model and high-compression VAE.

    My workflow (it's not organized, but the important thing is that it works hahaha): 🎮 Live Wallpaper LoRA - Wan2.2 5B (Workflow) | Patreon


    Loop Workflow: WAN 2.2 5b WhiteRabbit InterpLoop - v1.0 - Hardline | Wan Video Workflows | Civitai

    Trigger word: l1v3w4llp4p3r


    [Wan2.2 I2V A14B - Full Timestep Edition]

    Trained on 301 curated videos (256px, 16fps, 49 frames) for 24 hours using Diffusion Pipe with Automagic optimizer, rank 64. Uses extended timestep range (0-1) instead of standard (0-0.875), enabling compatibility with both Low and High models despite training only on Low model.
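    The timestep-range difference can be sketched conceptually (this is an illustration only, not Diffusion Pipe's actual configuration keys): the standard recipe samples training timesteps in [0, 0.875], while this LoRA samples the full [0, 1] range, so the weights also see the high-noise timesteps the High model normally handles.

```python
import random

# Conceptual sketch of timestep sampling during LoRA training.
# Standard Wan2.2 Low-model training truncates sampling to [0, 0.875];
# the extended recipe samples the full [0, 1] range instead.

STANDARD_MAX_T = 0.875  # default upper bound (Low-model range)
FULL_MAX_T = 1.0        # extended upper bound used for this LoRA

def sample_timestep(max_t: float = FULL_MAX_T) -> float:
    """Uniformly sample a training timestep in [0, max_t]."""
    return random.uniform(0.0, max_t)

# Draw some samples from each regime to compare their ranges.
full_range = [sample_timestep(FULL_MAX_T) for _ in range(1000)]
low_range = [sample_timestep(STANDARD_MAX_T) for _ in range(1000)]
```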

    Trigger word: l1v3w4llp4p3r

    Works excellently with LightX2V v2 (256 rank) for faster inference

    [Wan I2V 720P Fast Fusion - 4 (or more) steps]

    Wan I2V 720P Fast Fusion combines two Live Wallpaper LoRAs (one exclusive) with the Lightx2v, AccVid, MoviiGen, and Pusa LoRAs for ultra-fast generation in 4+ steps while maintaining cinematic quality.

    🚀 Lightx2v LoRA – accelerates generation by 20x through 4-step distillation, enabling sub-2-minute videos on an RTX 4090 while requiring only 8GB of VRAM.
    🎬 AccVid LoRA – improves motion accuracy and dynamics for expressive sequences.
    🌌 MoviiGen LoRA – adds cinematic depth and flow to animation, enhancing visual storytelling.
    🧠 Pusa LoRA – provides fine-grained temporal control with zero-shot multi-task capabilities (start-end frames, video extension) while achieving an 87.32% VBench score.
    🧠 Wan I2V 720p (14B) base model – provides strong temporal consistency and high-resolution outputs for expressive video scenes.

    [Wan I2V 720P]

    The dataset consists of 149 hand-selected videos at 1280x720 with 96 frames, but training was done at 244p and 480p with 64 frames and dim 64 (on an L40S).

    A trigger word was used during training, so it must be included in the prompt: l1v3w4llp4p3r

    [Hunyuan T2V]

    The dataset consists of 529 hand-selected videos at 1280x720 with 96 frames, but training was done at 244p with 72 frames and dim 64 (on multiple RTX 4090s).

    No captions or activation words were used; the only control you will need to adjust is the LoRA strength.

    Another important note: this LoRA was trained on full blocks. I don't know how it will behave when mixing two or more LoRAs; if you want to mix and are not getting good results, try disabling the single blocks.

    I recommend a LoRA strength between 0.2 and 1.2 at most, a resolution of 1280x720 (or generate at 512 and upscale later), and a minimum length of 3 seconds (72 frames + 1).
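    The "72 frames + 1" arithmetic can be sketched as follows (a minimal illustration; `frame_count` is a hypothetical helper, assuming the 24fps the dataset was captured at):

```python
# Minimal sketch of the "72 frames + 1" arithmetic: request fps * seconds
# frames, plus one extra frame, as video diffusion models commonly expect
# frame counts of the form N + 1.

def frame_count(seconds: float, fps: int = 24) -> int:
    """Frames to request for a clip of `seconds` at `fps`, plus one."""
    return int(seconds * fps) + 1

# The recommended 3-second minimum at 24 fps gives 72 + 1 = 73 frames.
```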


    [LTXV I2V 13b 0.9.7 – Experimental v1]

    The model was trained on 140 curated videos (512px, 24fps, 49 frames) for 250 epochs, with dim 32 and the AdamW8bit optimizer.
    It was trained using Diffusion Pipe with support for LTXV I2V v0.9.7 (13B).
    Captions were used, generated with Qwen2.5-VL-7B via a structured prompt format.

    This is an experimental first version, so expect some variability depending on seed and prompt detail.

    Recommended:

    Scheduler: sgm_uniform

    Sampler: euler

    Steps: 30

    You can generate captions using the Ollama Describer or optionally use the official LTXV Prompt Enhancer.

    For more details, see the About this version tab.

    Share your results.

    Description

    🧪 Overview

    This is an experimental LoRA trained for the LTXV I2V (Image-to-Video) model, version 0.9.7, with 13B parameters. It transforms static images into fluid, seamless animated loops, with natural motion applied only to flexible or dynamic elements — like hair, clothing, particles, and ambient light — while preserving rigid structure stability (e.g., armor, weapons, mechanical parts).

    This is the first version of this LoRA, and results may vary depending on the prompt quality and seed. Better versions may be released in the future as training techniques are refined.

    My Workflow

    ⚙️ Training Details

    • 🧠 Base Model: LTXV I2V v0.9.7 (13B parameters)

    • 🎞️ Video Dataset: 140 short clips

    • ⏱️ Frame Rate: 24 fps

    • 🧮 Frames per Video: 49

    • 🖼️ Resolution: 512px

    • 🔁 Epochs: 250

    • 🧮 Total Training Steps: ~35,000

    • 📉 Learning Rate: 1e-4

    • 📦 Batch Size: 1

    • 📐 LoRA Dimension: 32

    • ⚙️ Optimizer: AdamW8bit

    • 🛠️ Trainer Used: Diffusion Pipe (by tdrussell)

    • 🚫 Official trainer not used: LTX-Video-Trainer (by Lightricks)

    • Layer Coverage:

      • When trained using Diffusion Pipe, all layers were updated during LoRA training.

      • In contrast, the official trainer from Lightricks (LTX-Video-Trainer) by default only updates attention layers (e.g., to_k, to_q, to_v, to_out.0), making it possible to use a higher dim (e.g., 128) while still keeping the file size low (~700MB).

    • Initial Loss: High — LTXV I2V is known to require many steps before reaching stability

    ⚠️ The I2V 13B model begins with a very high initial loss, and convergence is slow — requiring many steps to stabilize below 0.1. Training this architecture is not plug-and-play and takes persistence.
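    The layer-coverage difference described above can be illustrated with a small name-filter sketch (the module names below are hypothetical examples for illustration, not taken from the actual model):

```python
# Sketch of the default layer coverage of LTX-Video-Trainer: only attention
# projections (to_k, to_q, to_v, to_out.0) receive LoRA adapters, whereas
# Diffusion Pipe applies the LoRA to all layers. Module names are made up
# for illustration.

ATTENTION_TARGETS = ("to_k", "to_q", "to_v", "to_out.0")

def is_attention_target(module_name: str) -> bool:
    """True if a module name matches one of the attention-only targets."""
    return any(module_name.endswith(t) for t in ATTENTION_TARGETS)

all_modules = [
    "blocks.0.attn.to_q",
    "blocks.0.attn.to_k",
    "blocks.0.attn.to_v",
    "blocks.0.attn.to_out.0",
    "blocks.0.ff.net.0",  # feed-forward layer: skipped by attention-only training
]
attention_only = [m for m in all_modules if is_attention_target(m)]
```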

    ⚠️ Prompting Recommendations

    This LoRA is very sensitive to prompt quality and seed variation.

    Using short or unclear prompts often causes:

    • Rigid elements like weapons or chairs to appear soft or rubbery

    • Unintended motion of static parts (e.g., armor bending, background flickering)

    These artifacts are not due to the LoRA itself but rather to a lack of motion guidance in the prompt or an unsuitable seed.

    ✅ To get the best results:

    • Use long, detailed prompts that clearly separate moving vs non-moving parts

    • Try changing the seed if you're seeing unwanted distortion

    You can generate prompts automatically using my custom ComfyUI node:
    🔧 Ollama Describer
    This node uses a vision-capable LLM to generate motion-aware captions. In my case, I used Qwen2.5-VL-7B to generate all motion prompts during training and testing.

    💡 Alternatively, the LTXV Prompt Enhancer from Lightricks' custom node set may also be used for prompt conditioning.

    🧠 Recommended Prompt Template

    Use this with any vision-enabled LLM like Qwen-VL, Gemini, or GPT-4o:

    You are an expert in motion design for seamless animated loops.
    
    Given a single image as input, generate a richly detailed description of how it could be turned into a smooth, seamless animation.
    
    Your response must include:
    
    ✅ What elements **should move**:
    – Hair (e.g., swaying, fluttering)
    – Eyes (e.g., blinking, subtle gaze shifts)
    – Clothing or fabric elements (e.g., ribbons, loose parts reacting to wind or motion)
    – Ambient particles (e.g., dust, sparks, petals)
    – Light effects (e.g., holograms, glows, energy fields)
    – Floating objects (e.g., drones, magical orbs) if they are clearly not rigid or fixed
    – Background **ambient** motion (e.g., fog, drifting light, slow parallax)
    
    🚫 And **explicitly specify what should remain static**:
    – Rigid structures (e.g., chairs, weapons, metallic armor)
    – Body parts not involved in subtle motion (e.g., torso, limbs unless there’s idle shifting)
    – Background elements that do not visually suggest movement
    
    ⚠️ Guidelines:
    – The animation must be **fluid, consistent, and seamless**, suitable for a loop  
    – Do NOT include sudden movements, teleportation, scene transitions, or pose changes  
    – Do NOT invent objects or effects not present in the image  
    – Do NOT describe static features like colors, names, or environment themes  
    – The output must begin with the trigger word: **lvwpr**  
    – Return only the description (no lists, no markdown, no instructions)
    

    🧪 Experimental Status

    This is the first public version of this LoRA for LTXV I2V.
    If I discover new training techniques, better captioning strategies, or improvements in convergence, future versions will be released with higher quality and better performance.

    🙌 Feedback Welcome

    If you create something interesting with this LoRA, feel free to share what you’ve made.
    I’ll be checking community uploads — and if I find your results particularly impressive, I’ll help give them a boost of Civitai buzz 😉


    Comments (22)

    _yumidreams · May 31, 2025

    You seriously need to think about creating a Patreon.

    NRDX (Author) · May 31, 2025

    I'll research it and what the benefits would be, but thanks a lot for the suggestion.

    RedRascal · Jun 1, 2025

    Are you using CivitAi for generations or locally installed software? Or perhaps other online models? What are you using to generate these videos?

    NRDX (Author) · Jun 2, 2025

    Locally, with ComfyUI.

    RedRascal · Jun 2, 2025

    @Alissonerdx Ah cool, what's your workflow?

    RedRascal · Jun 2, 2025

    @Alissonerdx Thank you very much!

    loneillustrator · Jun 3, 2025

    Hello, I tried to use this LoRA. How did you loop? Mine doesn't look like a live wallpaper.

    NRDX (Author) · Jun 3, 2025

    It depends on how you are using this LoRA. I don't loop the videos that I post here; they are purely results that came out of the workflow. To loop, you would need a workflow where you provide the First Frame and the Last Frame, and the model generates the interpolation between these two frames. Not all models support this.

    loneillustrator · Jun 4, 2025

    @Alissonerdx How can I confirm whether the LoRA is working or not? I'm stuck because of that.

    Foxbite · Jun 4, 2025

    Tried the LTX LoRA; it seems to work very well! Great job.

    loneillustrator · Jun 4, 2025

    How? Can you guide me?

    Foxbite · Jun 4, 2025

    @loneillustrator sometimes ltx loras can be tricky. Are you using a prompt enhancer? If so, maybe disable it for the lora. Sometimes a long prompt can drown out the lora trigger. Try using the trigger with a higher weight, or repeat it a few times. I did "lvwpr, lvwpr lvwpr lvwpr, live wallpaper". Make sure your CRF is high enough. I used 30.

    NRDX (Author) · Jun 4, 2025

    @Foxbite Are you using Guider Advanced? When using that node you need to input multiple CFGs based on the sigma list; how did you manage to use only one CFG value?

    Foxbite · Jun 4, 2025

    @Alissonerdx I was talking about CRF, not CFG. The video compression value

    NRDX (Author) · Jun 4, 2025

    @Foxbite ah sorry hehehehe

    loneillustrator · Jun 5, 2025

    @Foxbite thanks king

    citywalker1127821 · Jun 15, 2025

    The effect is great, but it's missing one very important subtle movement: blinking. Could you add that effect as well?

    NRDX (Author) · Jun 15, 2025

    You can try using the checkpoint that I merged with the other live wallpaper LoRAs; I tested it, and with the checkpoint the blinking effect always works. Live Wallpaper Fast Fusion - I2V 14B 720P | Wan Video 14B i2v 720p Checkpoint | Civitai

    artishtic · Jun 26, 2025

    Thank you, sir. Very good work. I use Florence with it, and the model does a very good job.

    ultimaniac · Jul 15, 2025

    Doesn't seem to work with VACE models, at least for me. I get a bunch of LORA KEY NOT LOADED errors when running it.

    NRDX (Author) · Jul 15, 2025

    This model was not trained for VACE; it was trained for the standard Wan. VACE probably has a different architecture, and the model would have to be retrained on it if the architecture is not equivalent to the base Wan.