
    The goal of this LoRA is to reproduce a video style similar to live wallpapers. For those who play League of Legends: remember the launcher opening videos? That's the goal. But you can also use it to create your lofi videos :D enjoy.

    [Wan2.2 TI2V 5B - Motion Optimized Edition] Trained on 51 curated videos (24fps, 96 frames) for 5,000 steps across 100 epochs with rank 48. Optimized specifically for Wan2.2's unified TI2V 5B dense model and high-compression VAE.
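    For reference, the stated setup can be summarized as follows (a sketch only; the keys are hypothetical and do not come from a real trainer config schema):

```python
# Illustrative summary of the training run described above.
ti2v_5b_run = {
    "base_model": "Wan2.2 TI2V 5B (dense, high-compression VAE)",
    "dataset": {"num_videos": 51, "fps": 24, "frames_per_clip": 96},
    "train_steps": 5_000,
    "epochs": 100,
    "lora_rank": 48,
}
```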

    My Workflow (it's not organized, the important thing is that it works hahaha): 🎮 Live Wallpaper LoRA - Wan2.2 5B (Workflow) | Patreon


    Loop Workflow: WAN 2.2 5b WhiteRabbit InterpLoop - v1.0 - Hardline | Wan Video Workflows | Civitai

    Trigger word: l1v3w4llp4p3r
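    For anyone not using the ComfyUI workflows above, a minimal inference sketch with Hugging Face diffusers might look like this; the repo id, LoRA filename, and prompt are assumptions for illustration, not the author's actual setup:

```python
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

# Assumed repo id for a diffusers port of the Wan2.2 TI2V 5B model.
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-TI2V-5B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("live_wallpaper_5b.safetensors")  # hypothetical filename

# The trigger word must be present in the prompt.
prompt = ("l1v3w4llp4p3r, a castle overlooking a misty valley, "
          "drifting clouds, subtle parallax, gentle looping motion")
frames = pipe(prompt=prompt, num_frames=96).frames[0]
export_to_video(frames, "wallpaper.mp4", fps=24)
```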


    [Wan2.2 I2V A14B - Full Timestep Edition]

    Trained on 301 curated videos (256px, 16fps, 49 frames) for 24 hours using Diffusion Pipe with the Automagic optimizer, rank 64. It uses an extended timestep range (0-1) instead of the standard (0-0.875), enabling compatibility with both the Low and High models despite being trained only on the Low model.
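    A minimal sketch of what the extended range means in practice, assuming a uniform timestep sampler (the actual Diffusion Pipe logic may differ):

```python
import torch

def sample_timesteps(batch_size: int, t_max: float) -> torch.Tensor:
    # Uniformly sample normalized training timesteps in [0, t_max].
    return torch.rand(batch_size) * t_max

t_standard = sample_timesteps(4, t_max=0.875)  # standard Low-model range
t_extended = sample_timesteps(4, t_max=1.0)    # full range used here: also covers
                                               # the high-noise region the High
                                               # model operates in
```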

    Trigger word: l1v3w4llp4p3r

    Works excellently with LightX2V v2 (rank 256) for faster inference.

    [Wan I2V 720P Fast Fusion - 4 (or more) steps]

    Wan I2V 720P Fast Fusion combines 2 Live Wallpaper LoRAs (1 exclusive) with the Lightx2v, AccVid, MoviiGen, and Pusa LoRAs for ultra-fast 4+ step generation while maintaining cinematic quality (a hedged stacking sketch follows the list below).

    🚀 Lightx2v LoRA – accelerates generation by 20x through 4-step distillation, enabling sub-2-minute videos on an RTX 4090 with only an 8GB VRAM requirement.
    🎬 AccVid LoRA – improves motion accuracy and dynamics for expressive sequences.
    🌌 MoviiGen LoRA – adds cinematic depth and flow to animation, enhancing visual storytelling.
    🧠 Pusa LoRA – provides fine-grained temporal control with zero-shot multi-task capabilities (start-end frames, video extension) while achieving 87.32% VBench score.
    🧠 Wan I2V 720p (14B) base model – provides strong temporal consistency and high-resolution outputs for expressive video scenes.
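    A sketch of stacking several LoRAs on the base pipeline via diffusers' PEFT integration; the repo id, filenames, and weights below are illustrative assumptions, not the exact fusion recipe baked into this release:

```python
import torch
from diffusers import WanImageToVideoPipeline

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers",  # assumed repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

# Hypothetical local filenames for the fused LoRAs.
loras = {
    "livewallpaper": "live_wallpaper_720p.safetensors",
    "lightx2v": "lightx2v_4step_distill.safetensors",
    "accvid": "accvid.safetensors",
    "moviigen": "moviigen.safetensors",
    "pusa": "pusa.safetensors",
}
for name, path in loras.items():
    pipe.load_lora_weights(path, adapter_name=name)

# Blend the adapters; the distilled Lightx2v LoRA is what enables
# the very low (4+) step counts.
pipe.set_adapters(list(loras), adapter_weights=[1.0, 1.0, 0.6, 0.5, 0.5])
```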

    [Wan I2V 720P]

    The dataset consists of 149 hand-selected videos at 1280x720, 96 frames each, but training was done at 244p and 480p with 64 frames and dim 64 (on L40S GPUs).

    A trigger word was used, so it needs to be included in the prompt: l1v3w4llp4p3r

    [Hunyuan T2V]

    The dataset consists of 529 hand-selected videos at 1280x720, 96 frames each, but training was done at 244p with 72 frames and dim 64 (on multiple RTX 4090s).

    No captions or activation words were used; the only control you will need to adjust is the LoRA strength.

    Another important note: it was trained on full blocks. I don't know how it will behave when mixing 2 or more LoRAs; if you mix and are not getting a good result, try disabling the single blocks.

    I recommend a LoRA strength between 0.2 and 1.2 maximum, a resolution of 1280x720 (or generate at 512 and upscale later), and a minimum length of 3 seconds (72 frames + 1).
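    The frame arithmetic behind that recommendation, assuming Hunyuan's usual 24 fps and the common 4k+1 frame-count constraint:

```python
fps = 24
seconds = 3
frames = seconds * fps + 1        # 73, i.e. "72 frames + 1"
assert (frames - 1) % 4 == 0      # satisfies the common 4k+1 constraint
print(f"{frames} frames = {seconds}s at {fps} fps")
```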


    [LTXV I2V 13b 0.9.7 – Experimental v1]

    The model was trained on 140 curated videos (512px, 24fps, 49 frames), using 250 epochs, 32 dim, and AdamW8bit.
    It was trained using Diffusion Pipe with support for LTXV I2V v0.9.7 (13B).
    Captions were used and generated with Qwen2.5-VL-7B via a structured prompt format.

    This is an experimental first version, so expect some variability depending on seed and prompt detail.

    Recommended (see the settings sketch after this list):

    Scheduler: sgm_uniform

    Sampler: euler

    Steps: 30
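    Expressed as the corresponding ComfyUI KSampler fields (a sketch; the cfg and denoise values are assumptions, not part of the recommendation):

```python
ksampler_settings = {
    "sampler_name": "euler",
    "scheduler": "sgm_uniform",
    "steps": 30,
    "cfg": 3.0,      # assumed; tune to taste
    "denoise": 1.0,  # assumed full denoise for a fresh generation
}
```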

    You can generate captions using the Ollama Describer or optionally use the official LTXV Prompt Enhancer.
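    A sketch of caption generation through the ollama Python client; the model tag and prompt format are assumptions rather than the exact Qwen2.5-VL setup used for training:

```python
import ollama  # pip install ollama

response = ollama.chat(
    model="qwen2.5vl:7b",  # assumed tag for a local Qwen2.5-VL 7B build
    messages=[{
        "role": "user",
        "content": ("Describe this frame as a structured video caption: "
                    "subject, setting, motion, camera movement."),
        "images": ["frame_0001.png"],  # hypothetical extracted frame
    }],
)
print(response["message"]["content"])
```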

    For more details, see the About this version tab.
    ------------------------------------------------------------------------------------------------------

    For more details, see the version description.

    Share your results.


    Comments (71)

    futureflix · Mar 13, 2025 · 1 reaction

    One of the best video LoRAs I have seen!

    Supernormal_Stimulus · Mar 13, 2025 · 2 reactions

    Looks incredible!

    I would love it if there were an LTXV version, as it's possible to set images at any frame with it. That means this could be made loopable by setting the initial image as both the start and end frame.

    NRDX (Author) · Mar 14, 2025 · 3 reactions

    I can try to train a version for LTX.

    gamil2876727783788 · Mar 17, 2025

    The base model of LTXV is much worse than Wan; it may be enough for a wallpaper LoRA. But if you want it to play in a loop, it's better to simply rewind.

    gatherscasinos · Mar 14, 2025 · 1 reaction

    Thanks for this model, are you able to share your workflow?

    NRDX (Author) · Mar 14, 2025 · 2 reactions

    gatherscasinos · Mar 14, 2025

    @alissonerdx Thanks!

    gatherscasinos · Mar 15, 2025

    @alissonerdx Did you ever get Sage Attention to work with your RTX 5090? If so, was it on Linux or Windows?

    NRDX (Author) · Mar 15, 2025

    @gatherscasinos Sage Attention is working on the 5090, but as of the last time I checked it was not yet optimized. I use WSL.

    LovelaceA · Mar 16, 2025

    @alissonerdx Great workflow. For the upscale part, the "upscale by" value actually multiplies the upscale model's native factor, rather than upscaling the video by that amount. For example, using an "upscale by" of 2 with a 4x upscaler will result in a final resolution of 8x, rather than 2x.
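    In other words, under LovelaceA's reading the two factors compound rather than override each other:

```python
model_native_factor = 4          # e.g. a 4x upscale model
upscale_by = 2                   # the workflow's "upscale by" value
effective = model_native_factor * upscale_by
print(effective)                 # 8 -> the output ends up 8x, not 2x
```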

    NRDX (Author) · Mar 16, 2025 · 1 reaction

    @LovelaceA I didn't create this workflow. Apart from that, the upscale part of the workflow I shared is not being used; I don't use upscaling in the examples, I generate everything at 720x1280 or 480x832. I'll post the link to the original workflow here.

    https://civitai.com/models/1309369?modelVersionId=1529049

    gamil2876727783788 · Mar 14, 2025 · 3 reactions

    Look at my posts: amazing results with Wan and this LoRA.

    NRDX (Author) · Mar 14, 2025

    Very nice! Are you generating at 720x1280? Using the quantized version of the model? What sampler?

    gamil2876727783788 · Mar 17, 2025

    @alissonerdx Yes, 720x1280, and dpm++. Just the fp8 version.

    gamil2876727783788 · Mar 17, 2025 · 1 reaction

    @alissonerdx I am using Kijai's workflow and his quantized model.

    Elliryk2 · Mar 15, 2025 · 2 reactions

    Thanks for the buzzz! :)

    LovelaceA · Mar 16, 2025 · 3 reactions

    The potential on this is HUGE... I uploaded a video and really like the result.

    By the way, for I2V generation, are there any parameters that can control the movement scale (steps/CFG/prompt input)? Or is it more of a random process?

    NRDX (Author) · Mar 16, 2025 · 1 reaction

    Try reducing the LoRA strength a bit. I'm going to train this version again, but with captions + trigger word instead of just a single trigger word like before, to see if it makes control easier. I think that might have been the problem with this version; I don't know, I'm still exploring. The 720 version seems a bit more static, but the 480 version generates more movement; that's what I noticed.

    NRDX (Author) · Mar 17, 2025 · 9 reactions

    For those who want to loop videos: Kijai has just added experimental support for Mobius, which was released yesterday (03/16/2025). It's experimental because Mobius was only released for CogVideo and VideoCraft, but Kijai made it work with Wan.

    https://github.com/kijai/ComfyUI-WanVideoWrapper

    yh344781 · Mar 18, 2025

    Do you know how to add this node to the workflow?

    3621282 · Mar 18, 2025 · 1 reaction

    The node is called WanVideo Loop Args, but adding it just gives me weird artifacts all over the video and complete distortion for both t2v and i2v.

    Catz · Mar 28, 2025

    @KaptainSisay Have you found a solution for keeping the quality steady?

    Edit: I've tried various options and all failed. More info on that node here:
    https://github.com/kijai/ComfyUI-WanVideoWrapper/issues/265

    3621282 · Apr 2, 2025

    @Catz Just use the start and init images in i2v and use the same one.

    Catz · Apr 3, 2025

    @KaptainSisay I'm not sure I understand. I have an image reference for i2v, but are you saying it's possible to specify a second place in the workflow so it ends on that same image? What node would that be connected to?

    Catz · Apr 3, 2025

    @KaptainSisay Ohhh, I didn't even notice that workflow, I'll give it a shot, thanks!

    I've also just seen this one that uses WanStartEndFrames; I wonder if the workflow you've linked isn't the same method.
    https://civitai.com/models/1426572/wan-21-seamless-loop-test-workflow?modelVersionId=1612446

    gamil2876727783788 · Mar 20, 2025 · 2 reactions

    A new node for looping videos has come out, and soon Kijai's node will support this feature, based on what he said in the issue: ComfyUI-WanVideoStartEndFrames

    LovelaceA · Mar 24, 2025 · 1 reaction

    Another question: what do you think is the best way to generate smooth looping animations now?

    I tried several ways:

    1. Enable the ping pong option in the workflow. Easiest, but only applicable to limited scenes. Many motions or backgrounds (like falling snow) do not feel right when played backwards (a small ping-pong sketch follows this list).

    2. Generate multiple videos and try to manually trim them. Very time consuming.

    3. Use frame interpolation. I had the idea of maybe interpolating between the first few frames and the last few frames? Haven't tried it yet, but if the start and end frames are too different it may not work.

    4. Use the newly released start-end frame node to combine 2 videos with start-end and end-start frames set up. Seems mostly reasonable, but for me the quality of start-end frame generation really varies. Need to try a lot of times too.
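    For reference, option 1 is trivial to express in code; a minimal sketch that drops the endpoint frames so they are not duplicated at the turnarounds:

```python
def ping_pong(frames: list) -> list:
    # [f0..fn] -> [f0..fn, f(n-1)..f1], which loops back to f0 seamlessly.
    return frames + frames[-2:0:-1]

print(ping_pong([0, 1, 2, 3]))  # [0, 1, 2, 3, 2, 1]
```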

    NRDX (Author) · Mar 25, 2025 · 1 reaction

    Unfortunately I don't have an answer to the loop issue. What I can try is to train a LoRA controller specialized in the loop effect; in fact, I may try this, because so far it is one of the most difficult things to do. Mobius works well, but for T2V, not I2V.

    LovelaceA · Mar 26, 2025

    @alissonerdx No worries. Whether a LoRA controller can solve this is doubtful. My first guess is that, if there is one, maybe it can make the video movement more coherent and smooth, but the first and last frames may not necessarily transition into each other seamlessly. Your live wallpaper LoRA is already doing that; I guess a lot of the live wallpaper training set is looping animation already.

    Maybe it is more about the workflow setup than the LoRA. For example, after the video with the wallpaper LoRA is generated, one can plug in another start-end frame workflow, connecting the last and first frames, then do some frame interpolation to smooth the frame rate and transition. Again, the start-end frame workflow is less stable, at least when I try it. Sometimes the start and end frames just don't really guide the generated video...

    Daru_22 · Apr 14, 2025 · 1 reaction

    @LovelaceA You can use start and end frame guidance.
    https://github.com/raindrop313/ComfyUI-WanVideoStartEndFrames?tab=readme-ov-file
    Have fun, it works.

    LovelaceA · Apr 14, 2025

    @Daru_22 So you mean use the input image as both start and end frame? Hmm, I will try that again.

    Daru_22 · Apr 14, 2025

    @LovelaceA Yeah, make the guidance generate the last frame from the first, and remember to delete the last frame since it's the same as the first (see the sketch below). It loops; not always perfectly, but it loops.
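    A tiny sketch of that trick: if generation was guided so the last frame matches the first, dropping the duplicate makes the clip loop cleanly:

```python
def close_loop(frames: list) -> list:
    # Assumes frames[-1] was guided to match frames[0].
    return frames[:-1]

clip = ["f0", "f1", "f2", "f0"]  # hypothetical start-end guided output
print(close_loop(clip))          # ['f0', 'f1', 'f2'], then repeat
```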

    LovelaceA · Apr 19, 2025

    @Daru_22 Tried it a lot of times, but the results with start-end frames tend to contain some heavily flashing frames... Don't know how to improve that. Maybe I should wait for a future Framepack with a start-end frame function...?

    NRDX (Author) · Apr 19, 2025

    @LovelaceA Have you tried VACE? Or the Wan2.1-FLF2V model? I haven't tried using the LoRA on these models, but it would be interesting to test.

    LovelaceA · Apr 19, 2025 · 1 reaction

    @alissonerdx Thanks for the reply. Yeah, I need to do some further testing with FLF, and I'm also waiting for Framepack to get a first-last frame function... Bright future ahead for sure.

    LovelaceA · Apr 28, 2025

    @Alissonerdx Sorry for the late reply. Yeah, FLF can help create much smoother looping animations, but it still needs some minor adjustments like deleting some frames and doing some interpolation. I am still testing whether the LoRA can help with the live wallpaper effect. Based on my limited testing, it will not break the animation, but the "slow and subtle" movement effect seems to be much weaker. May need to add the trigger word and try again. BTW, I was using the 720P Wan LoRA.

    NRDX (Author) · Apr 29, 2025

    @LovelaceA I need to see if there is a way to train this LoRA for the FLF version.

    EechiZero · Mar 24, 2025 · 3 reactions

    Great-quality LoRA with awesome results. I will later try the methods mentioned here to create a loop.

    nvh · Mar 24, 2025 · 3 reactions

    Been testing this on i2v 480p on and off for a week; it works quite well.

    NRDX (Author) · Mar 25, 2025 · 1 reaction

    Thanks!

    Polymath_wtf · Mar 25, 2025 · 1 reaction

    1.3B model for v2v, please.

    NRDX (Author) · Mar 25, 2025

    But there is no official V2V; or did you mean T2V?

    LovelaceA · Mar 30, 2025 · 1 reaction

    Wan released the 2.1 Fun model; I just wonder about the compatibility of this LoRA with Wan 2.1 Fun?

    NRDX (Author) · Apr 1, 2025 · 1 reaction

    I haven't tested Fun yet, but I don't think it should work. I'll test it and let you know.

    LovelaceA · Apr 1, 2025

    @alissonerdx Thanks for the reply. Yeah, I am curious to see the result.

    AndroidXL · Apr 1, 2025 · 1 reaction

    Wow, this is beautiful! What made you choose a rank of 64 instead of something like 32?

    NRDX (Author) · Apr 1, 2025

    I wanted to preserve as much detail as possible from my dataset; there was no reason other than that, and that's why I did the training with rank 64.

    Catz · Apr 2, 2025 · 1 reaction

    I keep getting unwanted, exaggerated movements of the character; sometimes the background stays still, but most of the time it moves too much.

    I tried describing the character, using only the trigger, specifying no movement/fixed camera, and different enhance weights.

    It seems I get way better results at 480p, but I want the 720p quality.


    I think the only thing I haven't tried is increasing the LoRA strength above 1. Do you have any tips on how to prompt a fixed camera with no movement, where the character moves a bit, without glitching or relying on pure luck of the seed?

    NRDX (Author) · Apr 2, 2025 · 1 reaction

    I didn't have much of this problem. The 480p model is actually much better than the 720p model, but if you look at the images posted by people who used the LoRA, there are some examples of static backgrounds. The captions only had the trigger word; I didn't specify the movement, but I'll probably train an improved version of this 720p LoRA. Many things also depend on the model you use; for example, if you use heavily quantized models, the tendency is to get much worse results, so it depends on many factors.

    Catz · Apr 3, 2025 · 1 reaction

    @alissonerdx Ah, I see, thanks for confirming the 720p quality difference; I was wondering if it was my settings. Since I need a 1080p landscape version, I tried upscaling the 480p versions, but there are too many artifacts. So I found the 720p model best for quality, but it feels like gambling, as I have to queue 20+ times to get one that doesn't glitch out or have the background interfere with the character. I'm thinking of trying the Fun version in the hope that controlnets help, but a better 720p model, just like the 480p one, would be gold!

    I use the 14B 720p FP8 e5m2 model on my 3090, which I think is the best I can get, as the fp16 and bf16 are too heavy.

    I'll try the same prompts as others in case I find a pattern that helps, thanks!

    bbaudio · Apr 5, 2025 · 7 reactions

    The Fun 1.3B i2v model is very efficient; it can generate looping videos at 1280x720x81 within 3 minutes on 12GB of VRAM. I think it would be awesome with a Live2D LoRA; would you please train one for it?

    NRDX (Author) · Apr 5, 2025 · 5 reactions

    Yes, I can try to train :D

    LovelaceA · Apr 18, 2025 · 3 reactions

    And the native Wan2.1 First Last Frame (FLF) model has been released... Can't wait to test whether the LoRA can be applied to it.

    AImaxtro · May 5, 2025 · 1 reaction

    Very nice. I'm using this a lot.

    Is there any way to make people blink less? In my clips they are blinking about once every second.

    NRDX (Author) · May 6, 2025

    This is very strange; it could be some configuration in your workflow.

    AImaxtro · May 6, 2025

    @Alissonerdx I wonder if it's a Wan thing? I just can't seem to make a video without the person blinking all the time.

    loneillustrator · Jun 4, 2025 · 1 reaction

    I can confirm mine blinks a lot as well.

    NRDX (Author) · Jun 4, 2025

    @loneillustrator It must be a problem in your workflow, because look at how many examples don't do this. I'm going to start sharing the workflow to avoid this type of problem.

    tearhero · May 9, 2025 · 3 reactions

    Can you make this LoRA for LTXV? I really love your LoRA, but my computer is quite weak, so Wan video takes a long time.

    NRDX (Author) · May 9, 2025 · 4 reactions

    Sure, I will train an LTX version as soon as possible.

    tearhero · May 10, 2025

    @Alissonerdx Thank you so much; I look forward to your LoRA.

    AndroidXL · May 21, 2025 · 1 reaction

    Did anyone download the Live Wallpaper Plus from here before the maker made it private and exclusive to Tensor? DM me.

    NRDX (Author) · May 23, 2025 · 7 reactions

    The Plus was never a model posted here; I made it for the Tensor.art contest, but I will soon release a better one here on Civitai.

    LovelaceA · May 25, 2025

    @Alissonerdx Long awaited!

    AndroidXL · May 25, 2025

    @Alissonerdx Then again, someone else asked for an LTX Video model almost 3 weeks ago, and they're still waiting after you said you would train it as soon as possible :/ No pressure, just don't expect it anytime soon.

    NRDX (Author) · May 25, 2025 · 3 reactions

    @AndroidXL No pressure? Hehehe. I have thousands of things to do; when I say I'm going to train, I will train it at some point. My "fastest possible" can be a long time hehehe for some people. I am going to train it, yes, but I need to build a more elaborate dataset, and that's not something I can do from one hour to the next. Another thing: I don't earn a penny from this and sometimes spend a lot of money, so I don't train like crazy; I'm very selective.

    floralis · May 26, 2025

    @Alissonerdx Is the one on Tensor.art better?

    NRDX (Author) · May 26, 2025

    @floralis I don't think it's much better; I think it's similar to the 480p model but for 720p.