The goal of this LoRA is to reproduce a video style similar to a live wallpaper. If you play League of Legends, think of the launcher opening videos: that's the goal. But you can also use it to create your lofi videos :D Enjoy.
[Wan2.2 TI2V 5B - Motion Optimized Edition] Trained on 51 curated videos (24fps, 96 frames) for 5,000 steps across 100 epochs with rank 48. Optimized specifically for Wan2.2's unified TI2V 5B dense model and high-compression VAE.
My Workflow (it's not organized, the important thing is that it works hahaha): Live Wallpaper LoRA - Wan2.2 5B (Workflow) | Patreon
Loop Workflow: WAN 2.2 5b WhiteRabbit InterpLoop - v1.0 - Hardline | Wan Video Workflows | Civitai
Trigger word: l1v3w4llp4p3r
[Wan2.2 I2V A14B - Full Timestep Edition]
Trained on 301 curated videos (256px, 16fps, 49 frames) for 24 hours using Diffusion Pipe with Automagic optimizer, rank 64. Uses extended timestep range (0-1) instead of standard (0-0.875), enabling compatibility with both Low and High models despite training only on Low model.
Trigger word: l1v3w4llp4p3r
Works excellently with LightX2V v2 (256 rank) for faster inference
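The difference between the extended and standard timestep ranges can be sketched as a simple sampling function (illustrative only; the function and variable names are mine, not from Diffusion Pipe):

```python
import random

def sample_timestep(t_max=0.875):
    """Draw a training timestep uniformly from [0, t_max].

    t_max=0.875 is the standard truncated range; t_max=1.0 is the
    extended range this LoRA was trained with, which also covers the
    high-noise steps normally handled by the High model.
    """
    return random.uniform(0.0, t_max)

standard = [sample_timestep(0.875) for _ in range(1000)]
extended = [sample_timestep(1.0) for _ in range(1000)]
assert max(standard) <= 0.875   # standard schedule never sees the noisiest steps
assert max(extended) <= 1.0     # extended schedule covers the full range
```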
[Wan I2V 720P Fast Fusion - 4 (or more) steps]
Wan I2V 720P Fast Fusion combines 2 Live Wallpaper LoRAs (1 exclusive) with the Lightx2v, AccVid, MoviiGen, and Pusa LoRAs for ultra-fast 4+ step generation while maintaining cinematic quality.
Lightx2v LoRA - accelerates generation by 20x through 4-step distillation, enabling sub-2-minute videos on an RTX 4090 with only an 8GB VRAM requirement.
AccVid LoRA - improves motion accuracy and dynamics for expressive sequences.
MoviiGen LoRA - adds cinematic depth and flow to animation, enhancing visual storytelling.
Pusa LoRA - provides fine-grained temporal control with zero-shot multi-task capabilities (start-end frames, video extension) while achieving an 87.32% VBench score.
Wan I2V 720p (14B) base model - provides strong temporal consistency and high-resolution outputs for expressive video scenes.
[Wan I2V 720P]
The dataset consists of 149 hand-selected videos at 1280x720, 96 frames each, but training was done at 244p and 480p with 64 frames and dim 64 (on an L40S).
A trigger word was used, so it needs to be included in the prompt: l1v3w4llp4p3r
[Hunyuan T2V]
The dataset consists of 529 hand-selected videos at 1280x720, 96 frames each, but training was done at 244p with 72 frames and dim 64 (on multiple RTX 4090s).
No captions or activation words were used; the only control you will need to adjust is the LoRA strength.
Another important note: it was trained on full blocks. I don't know how it will behave when mixing two or more LoRAs; if you want to mix and are not getting a good result, try disabling the single blocks.
I recommend a LoRA strength between 0.2 and 1.2 maximum, a resolution of 1280x720 (or generate at 512 and upscale later), and a minimum of 3 seconds (72 frames + 1).
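The "72 frames + 1" follows the common video-model constraint that frame counts take the form 4n + 1 because of the VAE's 4x temporal compression. A quick sanity check (my helper functions, not part of any workflow):

```python
def valid_frame_count(frames: int) -> bool:
    # Frame counts must be of the form 4n + 1 due to 4x temporal compression.
    return frames % 4 == 1

def frames_for_seconds(seconds: float, fps: int = 24) -> int:
    # Snap a duration to the nearest valid 4n + 1 frame count at the given fps.
    raw = round(seconds * fps)
    return (raw // 4) * 4 + 1

assert valid_frame_count(73)        # 72 + 1, i.e. 3 seconds at 24 fps
assert not valid_frame_count(72)    # 72 alone is not a valid count
assert frames_for_seconds(3) == 73
```

The same rule explains the 49- and 81-frame counts mentioned for the Wan trainings elsewhere on this page (49 = 4·12 + 1, 81 = 4·20 + 1).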
[LTXV I2V 13b 0.9.7 - Experimental v1]
The model was trained on 140 curated videos (512px, 24fps, 49 frames), using 250 epochs, 32 dim, and AdamW8bit.
It was trained using Diffusion Pipe with support for LTXV I2V v0.9.7 (13B).
Captions were used and generated with Qwen2.5-VL-7B via a structured prompt format.
This is an experimental first version, so expect some variability depending on seed and prompt detail.
Recommended:
Scheduler: sgm_uniform
Sampler: euler
Steps: 30
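Collected in one place as a plain dictionary for easy copying (the key names mirror ComfyUI's KSampler inputs; this is a reference snippet, not a workflow file):

```python
# Recommended sampling settings for the LTXV I2V 13b 0.9.7 version.
ltxv_settings = {
    "sampler_name": "euler",
    "scheduler": "sgm_uniform",
    "steps": 30,
}
assert ltxv_settings["steps"] == 30
```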
Note: long prompts are highly recommended to avoid motion artifacts.
You can generate captions using the Ollama Describer or optionally use the official LTXV Prompt Enhancer.
For more details, see the About this version tab.
------------------------------------------------------------------------------------------------------
For more details, see the version description.
Share your results.
Comments (71)
One of the best video LoRAs I have seen!
Looks incredible!
I would love it if there were an LTXV version, as it's possible to set images to any frame with it. That means this could be made loopable by setting the initial image as both the start and end frames.
I can try to train a version for LTX.
The base model of LTXV is much worse than Wan; it may be enough for a wallpaper LoRA. But if you want it to play in a loop, it's better to simply rewind.
Thanks for this model, are you able to share your workflow?
@alissonerdx Thanks!
@alissonerdx Did you ever get Sage Attention working with your RTX 5090? If so, was it on Linux or Windows?
@gatherscasinos Sage Attention is working on the 5090, but as of when I last checked it was not yet optimized. I use WSL.
@alissonerdxĀ Great workflow. For the upscale part, the "upscale by" with actually multiple the default number of the upscale model, rather than upscale the video by that amount. For example, using a 2 upscale by and 4x upscaler will result the final resolution to be 8x, rather than 2x.
@LovelaceA I didn't create this workflow. Apart from that, the upscale part of the workflow I shared isn't being used; I don't use upscaling in the examples, I generate everything at 720x1280 or 480x832. I'll post the link to the original workflow here.
https://civitai.com/models/1309369?modelVersionId=1529049
Look at my posts: Wan is amazing with this LoRA.
Very nice. Are you generating at 720x1280? Using the quantized version of the model? What sampler?
@alissonerdx Yes, 720x1280, and dpm++. Just the fp8 version.
@alissonerdx I am using Kijai's workflow and his quantized model.
Thanks for the buzz! :)
The potential of this is HUGE... I uploaded a video and really like the result.
By the way, for I2V generation, are there any parameters that can control the movement scale (steps/CFG/prompt input)? Or is it more of a random process?
Try reducing the LoRA strength a bit. I'm going to train this version again, but with captions plus the trigger word rather than just the trigger word alone, to see if it makes control easier. I think that might have been the problem with this version; I don't know, I'm still exploring. The 720p version seems a bit more static, while the 480p version generates more movement; that's what I noticed.
For those who want to loop videos, Kijai has just added experimental support for Mobius, which was released yesterday (03/16/2025). It's experimental because Mobius was only released for CogVideo and VideoCraft, but Kijai made it work with Wan.
Do you know how to add this node in workflow?
The node is called WanVideo Loop Args, but adding it just gives me weird artifacts all over the video and complete distortion for both t2v and i2v.
@KaptainSisay Have you found a solution for keeping the quality steady?
Edit: I've tried various options and all failed. More info on that node here:
https://github.com/kijai/ComfyUI-WanVideoWrapper/issues/265
@Catz Just use the same image as both the start and init image in i2v.
@KaptainSisay I'm not sure I understand. I have an image reference for i2v, but are you saying there is a second place in the workflow where I can specify that it ends on that same image? What node would that be connected to?
@KaptainSisay Ohhh, I didn't even notice that workflow, I'll give it a shot, thanks!
I've also just seen this one, which uses WanStartEndFrames; I wonder if the workflow you've linked isn't the same method.
https://civitai.com/models/1426572/wan-21-seamless-loop-test-workflow?modelVersionId=1612446
A new node for looping videos has come out, and soon Kijai's node will support this feature, based on what he said in the issue: ComfyUI-WanVideoStartEndFrames
I have another question: what do you think is the best way to generate a smooth looping animation now?
I tried several ways:
1. Enable the ping-pong option in the workflow. Easiest, but only applicable to limited scenes. Many motions or backgrounds (like falling snow) do not feel right when played backwards.
2. Generate multiple videos and try to manually trim them. Very time-consuming.
3. Use frame interpolation. I had the idea of maybe interpolating between the first few frames and the last few frames. I haven't tried it yet, but if the start and end frames are too different it may not work.
4. Use the newly released start-end frame node to combine two videos with start-end and end-start frames set up. Seems mostly reasonable, but for me the quality of start-end frame generation really varies. Need to try a lot of times too.
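Method 1 (ping-pong) amounts to simple frame-list manipulation, which also shows why reversed motion looks wrong for things like falling snow: the second half literally plays the first half backwards (illustrative Python, not a ComfyUI node):

```python
def ping_pong(frames):
    """Append the reversed sequence, dropping the duplicated endpoints,
    so the clip plays forward then backward and loops seamlessly."""
    return frames + frames[-2:0:-1]

clip = [0, 1, 2, 3]            # frame indices standing in for images
looped = ping_pong(clip)
assert looped == [0, 1, 2, 3, 2, 1]
# On repeat this cycles ...3, 2, 1, 0, 1, 2, 3... with no visible jump,
# but any directional motion (snow, rain, scrolling) reverses midway.
```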
Unfortunately I don't have an answer to the loop issue. What I can try is to train a LoRA controller specialized in producing the loop effect; in fact, I may try this because so far it is one of the most difficult things to do. Mobius works well, but for T2V, not I2V.
@alissonerdx No worries. Whether a LoRA controller can solve this is doubtful. My first guess is that, if there is one, maybe it can make the video movement more coherent and smooth, but the first and last frames may not necessarily transition into each other seamlessly. Your live wallpaper LoRA is already doing that; I'd guess a lot of the live wallpaper training set is looping animation already.
Maybe it is more about the workflow settings than the LoRA. For example, after the video with the wallpaper LoRA is generated, one could plug in another start-end frame workflow connecting the last and first frames, then apply some frame interpolation to smooth the frame rate and transition. Again, the start-end frame workflow is less stable, at least when I try it; sometimes the start and end frames are just not really guiding the generated video.
@LovelaceA You can use start and end frame guidance.
https://github.com/raindrop313/ComfyUI-WanVideoStartEndFrames?tab=readme-ov-file
have fun, it works.
@Daru_22 So you mean use the input image as both the start and end frame? Hmm, I will try that again.
@LovelaceA Yeah, use the first frame as the guidance for the last, and remember to delete the last frame since it's the same as the first. It loops; not always perfectly, but it loops.
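The "delete the last frame" step can be sketched like this (assuming the generation was guided to end on the same image it started from):

```python
def trim_for_loop(frames):
    # When the end frame equals the start frame, drop the final copy
    # so playback doesn't hold the same image for two frames in a row.
    if len(frames) > 1 and frames[-1] == frames[0]:
        return frames[:-1]
    return frames

clip = ["A", "B", "C", "A"]   # start-end guided clip ending on its first frame
assert trim_for_loop(clip) == ["A", "B", "C"]
```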
@Daru_22 I tried a lot of times, but the result with start-end frames tends to contain some very flickery frames... I don't know how to improve it. Maybe I should wait for the new FramePack with a start-end frame function in the future?
@LovelaceA Have you tried VACE, or the Wan2.1-FLF2V model? I haven't tried using the LoRA on these models, but it would be interesting to test.
@alissonerdx Thanks for the reply. Yeah, I need to do some further testing with FLF and am also waiting for FramePack to get a first-last frame function... Bright future ahead for sure.
@Alissonerdx Sorry for the late reply. Yeah, FLF can help create much smoother looping animation, but it still needs some minor adjustments, like deleting some frames and doing some interpolation. I am still testing whether the LoRA can help with the live wallpaper effect. Based on my limited tests, it will not break the animation, but the "slow and subtle" movement effect seems much weaker. I may need to add the trigger word and try again. BTW, I was using the 720p Wan LoRA.
@LovelaceA I need to see if there is a way to train this LoRA for the FLF version.
Great-quality LoRA with awesome results. I will later try the methods mentioned here to create a loop.
I've been testing this with i2v 480p on and off for a week; it works quite well.
Thanks
1.3B model for v2v, please
But there is no official V2V; did you mean T2V?
Wan released the 2.1 Fun model; I'm just wondering about the compatibility of this LoRA with Wan 2.1 Fun.
Wow this is beautiful, what made you choose a rank of 64 instead of something like 32?
I wanted to preserve as much detail as possible from my dataset; there was no other reason than that, and that's why I trained with rank 64.
I keep getting unwanted, exaggerated movements of the character; sometimes the background stays still, but most of the time it moves too much.
I tried describing the character, using only the trigger, specifying no movement/a fixed camera, and different enhance weights.
It seems I have way better results in the 480p, but I want the 720p quality.
I think the only thing I haven't tried is increasing the LoRA strength above 1. Do you have any tips on how to prompt a fixed camera with no movement, where the character still moves a bit without glitching, or is it pure luck of the seed?
I didn't have much of this problem; the 480p model is actually much better than the 720p model. If you look at the images posted by people who used the LoRA, there are some examples with a static background. The captions only had the trigger word; I didn't specify the movement. I'll probably train an improved version of this 720p LoRA. Many things also depend on the model you use: for example, with heavily quantized models the results tend to be much worse, so it depends on many factors.
@alissonerdx Ah I see, thanks for confirming the 720p quality difference; I was wondering if it was my settings. Since I need a 1080p landscape version, I tried upscaling the 480p outputs, but there are too many artifacts. So I found the 720p model best for quality, but it feels like gambling, as I have to queue 20+ times to get one that doesn't glitch out or have the background interfere with the character. I'm thinking of trying the Fun version in the hope that controlnets help, but a better 720p model, on par with the 480p one, would be gold!
I use the 14B 720p FP8 e5m2 model on my 3090, which I think is the best I can get, as fp16 and bf16 are too heavy.
I'll try the same prompts as others in case I find a pattern that helps, thanks!
The Fun 1.3B i2v model is very efficient; it can generate loop videos of 1280x720x81 within 3 minutes and 12GB of VRAM. I think it would be awesome with a Live2D LoRA; would you please train one for it?
Yes, I can try to train :D
And the native Wan2.1 First Last Frame (FLF) model has been released... Can't wait to test whether the LoRA can be applied to it.
Very nice. I'm using this a lot.
Is there any way to make people blink less? In my clips they are blinking about once every second.
This is very strange, it could be some configuration in your workflow.
@Alissonerdx I wonder if it's a Wan thing? I just can't seem to make a video without the person blinking all the time.
I can confirm it blinks a lot in mine as well.
@loneillustrator It must be a problem in your workflow, because look at how many examples don't do this. I'm going to start sharing the workflow to avoid this type of problem.
Can you make this LoRA for LTXV? I really love your LoRA, but my computer is quite weak, so Wan video takes a long time to use.
Sure, I will train an LTX version as soon as possible.
@Alissonerdx Thank you so much, and I look forward to your LoRA.
Did anyone download the Live Wallpaper Plus from here before the maker made it private and exclusive to Tensor? DM me.
The Plus was never a model posted here; I made it for the Tensor.art contest, but I will soon release a better one here on Civitai.
@Alissonerdx Long awaited!
@Alissonerdx Then again, someone else asked for an LTXVideo model almost 3 weeks ago, and they're still waiting even though you said you would train it as soon as possible :/ No pressure, just don't expect it anytime soon.
@AndroidXL No pressure? Hehehe, I have thousands of things to do. When I say I'm going to train, I will train at some point; my "fastest possible" can be a long time hehehe for some people. I am going to train it, yes, but I need to build a more elaborate dataset, and that's not something I can do from one hour to the next. Another thing: I don't earn a penny from this and sometimes spend a lot of money, so I don't train like crazy; I'm very selective.
@Alissonerdx Is the one on Tensor.art better?
@floralis I don't think it's much better; I think it's similar to the 480p model but for 720p.
Details
Files
wan_i2v_livewallpaper_480p_e50_no_captions.safetensors