🧬 Live Wallpaper Fast Fusion – 8 to 10 Step Edition
Live Wallpaper Fast Fusion is a high-performance merged model that brings together the strengths of:
🎞️ Live Wallpaper LoRAs – two custom LoRAs trained to produce fluid motion, parallax depth, and anime/game-style aesthetics.
⚡ CausVid LoRA – enables ultra-fast video generation in just 8 to 10 steps while preserving high visual quality (code: https://github.com/tianweiy/CausVid; weights: Wan21_CausVid_14B_T2V_lora_rank32_v2.safetensors from Kijai/WanVideo_comfy)
🎬 AccVid LoRA – improves motion accuracy and dynamics for expressive sequences (code: aejion/AccVideo; weights: Wan21_AccVid_T2V_14B_lora_rank32_fp16.safetensors from Kijai/WanVideo_comfy)
🌌 MoviiGen LoRA – adds cinematic depth and flow to the animation, enhancing visual storytelling (code: ZulutionAI/MoviiGen1.1; weights: Wan21_T2V_14B_MoviiGen_lora_rank32_fp16.safetensors from Kijai/WanVideo_comfy)
🧠 Wan I2V 720p (14B) base model – provides strong temporal consistency and high-resolution output for expressive video scenes.
There are four files for download: an fp8 version and three GGUF quantizations (Q8, Q6, Q4). Civitai offers no way to mark GGUF quantization levels on files, so the file labeled fp32 is actually Q8, fp16 is Q6, and nf4 is Q4.
This fusion results in a versatile and powerful video generation model, capable of producing short cinematic clips (2 to 5 seconds) with smooth, natural motion and rich visual detail. While inspired by live wallpaper aesthetics, the model is designed for short, expressive animations ideal for storytelling, dynamic backgrounds, and ambient scenes.
❗ Do not reapply CausVid, AccVid, or MoviiGen LoRAs — they are already baked into the model and reapplying them may degrade results.
Recommended CFG: 1
🎨 You can safely use additional LoRAs for extra style or effects — feel free to experiment.
🛠️ Suggested Caption Workflow (LLM + Template)
To maximize output quality, you can use any LLM (such as ChatGPT, Gemini, Claude, etc.) with the following prompt template to generate motion-aware captions for your images:
You are an expert in motion design for seamless animated loops.
Given a single image as input, generate a richly detailed description of how it could be turned into a smooth, seamless animation.
Your response must include:
✅ What elements **should move**:
– Hair (e.g., swaying, fluttering)
– Eyes (e.g., blinking, subtle gaze shifts)
– Clothing or fabric elements (e.g., ribbons, loose parts reacting to wind or motion)
– Ambient particles (e.g., dust, sparks, petals)
– Light effects (e.g., holograms, glows, energy fields)
– Floating objects (e.g., drones, magical orbs) if they are clearly not rigid or fixed
– Background **ambient** motion (e.g., fog, drifting light, slow parallax)
🚫 And **explicitly specify what should remain static**:
– Rigid structures (e.g., chairs, weapons, metallic armor)
– Body parts not involved in subtle motion (e.g., torso, limbs unless there’s idle shifting)
– Background elements that do not visually suggest movement
⚠️ Guidelines:
– The animation must be **fluid, consistent, and seamless**, suitable for a loop
– Do NOT include sudden movements, teleportation, scene transitions, or pose changes
– Do NOT invent objects or effects not present in the image
– Do NOT describe static features like colors, names, or environment themes
– Return only the description (no lists, no markdown, no instructions)
Use the output from the LLM directly as your video prompt to ensure motion relevance and temporal coherence.
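If you want to script this step, here is a minimal sketch using the OpenAI Python SDK; any vision-capable LLM works the same way. The model name, file paths, and the truncated TEMPLATE string are illustrative assumptions, not part of the original workflow.

```python
# Minimal sketch: send an image plus the caption template above to a
# vision-capable LLM and use its reply directly as the video prompt.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The full prompt template from this section goes here (truncated for brevity).
TEMPLATE = "You are an expert in motion design for seamless animated loops. ..."

with open("input.png", "rb") as f:  # the image you want to animate
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model works
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": TEMPLATE},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)

# Paste this string into your I2V workflow as the positive prompt.
print(response.choices[0].message.content)
```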
🎯 Best for:
Short video generation (2–5 seconds)
Anime/game-inspired motion scenes
Ambient motion with parallax, particles, soft light, and floating elements
Fast generation workflows (8 to 10 steps)
🔁 Want to generate true seamless loops?
Check out this community workflow based on Wan 2.1:
👉 WAN 2.1 Seamless Loop Workflow (I2V) on Civitai
⚠️ Disclaimer:
Videos generated using this model are intended for personal, educational, or experimental use only, unless you’ve completed your own legal due diligence.
This model is a merge of multiple research-grade sources, and is not guaranteed to be free of copyrighted or proprietary data.
You are solely responsible for any content you generate and how it is used.
If you choose to use outputs commercially, you assume all legal liability for copyright infringement, misuse, or violation of third-party rights.
When in doubt, consult a qualified legal advisor before monetizing or distributing any generated content.
Comments (64)
I intend to make some quantized versions with GGUF for this experimental checkpoint.
My question is: Is this file meant to be used as a LoRA file?
So the proper usage would be: load the wan2.1 base model, and then load your file as a LoRA, right?
In that case, the workflow remains the same as before, correct?
@citywalker1127821 It's meant to be used as a base model; you don't need to load it as a LoRA or add CausVid on top.
@Alissonerdx will you share the GGUFs? A Q6 or Q5 would be perfect.
@Kung_fu_porn GGUFs have been added.
@Alissonerdx thanks a lot man!
How did you merge the 2 LoRAs (CausVid + Parallax) into the main WAN model to get this new one?
Thanks and appreciate!
It's very simple: ComfyUI has native nodes for this. First, load the Wan model with the node called LoadDiffusionModel, then connect to it the LoRAs you want to merge using the LoraLoaderModelOnly node, applying the strength you want; you can chain several of these, including the CausVid one (I recommend V2). At the end of it all, connect a node called Model Save and it will save a model with those LoRAs applied.
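For anyone who prefers a script to the node chain above, here is a rough sketch of the same idea in plain Python: bake each LoRA delta into the base weights and save a standalone checkpoint. This is not the author's method, just an equivalent under assumptions; the file paths and the lora_down/lora_up/alpha key naming vary between LoRA files and will need adapting.

```python
# Sketch: merge a LoRA into a base checkpoint offline (W' = W + s * scale * up @ down).
# Key naming ("lora_down", "lora_up", ".alpha") is an assumption; adapt to your files.
import torch
from safetensors.torch import load_file, save_file

base = load_file("wan2.1_i2v_720p_14b.safetensors")  # hypothetical base path
lora = load_file("Wan21_CausVid_14B_T2V_lora_rank32_v2.safetensors")
strength = 1.0  # the strength you would set on the LoraLoaderModelOnly node

for key in [k for k in lora if k.endswith("lora_down.weight")]:
    down = lora[key].float()
    up = lora[key.replace("lora_down", "lora_up")].float()
    alpha_key = key.replace("lora_down.weight", "alpha")
    scale = lora[alpha_key].item() / down.shape[0] if alpha_key in lora else 1.0
    # Map the LoRA key back to the base weight it patches (naming assumption).
    base_key = key.replace("lora_down.weight", "weight")
    if base_key in base:
        merged = base[base_key].float() + strength * scale * (up @ down)
        base[base_key] = merged.to(base[base_key].dtype)

save_file(base, "live_wallpaper_fast_fusion.safetensors")
```

To bake in several LoRAs, repeat the loop once per file, just as you would chain LoraLoaderModelOnly nodes.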
@Alissonerdx Alright, thanks. The only answer to this question I ever got was on a reddit post of mine. This was the answer:
"https://github.com/kohya-ss/musubi-tuner/blob/main/wan_generate_video.py
and set --save_merged_model"
Is this method of yours basically the same as the one above?
Also, it would be really appreciated if you could share even a very basic "merge workflow" for WAN, because I'm coming from a1111/forge and I don't really know how to do this stuff in ComfyUI :(
@ArtistUndead It actually wasn't done with a training script; it was done in ComfyUI itself, as I said above. I'll see if I can put together a workflow and send it to you as soon as possible, but it's very simple: you can do it with just 4 nodes.
@Alissonerdx Thanks. Hopefully you'll upload that merging workflow somewhere.
I'm afraid this question is going to come across badly, but it is a genuine question: what is the point of this?
From what I understand, this is just a base checkpoint with a couple of LoRAs mixed in, which results in a multi-gigabyte file to accomplish the same thing as a few bytes of text, i.e. "wan2.1 with CausVid (0.4) and Live Wallpaper (0.6)".
What does this checkpoint provide beyond this?
People who use it will have to answer that question for themselves; I left it up there as an experiment. In my tests, though, the quality generated by the checkpoint is much higher than what I can achieve with the LoRAs alone. I don't know why, but I noticed it; I don't have any advantage in mind that I can prove, and when I find out I'll explain here. Another point: how many checkpoints out there were made from merges? Many. Not for Wan, though; for Wan the only other one I know is from vrgamedevgirl, which uses the same principle.
It's a fine-tuned model: it gives you the abilities of the built-in LoRAs, which leaves more room to add other LoRAs in the workflow. From another point of view, it's an evolution of the main model. These model-LoRA merges are not easy to make, which is why not many exist; fine-tuning a base model is hard, so the creators and publishers of these deserve nothing but congratulations and respect.
@Alissonerdx Thank you for the considered response, I really appreciate it!
720P? Oh my god! May I ask what graphics card you're using? Because I have a 3090, and when I tried running 720P before, it took too much time. Could you tell me what graphics card you use and roughly how long it takes you to make one video? I need to gauge if I'd be able to use it.
I have a RTX 5090, 10 steps, resolution: 720x1280, "Prompt executed in 254.29 seconds"
With my 3090 I was able to get 10 steps, 720x1280 in 574.50 secs
@Catz Are you using sage attention?
@Alissonerdx Yes, and also TeaCache. Now that I'm thinking about it, I heard CausVid doesn't need TeaCache?
I am using this Seamless loop workflow, which is perfect for wallpaper loop:
https://civitai.com/models/1426572/wan-21-seamless-loop-workflow-i2v
Haven't been able to integrate CausVid lora to reduce the time on that workflow yet though, but your model does reduce the time by a lot!
@Catz That's cool, thanks for sharing the loop workflow. This specific model already comes with CausVid built in, so you don't need to worry about that. Anyway, 10 minutes is not a long time considering you generated at 720p; your video came out just about perfect, congratulations. And yes, apparently TeaCache doesn't go very well with CausVid; using this checkpoint I haven't tested it with TeaCache, since CausVid is built into it.
I use your Live Wallpaper 720p frequently so this is gold, thanks!
Might be a silly question, but how do I use this? I tried loading it up as the base model but I am not sure how to proceed from there.
@Alissonerdx that's not the workflow you suggest using with this checkpoint, is it? It seems to use LTX Video + LoRA, not WAN. Please correct me if I'm wrong; I'm a rookie.
@sstststss Oh, I'm sorry I copied the wrong one.
@sstststss fixed
Is there a version available in GGUF format?
Yes, take a look at the files section; there are 4 files, one fp8 version and three GGUF quantizations (Q8, Q6, Q4). Civitai offers no way to mark GGUF quantization levels on files, so the file labeled fp32 is actually Q8, fp16 is Q6, and nf4 is Q4.
@Alissonerdx you are amazing Sir, thanks so much!
The 14B Self Forcing LoRA is out, and some say it is better than CausVid/AccVid. I wonder, could it also be incorporated into the model?
Hello!
Could you please tell me which node needs to be installed for PrimitiveStringMultiline? When I run your workflow, it throws an error due to the absence of this node, and installing Crystools doesn’t seem to fix it.
Also, what would be the best content to include in the positive and negative prompts? And is it necessary to add text_b in the StringConcatenate node?
Thank you for your response.
Best regards.
Hello, if you are talking about the loop workflow, I think it makes more sense for you to look at the comments on the workflow post; that workflow is not mine, it belongs to someone else. Later I can test it and see if I get the same problem.
This is really impressive, but is there any guide, please? I really have no idea how an I2V workflow works.
There's no secret: you can use a workflow like IMG to VIDEO simple workflow WAN2.1 | GGUF | LoRA | UPSCALE | TeaCache - 🔴 v2.3 (complete) | Wan Video Workflows | Civitai, and instead of Wan I2V 14B you use this merge. This model is not a LoRA; it is a complete model, so it needs to be used as the base model. Beyond that, set the number of steps to a value between 8 and 10, set CFG to 1, and use a sampler like uni_pc. Adjust the resolution to 480p (832x480 or the other way around) or 720p (1280x720 or the other way around) according to your machine and image, then just run the workflow. There are also loop workflows like Wan 2.1 Image to Video Loop | Workflow - v1.5 | Wan Video 14B i2v 720p Workflows | Civitai. Remember that this model already has some LoRAs built in (the ones in the model description), so they must not be applied again.
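For reference, here is roughly what those settings look like outside ComfyUI, as a minimal sketch using the diffusers WanImageToVideoPipeline. The repo path is hypothetical (the merged checkpoint would first need converting to diffusers format); the steps, CFG, and resolution values simply mirror the recommendations above.

```python
# Minimal sketch of the recommended settings (8-10 steps, CFG 1, 720p portrait)
# with diffusers' WanImageToVideoPipeline. The model path below is hypothetical.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "path/to/live-wallpaper-fast-fusion-diffusers",  # assumed converted checkpoint
    torch_dtype=torch.bfloat16,
).to("cuda")

image = load_image("input.png")
frames = pipe(
    image=image,
    prompt="Her hair sways gently, ambient petals drift, the background stays static.",
    num_inference_steps=10,  # 8 to 10 steps as recommended
    guidance_scale=1.0,      # CFG 1, since the speed LoRAs are baked in
    height=1280, width=720,  # portrait 720p; use 832x480 on lighter machines
    num_frames=81,           # about 5 seconds at 16 fps
).frames[0]
export_to_video(frames, "output.mp4", fps=16)
```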
@Alissonerdx Thanks for the detailed explanation, I will definitely try it out later!
Has anyone figured out a way to prevent blinking? I get blinking eyes every time, even with characters with their eyes covered lmao
try this lora Live Wallpaper Style - Wan I2V 14B 720P | Wan Video LoRA | Civitai
A user there said the characters didn't blink hehehe
@Alissonerdx do you recommend using this LoRA with the Live Wallpaper Fast Fusion checkpoint or with the base Wan 2.1 model?
@Pixel_Music_Ai base model, I have not tried using it with the Live Wallpaper model
Is there a way to make the motion a little faster? Subjects move very slowly.
Yes, you can use interpolation. You can also use a speed LoRA like this one, though it's T2V, so you'll have to see whether it works well: https://civitai.com/models/1698719/high-speed-dynamic. Besides that, you can try changing the shift.
Here's an example with interpolation and upscaling, using a loop workflow:
Wan 2.1 Image to Video Loop | Workflow - v1.0 Showcase | Civitai
Alissonerdx thank you! I'll probably try the LoRA, since I don't work locally, only on Tensor or here.
What scheduler/sampler would you recommend for the checkpoint now that it is mixed with the other enhancement LoRAs? Would the Fast Fusion LoRA version use the same settings?
I almost feel like you should post a workflow for your live wallpaper models with the best settings for loops and HD upscales. I think SeedVR2 is the new best upscaler (I haven't tested it yet)? I've been getting some good results with this workflow, though it already mixes several other workflows:
https://civitai.com/models/1772470?modelVersionId=2016771
Thanks for updating your models, really appreciated!
Thank you very much Catz for the Buzz :D
Hi Catz, I don't have a perfect sampler and scheduler setup for the Live Wallpaper LoRAs, but taking up your suggestion, I'll try to create a good workflow for this and post it here. In my tests I use the defaults for sampler and scheduler: UniPC and Simple. In the case of the LoRA I posted recently, it has Lightx2v + Pusa built in, so you can use LCM + Simple and 4 steps. I still need to do more testing with that LoRA; I need to see whether it generates sufficient motion, and if not, I'll upload a new version. I haven't been able to do massive testing.
Yes, I tested SeedVR and even tested the GGUF version, which is still a PR in the SeedVR repository. I'll see if I can add this option to the workflow I create.
Alissonerdx Awesome thanks so much!
Gives very good results, but it took over 10 minutes on my 3090. It'd be nice to have a cleaner workflow that also takes the start image as the end one, with color matching, for a perfect endless loop.
Strange, but anyway, there are some loop workflows; if you look in the model description you will find a link to one. It will probably be slower if you run it with interpolation, upscaling, and high resolution. Besides, the loop itself conditions the generation toward reduced motion and more blinking, things that are easier to fit into a loop. After all, it is a loop, something that was very difficult to achieve before these workflows existed.
Hi! Can you make 480p version?
Unfortunately it's not worth the effort; there's a huge cost in GPU time, and it isn't worthwhile since you can run the 720p model at a lower resolution or just use the LoRA I have for 480p.
Alissonerdx Got it. I just thought it might speed up video generation on my 4060 Ti (16 gb). On average, it takes me 20 minutes to create one animation at a base resolution of 360x480 and with TeaChache enabled.
_Jarvis_ Use the quantized version (https://civitai.com/api/download/models/1873761?type=Model&format=GGUF&size=full&fp=nf4) plus the lightx2v LoRA with LCM + Beta, 4 steps, and CFG 1; you will probably generate much faster. Or just use the LoRA I published most recently, which already comes with lightx2v built in; then just set LCM + Beta, CFG 1, and 4 steps (or 8 if you prefer). You can also try the LoRA version for 480p.
Hello, will you make a version for wan2.2?
What prompts do you all use??
Whatever prompt you want, but focused on live wallpaper movements.
Which model should I download for 12 GB of VRAM?
You'll want the fp8, which uses around 7-8 GB of VRAM; an fp16 would use 14-15 GB.
@Akemisu So from what I understand, the model itself takes 7-8 GB of VRAM, but the way video generation works, there's another model that runs alongside it and takes 4-5 GB more?