Request a Wan2.1 LoRA on our Discord and we will train and open-source it for free.
Join our Discord to generate videos with the Squish Effect LoRA for free.
Wan2.1 14B I2V 480p v1.0:
Trained on 1.5 minutes of video, composed of 20 short clips of things being squished, on the Wan2.1 14B I2V 480p model.
The trigger phrase is: 'sq41sh squish effect'
See below for some example prompts that have worked very well. Your prompts can follow the same structure as the examples; just change the object being squished.
Recommended Settings:
LoRA strength = 1.0
Embedded guidance scale = 6.0
Flow shift = 5.0
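For convenience, the three recommended settings above can be collected in one place. The dictionary below is just a sketch; the key names are descriptive, not the exact field names of any particular node or workflow, so map them onto your own setup:

```python
# Recommended inference settings for this LoRA (from the description above).
# Key names are descriptive placeholders, not real node/field names.
SQUISH_LORA_SETTINGS = {
    "lora_strength": 1.0,
    "embedded_guidance_scale": 6.0,
    "flow_shift": 5.0,
}

def describe_settings(settings: dict) -> str:
    """Render the settings as a short human-readable summary."""
    return ", ".join(f"{key}={value}" for key, value in settings.items())
```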
Here's a link to the Wan2.1 I2V inference workflow I used to generate these videos: https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_480p_I2V_example_02.json
Prompt Examples:
Example 1: In the video, a miniature rodent is presented. The rodent is held in a person’s hands. The person then presses on the rodent, causing a sq41sh squish effect. The person keeps pressing down on the rodent, further showing the sq41sh squish effect.
Example 2: In the video, a miniature tank is presented. The tank is held in a person’s hands. The person then presses on the tank, causing a sq41sh squish effect. The person keeps pressing down on the tank, further showing the sq41sh squish effect.
As you can see, the prompt structure is identical; only the object being squished changes.
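Since only the object changes between prompts, the examples above can be generated from a template. A minimal Python sketch (the function name `build_squish_prompt` is my own):

```python
TRIGGER = "sq41sh squish effect"

def build_squish_prompt(obj: str) -> str:
    """Build a prompt with the same structure as the examples above,
    substituting only the object being squished."""
    return (
        f"In the video, a miniature {obj} is presented. "
        f"The {obj} is held in a person's hands. "
        f"The person then presses on the {obj}, causing a {TRIGGER}. "
        f"The person keeps pressing down on the {obj}, "
        f"further showing the {TRIGGER}."
    )
```

`build_squish_prompt("tank")` reproduces Example 2 word for word; swap in any other object name to generate new prompts in the same structure.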
Let me know if there are any questions, I'll be happy to help!
Comments (41)
THIS IS SO COOL
Really appreciate it! I was honestly awestruck by the results, so I'll be training a lot more Wan I2V LoRAs. If you have any other LoRA suggestions, I'm all ears!
I am training an AI using musubi-tuner with static images of anime characters, but I have not yet achieved success.
Me too.
This is mind-blowing, well done!
Thanks so much, many more I2V ones coming soon, feel free to leave suggestions for new ones!
Okay you win! This is fantastic and I hope you make more :)
Works fine 😍 👍🏻
This makes me feel something on the inside
May I ask you what software did you use to train this LoRA?
looks amazing - hunyuan please!
Awesome to see the closed source effects being brought to the community.
Wan is so superior in I2V it's not even close (I'm talking versus Hunyuan).
This LoRA is awesome!!! May I ask how much VRAM your training used? I would like to train a Wan LoRA with video as well, but I only have a 4090.
Hello, what's the name of the node to use this LoRA with? I can't find this info anywhere; the workflow in the description doesn't have the LoRA loader node active.
Wan Video Lora Select; I had to add it myself as it's not in the linked workflow.
I had to use this one: https://github.com/kijai/ComfyUI-WanVideoWrapper
But I still haven't gotten it to work. Error after error after error. I fix one error, and another error pops up. Now it finally seems like it's about to work and I'm getting CUDA Out of Memory -_-
Very close to giving up for now.
I've tried multiple times, but I can't seem to get that squeezing effect like in these examples.
Prompt Example:
Example 1: In the video, a miniature rodent is presented. The rodent is held in a person’s hands. The person then presses on the rodent, causing a sq41sh squish effect. The person keeps pressing down on the rodent, further showing the sq41sh squish effect.
Just tried it today. Left the prompt as-is with the tank. The image I used was a 3D model of a fairy, and despite leaving "tank" in the prompt it came out great. Probably won't be doing them often; it took about 5.5 hours on my 4070 12 GB.
@Network23549 took 20 min on my 4060 (8 GB VRAM, 64 GB RAM); 5.5 hr is too much, something is wrong
Any tips or suggestions to speed things up? My 4070 12 GB took about 5.5 hours. Still new to all of this.
It should be much faster. In general, watch your VRAM usage: once it's full (which starts at around 90-95%), generation becomes super slow. Lower the number of frames and/or the resolution until VRAM is no longer full, and it should be a matter of minutes ;) Of course, fewer steps also makes it faster (go from 30 to 20, for example). But I think your problem at the moment is the full VRAM.
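The advice above (trim frames and/or resolution until VRAM stops saturating) can be sketched as a simple ladder of progressively cheaper settings to try. This is purely illustrative; the step sizes and floors below are arbitrary choices, not tuned values for Wan2.1:

```python
def reduction_ladder(frames: int, width: int, height: int):
    """Yield successively cheaper (frames, width, height) settings to try
    when generation is VRAM-bound: trim frames first, then resolution.
    Step sizes and floors are arbitrary illustrations of the advice above."""
    while frames > 17 or width > 320:
        if frames > 17:
            frames = max(17, frames - 16)          # drop frames first
        else:
            width = max(320, width * 3 // 4)       # then shrink resolution
            height = max(320, height * 3 // 4)
        yield frames, width, height
```

Each yielded tuple is the next candidate to retry with; stop as soon as a generation completes without filling VRAM.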
Dude i have a RTX 3080 and it takes 5 min ...
@WhatTheGuy @artavenue Thanks for the suggestions. It was the workflow I was using; I switched to another and it's processing now in about 20 min. Still learning the settings. But 5 minutes, what workflow are you using?
@Network23549 I'm using the standard ComfyUI workflow. nothing fancy
Thanks for creating this oddly satisfying effect. I implemented it in my Discord server as a side project. You can try it free here: https://discord.gg/rgumpJkYc5
When I imported the workflow for the squish effect, the program informed me that I am missing a lot of nodes, like "Wan video sampler" and "WanVideoTextEncoder". Is there some package of nodes I have to download? If so, can you reply here with the link? When I click on "check for missing nodes" it comes up with nothing.
Same issue here. I updated ComfyUI as well but still some items are missing
i've been going absolutely insane trying to get this to work as well. ComfyUI definitely deserves an award for making me feel like an absolute cabbagebrained fool every other time I use it
Here is the one that worked for me as far as getting the nodes to appear: https://github.com/kijai/ComfyUI-WanVideoWrapper
Follow the instructions based on your setup. After I got all of those working, I wrestled with dozens more errors. After tweaking some settings, I think I finally have a generation going that won't be interrupted.
Tried multiple times and with similar looking workflows but can't seem to get all the nodes to load properly
@crickfellas320 It took me forever. I seemingly tried the same thing a few times and it worked. What OS are you on? I tried first on Windows and hit too many snags, so I rebooted into my Linux install.
How has no one used this on Trump yet?
because we aren't obsessed like those terminally political
@axicec What's this "we"? Most people can't go five minutes without mentioning his name. Frankly, I could go the rest of my life without hearing it again, and it'd be too soon. But, I won't stop mentioning it in negative connotations until he fucks off with these damn tariffs..
I'd rather use it on Katy Perry. Choose love not hate, homey!
@Frozen_Sparkles Ew?
Touch grass lmao. Get off reddit
@AutoluxAfter No.. It's the middle of winter..
Gimme deez squishies!
Details
Files
squish_18.safetensors
Mirrors
squish_18.safetensors
105_squish_18.safetensors
Available On (3 platforms)
Same model published on other platforms. May have additional downloads or version variants.