Video Generation on a Laptop
Hello!
This workflow uses a few custom nodes from Kijai and other sources to run smoothly on an RTX 3050 Laptop GPU with just 4 GB of VRAM. It's optimized for generation length, visual quality, and overall usability.
Workflow Info
This is a set of ComfyUI workflows capable of running:
2.0-ALL -- Includes all workflows:
Wan2.1 T2V
Wan2.1 I2V
Wan2.1 Vace
Wan2.1 First Frame Last Frame
Funcontrol (experimental)
Funcameraimage (experimental)
Coming soon: updated inpainting experimentals
Results (Performance)
*to be updated
Video Explainer (Vace edition):
Installation Guide (V1.8):
DOWNLOAD SECTION
Nodes Used (install via ComfyUI Manager or the links below)
GGUF
WanVideoWrapper
Tiled KSampler
KJNodes
Video Helper Suite
rgthree-comfy
Note: rgthree is only needed for the Stack Lora Loader.
Model Downloads
*These are conversions of the original models that run on less VRAM.
WAN GGUF Models
Most versions
Alternative for Image2Video
Faster/better quants for I2V
WAN2.1 1.3B GGUF
Fun, inpainting, T2V, Vace
WAN2.1 Fun-Control 14B GGUF
Fun-Control
WAN2.1 Fun-Camera-Control 14B GGUF
Fun-Camera-Control
All these GGUF conversions are done by:
https://huggingface.co/calcuis
https://huggingface.co/QuantStack
*If you can't find the model you are looking for, check out their profiles!
Additional Required Files (do not download from Model Downloads)
What to Download & How to Use It
Quantization Tips:
Q_5 → best balance of speed and quality
Q_3_K_M → fast and fairly accurate
Q_2_K → usable, but with some quality loss
1.3B models → super fast, lower detail (good for testing)
14B models → high quality, but slower and VRAM-heavy
Reminder: lower "Q" = faster and less VRAM, but lower quality; higher "Q" = better quality, but more VRAM and slower speed.
Model Types & What They Do
Wan Video → generates video from a text prompt (Text-to-Video)
Wan VACE → generates video from a single image (Image-to-Video)
Wan2.1 Fun Control → adds control inputs like depth, pose, or edges for guided video generation
Wan2.1 Fun Camera → simulates camera movements (zoom, pan, etc.) for dynamic video from static input
Wan2.1 Fun InP → allows video inpainting (fix or edit specific regions in video frames)
First–Last Frame → generates a video by interpolating between a start and end image
File Placement Guide
All WAN model .gguf files → place them in your ComfyUI/models/diffusion_models/ folder.
⚠️ Always check the model's download page for instructions: converted models often list the exact folder structure or dependencies.
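On Linux/macOS, the placement step can be sketched as a short shell snippet. The install path and model filename below are assumptions; adjust both to match your setup.

```shell
# Sketch: move a downloaded GGUF into ComfyUI's diffusion_models folder.
# COMFYUI_DIR and MODEL_FILE are assumptions -- change them to match your setup.
COMFYUI_DIR="$HOME/ComfyUI"
MODEL_FILE="$HOME/Downloads/wan2.1-t2v-1.3b-Q5_K_M.gguf"   # hypothetical filename

mkdir -p "$COMFYUI_DIR/models/diffusion_models"
mv "$MODEL_FILE" "$COMFYUI_DIR/models/diffusion_models/"
```

After restarting ComfyUI (or refreshing the node list), the file should appear in the GGUF loader node's model dropdown.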
Helpful Sources:
Installing Triton: https://www.patreon.com/posts/easy-guide-sage-124253103
Common Errors: https://civarchive.com/articles/17240
Reddit Threads:
https://www.reddit.com/r/StableDiffusion/comments/1j1r791/wan_21_comfyui_prompting_tips
https://www.reddit.com/r/comfyui/comments/1j1ieqd/going_to_do_a_detailed_wan_guide_post_including
Performance Tips
To improve speed further, use:
Xformers
Sage Attention
Triton
Adjusted internal settings for optimization
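As a rough sketch, the three speed-ups above are installed with pip into the same Python environment ComfyUI runs in. The package names below are the usual PyPI ones, but compatible versions depend on your torch/CUDA build, so check each project's install notes first (Triton in particular has no official Windows wheel).

```shell
# Sketch: install optional attention speed-ups into ComfyUI's Python env.
# Exact versions depend on your torch/CUDA build; these are the common PyPI names.
pip install xformers         # memory-efficient attention
pip install triton           # Linux; Windows users typically need a community build
pip install sageattention    # requires triton to be installed first
```

If sageattention refuses to install or import, you can still run the workflow by disabling it on the sampler node (see the comments below for the same fix).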
If you have any questions or need help, feel free to reach out!
Hope this helps you generate realistic AI video with just a laptop!
Comments
WanImageToVideo
input must be 4-dimensional ..... solutions?
I can't recreate the error. Maybe try a different input image, or make sure it's resized properly. If you just opened the workflow, maybe you didn't select an input image; it could be a few things.
@The_frizzy1 The images are well resized, and an image is loaded. Maybe it's because I'm on DirectML with an AMD GPU...
Asymmetric Tiled KSampler
No module named 'sageattention'
Any solutions?
Got this error too. Did you ever figure it out?
I could never get sageattention to work. It's something you have to install using pip, along with Triton.
What you CAN do is go to that node and disable sageattention.
Is this workflow outdated? None of these models will load for me in this workflow.
Billions of errors: node errors, Triton errors, torch errors, version errors... it never finishes.
Yeah, that's how it is most of the time. At this point Wan2.1 is outdated, and this workflow is especially hard to use with newer versions of ComfyUI. I'd really recommend you check out Wan2.2; I have several videos and workflows for that too.
