Updated for Wan 2.2
New version no longer uses a smooth pass.
(There isn't a small 1B version for Wan 2.2 this time.)
This workflow primarily uses GGUF-quantized models to reduce VRAM usage where possible. The current version runs comfortably on 16 GB of VRAM when using the 4-bit (q4_k_m) models.
Models Needed
- Goes into models/unet
- Goes into models/text_encoders
- Goes into models/vae
- 4-step Lightning LoRAs: go into models/loras
- Any good upscaler model (I recommend RealEsrgan_2xPlus): goes into models/upscale_models
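The folder list above maps onto a standard ComfyUI models directory. Here is a minimal sketch of that layout; the ComfyUI root path is an assumption, and no model filenames are shown because the page does not list them:

```shell
# Hypothetical ComfyUI root; adjust to your install location.
COMFY=./ComfyUI

# Create the destination folders named in the list above.
mkdir -p "$COMFY/models/unet" \
         "$COMFY/models/text_encoders" \
         "$COMFY/models/vae" \
         "$COMFY/models/loras" \
         "$COMFY/models/upscale_models"

# Verify the layout.
ls "$COMFY/models"
```

Downloaded files then go into the matching folder, e.g. the GGUF diffusion models into `models/unet` and the Lightning LoRAs into `models/loras`.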
Description
Update for Wan 2.2
No smooth pass this time. For 2.2 there isn't a very small model that would be well suited, and with the GGUF quants and the 4-step LoRA, this new workflow runs faster than the Wan 2.1 versions did.
Details
Downloads
684
Platform
CivitAI
Platform Status
Available
Created
10/17/2025
Updated
10/26/2025
Deleted
-
Files
wanI2v_v14.zip
Mirrors
CivitAI (1 mirror)