Resources you need:
📁 Files:
For the base version:
I2V model: wan2.1_i2v_480p_14B_fp8_e4m3fn.safetensors or wan2.1_i2v_720p_14B_fp8_e4m3fn.safetensors
in models/diffusion_models
CLIP: umt5_xxl_fp8_e4m3fn_scaled.safetensors
in models/clip
For the GGUF version:
>24 GB VRAM: Q8_0
16 GB VRAM: Q5_K_M
<12 GB VRAM: Q3_K_S
I2V quant model: wan2.1-i2v-14b-480p-QX.gguf or wan2.1-i2v-14b-720p-QX.gguf (replace QX with the quant matching your VRAM, per the guide above)
in models/diffusion_models
Quant CLIP: umt5-xxl-encoder-QX.gguf
in models/clip
CLIP-VISION: clip_vision_h.safetensors
in models/clip_vision
VAE: wan_2.1_vae.safetensors
in models/vae
Any upscale model:
- Realistic: RealESRGAN_x4plus.pth
- Anime: RealESRGAN_x4plus_anime_6B.pth
in models/upscale_models
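
If you want to double-check your setup, here is a minimal sketch that verifies the files above landed in the right folders. The ComfyUI install path and the 480p/fp8 file names are assumptions; swap in the GGUF names or your own path as needed.

```python
import os

# Assumed ComfyUI install location; change this to your own path.
COMFYUI_DIR = os.path.expanduser("~/ComfyUI")

# Folder -> expected files, taken from the list above (base 480p version
# shown; substitute the .gguf names if you use the GGUF variant).
REQUIRED = {
    "models/diffusion_models": ["wan2.1_i2v_480p_14B_fp8_e4m3fn.safetensors"],
    "models/clip": ["umt5_xxl_fp8_e4m3fn_scaled.safetensors"],
    "models/clip_vision": ["clip_vision_h.safetensors"],
    "models/vae": ["wan_2.1_vae.safetensors"],
    "models/upscale_models": ["RealESRGAN_x4plus.pth"],
}

for subdir, files in REQUIRED.items():
    for name in files:
        path = os.path.join(COMFYUI_DIR, subdir, name)
        print(f"[{'OK' if os.path.isfile(path) else 'MISSING'}] {path}")
```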
📦 Custom Nodes:

Description
Adds layer skip for improved video quality: fewer visual glitches, sharper image, better hair and hand detail (a rough sketch of the idea follows below).
Thanks to synalon973 for helping with some of the testing.
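
For the curious: "layer skip" refers to the skip-layer-guidance idea, where an extra denoising pass is run with a chosen transformer block bypassed and the final prediction is steered away from that degraded pass. The sketch below only illustrates the concept and is not the actual node's code; the block index, model structure, and combination formula are all assumptions.

```python
import torch
import torch.nn as nn

def run_blocks(blocks: nn.ModuleList, x: torch.Tensor, skip=()) -> torch.Tensor:
    # Run the transformer stack, bypassing any block whose index is in `skip`.
    for i, block in enumerate(blocks):
        if i in skip:
            continue  # skipped block contributes nothing on this pass
        x = block(x)
    return x

def skip_layer_guidance(blocks, x, slg_scale=1.0, skip=(9,)):
    # Full pass vs. a "degraded" pass with one block skipped; push the
    # prediction away from the degraded one (scale and index are assumptions).
    full = run_blocks(blocks, x)
    degraded = run_blocks(blocks, x, skip=skip)
    return full + slg_scale * (full - degraded)

# Toy usage: 12 stand-in "blocks" applied to a dummy latent.
blocks = nn.ModuleList(nn.Linear(8, 8) for _ in range(12))
out = skip_layer_guidance(blocks, torch.randn(1, 8))
```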
