ComfyUI Wan 2.2 SVI, I2V and FLF2V Workflow
Any feedback would be appreciated.
SVI Features:
Sage-attention
Frame Interpolation
Upscale
LoRA Loader
I2V & FLF2V Features:
Sage-attention
Frame Interpolation
Upscale
Florence2
LoRA Loader
Color Match
FBCNN
WAN2.2 I2V Model Downloads
Here are the Q8 versions of the high- and low-noise models:
https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/blob/main/wan2.2_i2v_high_noise_14B_Q8_0.gguf
https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/blob/main/wan2.2_i2v_low_noise_14B_Q8_0.gguf
You might want to download other quantized versions from here:
https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/tree/main
Other versions:
https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/tree/main/I2V
https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/diffusion_models
Text Encoders Downloads
umt5_xxl_fp8_e4m3fn_scaled.safetensors:
https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders
VAE Downloads
https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/vae
CLIP VISION Downloads
LoRA Downloads
SVI LoRAs Download
https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Stable-Video-Infinity/v2.0
NSFW LoRAs Download
NSFW-22-H-e8.safetensors and NSFW-22-L-e8.safetensors:
https://civarchive.com/models/1307155/wan-22-experimental-wan-general-nsfw-model
Live Wallpaper LoRA Download
livewallpaper_wan22_14b_i2v_low_model_0_1_e26.safetensors:
https://civarchive.com/models/1264662/live-wallpaper-style
Lightning LoRAs Download
https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/tree/main
Upscaler Download
4x_foolhardy_Remacri or realesrganX4plusAnime_v1.pt (for anime) or any other upscaler model:
https://civarchive.com/models/147821/realesrganx4plus-anime-6b
https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri
Custom Nodes for SVI:
https://github.com/ltdrdata/ComfyUI-Manager
https://github.com/rgthree/rgthree-comfy
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/city96/ComfyUI-GGUF
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
Custom Nodes for I2V&FLF2V:
https://github.com/ltdrdata/ComfyUI-Manager
https://github.com/rgthree/rgthree-comfy
https://github.com/Miosp/ComfyUI-FBCNN
https://github.com/Smirnov75/ComfyUI-mxToolkit
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/city96/ComfyUI-GGUF
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/kijai/ComfyUI-Florence2
https://github.com/cubiq/ComfyUI_essentials
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
https://github.com/cubiq/ComfyUI_IPAdapter_plus
Changelog
Jan. 2, 2026
Added a 3-KSampler SVI version and minor adjustments to the SVI workflow
Jan. 1, 2026
Added SVI version
Sep. 4, 2025
Added 3 KSampler versions
Sep. 3, 2025
Published initial Wan22_FLF2V (FirstLastFrame2Video) version
Updated Wan22_I2V version
Sep. 2, 2025
Published initial Wan22_I2V (Image2Video) version
Experiment and enjoy!

Comments
3 KSamplers would take this to the next level.
Any way to fix the slow-motion results with this workflow?
Why doesn't it use the photos I use?
Good workflow.
Doesn't work too well for loops (same first and last frame); it only works sometimes with the livewallpaper LoRA.
There's also a memory leak somewhere: after 4-5 generations on Q8, ComfyUI either crashes itself or crashes my PC with memory-allocation errors. I suspect the 'Clean VRAM used' node.
Try changing the upscaler. At least when I use AnimeSharp, it crashes by itself. The memory usage is really terrible, though.
Hello, I'm trying to run a generation and it doesn't work. I get this error: "Florence2Run
CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with TORCH_USE_CUDA_DSA to enable device-side assertions."
I assume this is happening because your GPU architecture is new enough that the stable PyTorch release doesn't yet ship kernels for it. You could try installing a PyTorch nightly build and/or updating your drivers.
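To see whether this really is an architecture-support gap, here is a small diagnostic sketch (not part of the workflow; the function name is illustrative). It compares the GPU's compute capability against the list of architectures the installed PyTorch build was compiled for:

```python
# Diagnostic sketch: does this PyTorch build ship CUDA kernels for this GPU?
import importlib.util


def cuda_arch_status():
    """Return a human-readable status string; never raises."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    if not torch.cuda.is_available():
        return "CUDA is not available to this torch build"
    major, minor = torch.cuda.get_device_capability(0)
    arch = f"sm_{major}{minor}"
    built_for = torch.cuda.get_arch_list()  # e.g. ['sm_80', 'sm_86', ...]
    if arch in built_for:
        return f"{arch} is supported by this torch build"
    return f"{arch} is missing from {built_for}; try a PyTorch nightly wheel"


print(cuda_arch_status())
```

If your GPU's `sm_XY` string is missing from the build's arch list, a "no kernel image is available" error at runtime is the expected symptom.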
At the end of the generation, my virtual memory usage suddenly soars from 20 GB to 100 GB, and it isn't released until the run ends. Is this a problem on my end?
Everything works great, but for some reason the final video won't render. The preview renders fine, but it stops at Preview Video and never renders the final video. Anyone have any ideas?
OK, I did some playing around with the nodes. The three Clean VRAM nodes are in some weird loop. Follow the connections and use just a single Clean VRAM node; this fixed the final video render. It also seems to help with the High-Res Fix memory issue... I got a pause using it, but it finished. I'm running a 4090.
To create an animation where the subject speaks as crisply and clearly as in the image, what should I write in the prompt?
Is there a guide for this one like your other workflows? Sort of looking for something that teaches me how to use Wan from the beginning, lol.
I wouldn't mind a short guide from someone who understands the matter.
Hey, I can't get KSamplerAdvanced to detect SageAttention for some reason... I installed all the nodes and did all the installations here. I hope you answer.
(comfyui\custom_nodes\ComfyUI-KJNodes-main\nodes\model_optimization_nodes.py", line 40, in get_sage_func
from sageattention import sageattn
ModuleNotFoundError: No module named 'sageattention')
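That traceback means the Python interpreter running ComfyUI can't import the `sageattention` module. A quick check sketch (the pip package name `sageattention` is an assumption based on the import in `model_optimization_nodes.py`; with the portable build, make sure you run pip with the embedded interpreter, not the system one):

```python
# Check whether `sageattention` is importable from this interpreter, and if
# not, print the pip command for the exact interpreter that is missing it.
import importlib.util
import sys


def has_sageattention() -> bool:
    """True if the sageattention module can be found by this interpreter."""
    return importlib.util.find_spec("sageattention") is not None


if not has_sageattention():
    # For the ComfyUI portable build, this should point at the embedded
    # interpreter, e.g. python_embeded\python.exe
    print(f"sageattention missing; try: {sys.executable} -m pip install sageattention")
```

Installing the package into your system Python while ComfyUI runs from an embedded one is a common cause of this exact `ModuleNotFoundError`.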
