ComfyUI Wan 2.2 SVI, I2V and FLF2V Workflow
Any feedback would be appreciated.
SVI Features:
SageAttention
Frame Interpolation
Upscale
LoRA Loader
I2V & FLF2V Features:
SageAttention
Frame Interpolation
Upscale
Florence2
LoRA Loader
Color Match
FBCNN
WAN2.2 I2V Model Downloads
Here are the Q8 versions of the High and Low Noise models:
https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/blob/main/wan2.2_i2v_high_noise_14B_Q8_0.gguf
https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/blob/main/wan2.2_i2v_low_noise_14B_Q8_0.gguf
You might want to download other quantized versions from here:
https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/tree/main
Other versions:
https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/tree/main/I2V
https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/diffusion_models
Text Encoders Downloads
umt5_xxl_fp8_e4m3fn_scaled.safetensors:
https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/text_encoders
VAE Downloads
https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/tree/main/split_files/vae
CLIP VISION Downloads
LoRA Downloads
SVI LoRAs Download
https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Stable-Video-Infinity/v2.0
NSFW LoRAs Download
NSFW-22-H-e8.safetensors and NSFW-22-L-e8.safetensors:
https://civarchive.com/models/1307155/wan-22-experimental-wan-general-nsfw-model
Live Wallpaper LoRA Download
livewallpaper_wan22_14b_i2v_low_model_0_1_e26.safetensors:
https://civarchive.com/models/1264662/live-wallpaper-style
Lightning LoRAs Download
https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/tree/main
Upscaler Download
4x_foolhardy_Remacri or realesrganX4plusAnime_v1.pt (for anime) or any other upscaler model:
https://civarchive.com/models/147821/realesrganx4plus-anime-6b
https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri
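Once downloaded, the files go into ComfyUI's standard model folders. A minimal sketch of the expected layout, assuming a default ComfyUI install (adjust the root path to yours; GGUF UNet files can also go in models/unet):

```shell
# Sketch of the default ComfyUI model folder layout (COMFY is your install root).
COMFY=./ComfyUI
mkdir -p "$COMFY/models/diffusion_models"  # wan2.2_i2v_high/low_noise_14B_Q8_0.gguf
mkdir -p "$COMFY/models/text_encoders"     # umt5_xxl_fp8_e4m3fn_scaled.safetensors
mkdir -p "$COMFY/models/vae"               # the Wan 2.2 VAE file
mkdir -p "$COMFY/models/loras"             # SVI, Lightning, and other LoRAs
mkdir -p "$COMFY/models/upscale_models"    # 4x_foolhardy_Remacri, RealESRGAN, etc.
ls "$COMFY/models"
```

Restart ComfyUI (or refresh the node lists) after adding files so the loaders can see them.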
Custom Nodes for SVI:
https://github.com/ltdrdata/ComfyUI-Manager
https://github.com/rgthree/rgthree-comfy
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/city96/ComfyUI-GGUF
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
Custom Nodes for I2V&FLF2V:
https://github.com/ltdrdata/ComfyUI-Manager
https://github.com/rgthree/rgthree-comfy
https://github.com/Miosp/ComfyUI-FBCNN
https://github.com/Smirnov75/ComfyUI-mxToolkit
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/city96/ComfyUI-GGUF
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/kijai/ComfyUI-Florence2
https://github.com/cubiq/ComfyUI_essentials
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
https://github.com/cubiq/ComfyUI_IPAdapter_plus
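All of these can be installed through ComfyUI-Manager by name. If you prefer the command line, here is a sketch that prints the clone commands for the I2V & FLF2V node set (run the clones from inside ComfyUI/custom_nodes; commands are printed rather than executed so you can review them first):

```shell
# Print git clone commands for the I2V/FLF2V custom node set.
repos="
https://github.com/ltdrdata/ComfyUI-Manager
https://github.com/rgthree/rgthree-comfy
https://github.com/Miosp/ComfyUI-FBCNN
https://github.com/Smirnov75/ComfyUI-mxToolkit
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/city96/ComfyUI-GGUF
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/kijai/ComfyUI-Florence2
https://github.com/cubiq/ComfyUI_essentials
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
https://github.com/cubiq/ComfyUI_IPAdapter_plus
"
for r in $repos; do
  echo "git clone $r"
done
```

After cloning, restart ComfyUI so the new nodes register.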
Jan. 2, 2026
Added 3Sampler SVI version and minor adjustment of SVI workflow
Jan. 1, 2026
Added SVI version
Sep. 4, 2025
Added 3 KSampler versions
Sep. 3, 2025
Published initial Wan22_FLF2V (FirstLastFrame2Video) Version
Updated Wan22_I2V version
Sep. 2, 2025
Published initial Wan22_I2V (Image2Video) version
Experiment and enjoy!

Comments (17)
Good. Does the job, and the layout is good and intuitive.
You need to put NAG (Normalized Attention Guidance) between the LightX2V LoRA node and the Scheduled CFG Guidance node. The negative prompt is currently a placebo and does nothing at the default CFG 1.0 setting.
Is this for the I2V or FLF2V workflow?
@drsammyd As long as your CFG is 1, as it is with lightx2v, you need NAG for the negative prompt to do anything. Doesn't matter what kind of generation it is. You need two NAG nodes, one for high and one for low. If using SVI, you need them for every video in the chain.
Hello!
Is the ComfyUI version important? I can't install WanImageToVideoSVIPro or UnetLoaderGGUF :(
Hi, you need the nightly version of ComfyUI-KJNodes for the WanImageToVideoSVIPro node since it's still very new. For the GGUF UNet loader you need ComfyUI-GGUF.
@Legendaer Thanks boss, I think I understand. Should I wait a little longer, or install the nightly version of KJNodes?
Sorry for the stupid questions. For some reason, learning ComfyUI is really hard for me... really hard :(
@mifink94 Well, that depends on how patient you are, since it’s uncertain when the version number will be incremented.
@Legendaer It's clear, thank you very much for the help!
Hello, I'd like to ask why SageAttention isn't working. Everything works fine in other similar workflows.
I have SageAttention 2.2.0 and Triton 3.5 installed.
And one more question: how can I speed up the animation? It's too slow.
Try lowering Shift, making the prompt denser, or reducing the video length per segment. As for SageAttention, no idea why it wouldn't work if it works elsewhere.
Also not sure why SageAttention wouldn't work for you if it works elsewhere, since I haven't had any issues myself. To speed up the animation, you can try what the other comment suggested; but if you just want a flat-out speed increase, raise the fps. Also, the LightX2V LoRA tends to reduce movement, so you could either disable it, which would require a lot more steps, or use the 3Sampler workflow, where the first sampler runs without the LoRA.
@Legendaer I apologize for my inattention...
Everything works; in automatic mode it had simply selected options that aren't installed in my Python environment.
I only have SageAttention 2.2.0 and triton-windows 3.5.1 installed.
I just needed to manually select sageattn_qk_int8_pv_fp16_triton since I don't have the other options.
Now everything works great.
I use DasiwaWAN22I2V14BTastysinV8_q6High.gguf and DasiwaWAN22I2V14BTastysinV8_q6Low.gguf
@Legendaer And yes, I use the 3 KSampler version; I liked it better too.
Thank you for the great workflow!
But is setting "start percentage=0" in every sampler fine?
Isn't that the same as setting "start at=0" in the general high and low KSamplers?
Excellent stuff, thank you! My next mission in life, acquire more RAM and VRAM. Then I will be complete.
There are some things in life that you can never have enough of :)
