CivArchive
    Wan 2.2 – SVI Pro with LoRA – v2.1
    NSFW

    This is a workflow for Wan 2.2 SVI Pro with LoRA support. It has a "loop" that lets you set the number of passes without having to restructure the workflow. Only the section with the LoRA files needs to be expanded if you need more than one LoRA file per section or a higher total number of LoRA files.
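    Conceptually, the loop boils down to mapping each pass index to the LoRA files that should be active for that pass. The sketch below is plain Python for illustration only, not ComfyUI node code, and all filenames are hypothetical:

```python
# Conceptual sketch only: shows how the workflow's loop can map each
# pass to its own LoRA files. Filenames and structure are hypothetical.
loras_per_pass = {
    1: ["character_style.safetensors"],
    2: ["motion_helper.safetensors"],
}

def active_loras(pass_index):
    """Return the LoRA files to load for a given pass (empty if none)."""
    return loras_per_pass.get(pass_index, [])

def run_passes(num_passes):
    """Plan the configured number of passes without restructuring anything."""
    plan = []
    for i in range(1, num_passes + 1):
        plan.append((i, active_loras(i)))
    return plan
```

    Changing the pass count only changes the loop bound; the per-pass LoRA table stays put, which is why only the LoRA section of the workflow ever needs expanding.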

    Model Links:

    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_HIGH_lora_rank_128_fp16.safetensors

    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_LOW_lora_rank_128_fp16.safetensors

    Used Models:

    https://civarchive.com/models/2190659/dasiwa-wan-22-i2v-14b-tastysin-v8-or-lightspeed-or-gguf

    Alternatively:

    https://huggingface.co/jayn7/WAN2.2-I2V_A14B-DISTILL-LIGHTX2V-4STEP-GGUF/tree/main

    https://huggingface.co/jayn7/WAN2.2-I2V_A14B-DISTILL-LIGHTX2V-4STEP-GGUF/tree/main/high_noise_1030?show_file_info=high_noise_1030%2Fwan2.2_i2v_A14b_high_noise_lightx2v_4step_1030-Q6_K.gguf

    https://huggingface.co/jayn7/WAN2.2-I2V_A14B-DISTILL-LIGHTX2V-4STEP-GGUF/tree/main/low_noise?show_file_info=low_noise%2Fwan2.2_i2v_A14b_low_noise_lightx2v_4step-Q6_K.gguf

    Setup:

    1. Do your main settings

    2. Write your prompts

    3. Select your LoRAs (if needed)

    Go for it!

    Info:

    This workflow is configured for 16 GB of VRAM. If you have less, increase the "blocks_to_swap" value or use a more heavily quantized model.

    Description

    Essentially the same workflow as v2.0, but with more customization options:

    Color Correction
    Color Match
    Upscale with Model
    Image Sharpening
    Improved presets for faster video creation

    FAQ

    Comments (17)

    acedelgado143
    Feb 7, 2026
    CivitAI

    You know Kijai made a multi-select LoRA node for WanVideoWrapper a while ago... you could just link 2-3 of those together and have 8 or 12 LoRAs on demand without that ungodly spaghetti monster and a separate bypasser toggle.

    Thalion7
    Author
    Feb 7, 2026

    Well, I always disable the lines in ComfyUI. But does Kijai's LoRA loader know how to assign each LoRA to each pass? You can enable and disable them, but you also need to tell the workflow which pass is currently running. I have to check this out.

    mmikemiller823390
    Feb 7, 2026

    Please tell me how to get the "WanVideoSVIproEmbeds" node? I updated all the nodes in ComfyUI, but this node from WanVideoWrapper isn't updating. Please help.

    Thalion7
    Author
    Feb 8, 2026

    Update the WanVideoWrapper.

    Close ComfyUI, then go to your ComfyUI folder and into the custom_nodes folder. Move the "ComfyUI-WanVideoWrapper" folder out of the way. You could delete it, but move it for safety. Then open a terminal in the custom_nodes folder. On Linux, simply right-click the custom_nodes folder and select "Open in Terminal"; Windows should be similar. Then run this command:

    git clone https://github.com/kijai/ComfyUI-WanVideoWrapper.git

    If that doesn't work, go to https://github.com/kijai/ComfyUI-WanVideoWrapper, download the ZIP file, and extract it into your custom_nodes folder. Now start ComfyUI. If it still doesn't work, update ComfyUI. And always make a backup of your virtual environment (venv) whenever possible.

    bionovafood863
    Mar 11, 2026

    Hello, I can't make a video longer than 5 seconds. It gives me an error....

    Error in WanVideoSVIProEmbeds - mask size mismatch

    PROBLEM: The node expects a mask of [1, 24, 4, 120, 80] (950400 elements), but receives data of a different size.

    How can I make it 7 or 8 seconds?

    Thalion7
    Author
    Mar 12, 2026

    Did you try my preset example?

    Is your resolution divisible by 16?

    Have you read all the information in the red boxes?

    Have you updated all the required nodes and ComfyUI?
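    These shape errors can often be caught before rendering. Below is a small sanity check, assuming the usual Wan constraints the checklist above points at (spatial dimensions divisible by 16, frame counts of the form 4n + 1); this helper is an illustration, not part of the workflow:

```python
# Hypothetical pre-flight check for Wan 2.2 video settings. Assumes the
# common constraints: width/height divisible by 16, frames = 4*n + 1.
def check_settings(width, height, frames):
    problems = []
    if width % 16:
        problems.append(f"width {width} is not divisible by 16")
    if height % 16:
        problems.append(f"height {height} is not divisible by 16")
    if (frames - 1) % 4:
        problems.append(f"frames {frames} is not of the form 4*n + 1")
    return problems

# Example: 480x720 at 81 frames passes; 82 frames does not.
```

    If a longer clip fails while 5 seconds works, the frame count is the first thing to check against the 4n + 1 rule.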

    Shinigami0407
    Mar 23, 2026

    I don't get it. Whenever I try any workflow I encounter so many errors, and after fixing some I get stuck on one. It's so frustrating.

    Shinigami0407
    Mar 23, 2026

    This one specifically:

    torch._inductor.exc.TritonMissing: Cannot find a working triton installation. Either the package is not installed or it is too old. More information on installing Triton can be found at: https://github.com/triton-lang/triton

    Thalion7
    Author
    Mar 23, 2026

    @Shinigami0407 You need to install Triton. On Linux you won't have these problems, so you must be on Windows. Google "install sageattention and triton comfyui". ChatGPT will also be helpful.

    Shinigami0407
    Mar 24, 2026

    @Thalion7 I did try that, but I wasn't able to fix the issue. I will see if I can resolve it somehow. Thank you for your quick answer.

    dewiant
    Mar 26, 2026

    Hi, may I ask for your help? I've tried to follow your comments and installed the models recommended in the workflow, but I have a recurring error that I can't solve. Do you know where I should look for the root cause?

    Error message:

    RuntimeError: shape '[1, 31, 4, 90, 60]' is invalid for input of size 685800

    I've already tried changing the picture resolution, the input picture HxW, the upscale video settings... I always made sure each dimension is divisible by 16.

    I'm out of ideas now :(

    dewiant
    Mar 26, 2026

    OK, I guess that's something with my settings.

    Now I'm trying your sample and it's progressing, more or less...

    With the sample I'm stuck on a CUDA memory allocation error (cudaErrorMemoryAllocation).

    I do have 12 GB of VRAM instead of 16, but I've increased blocks_to_swap as suggested and lowered the input image resolution... Hmm. I'll have to work on that more.

    Thalion7
    Author
    Mar 26, 2026

    Did you update all nodes and ComfyUI? If not, make a backup first and update.

    dewiant
    Mar 26, 2026

    @Thalion7 yeye, all nodes are up to date according to ComfyUI Manager. I've found my first rookie mistake: I downloaded the wrong model. Q8, omfg... Now I'm downloading Q4 and will test with that one.

    Sorry for the trouble :(
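    For context, a rough back-of-the-envelope on why a Q8 14B model is tight on a 12 GB card; the bits-per-weight figures below are approximate GGUF averages, not exact spec values:

```python
# Rough size estimate for a 14B-parameter GGUF model at various
# quantization levels. Bits-per-weight values are approximations.
PARAMS = 14e9

def model_size_gb(bits_per_weight):
    return PARAMS * bits_per_weight / 8 / 1e9

approx_bpw = {"Q8_0": 8.5, "Q6_K": 6.6, "Q4_K_M": 4.8}
sizes = {q: round(model_size_gb(b), 1) for q, b in approx_bpw.items()}
# A Q8 file alone roughly fills a 12 GB card before activations,
# the text encoder, and the VAE are even loaded; Q4 leaves headroom.
```

    This is why dropping to Q4 (or leaning harder on blocks_to_swap) is usually the fix for out-of-memory errors on 12 GB cards.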

    Thalion7
    Author
    Mar 26, 2026

    @dewiant No problem. :-)

    Thalion7
    Author
    Mar 26, 2026

    @dewiant But maybe you will have more fun with my LTX 2.3 workflow. WAN still makes slightly better animations, but LTX is faster, supports higher resolutions, and takes voice input. So it's more fun to use :-)

    dewiant
    Mar 26, 2026

    @Thalion7 I guess I'll check that out next week if I have more time :) thanks for the tip!

    I still have the CUDA memory error here, though. My card is a 4070 Super with 12 GB VRAM. The files I'm currently using are:

    DasiwaWAN22I2V14B v8 q4 (high and low)

    SVI_v2_PRO_Wan2.2-I2V-A14B LoRA fp16 (high and low)

    Wan2_1_VAE_bf16

    umt5-xxl-enc-bf16

    Settings are:

    Input: 480x720

    Frames: 81

    Frame rate: 12

    Blocks_to_swap: 30

    What am I setting wrong that it keeps showing:

    Cuda error: out of memory

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,321
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/6/2026
    Updated
    5/17/2026
    Deleted
    -

    Files

    wan22SVIProWithLora_v21.zip

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)