CivArchive

    SVI Extend

    https://github.com/vita-epfl/Stable-Video-Infinity/tree/svi_wan22

    Create videos and extend them seamlessly using SVI.

    The following SVI LoRAs are mandatory:

    HIGH: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_HIGH_lora_rank_128_fp16.safetensors

    LOW: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_LOW_lora_rank_128_fp16.safetensors

    • switch between default behaviour, anchor_samples and end_frames within the same subgraphs

    • connect an image to a part and enable the respective toggles to use end_frames or anchor_samples

    NEW! v3

    Extend existing videos using https://github.com/wallen0322/ComfyUI-Wan22FMLF

    • enable "video extension" toggle inside the settings

    • uses source video resolution by default

      • rescale video using the megapixel slider by enabling "video rescale" toggle

    • use the included version of the nodes from the .zip, or download the latest version straight from the repo if issues arise

    More info inside the workflow.

    AIO i2v+t2v

    All-in-One workflow for basic WAN 2.2 video generation.

    The following features are included:

    • Switch seamlessly between 2- and 3-sampler solutions

    • Toggle between i2v or t2v

    • Postprod

      • Facedetailer

        • uses the t2v model + LoRA for inpainting - the needed resources are linked inside the workflow

      • Toggle between GIMM VFI and RIFE VFI Interpolation

      • Upscale

        • Tensorrt Upscale with Model

        • Basic Video Upscale with Model

        • RTX Video Super Resolution Upscale (insanely fast for decent quality)

      • Frame Clipper

      • Seamless Loops using custom RIFE nodes https://github.com/Artificial-Sweetener/comfyui-WhiteRabbit

    Upscale + Interpolate

    I recommend using this workflow instead of upscaling inside the generation workflows: you never know in advance what results you will get, and you can end up upscaling a bad video and wasting time. I included toggles so you cannot enable multiple interpolation or upscale nodes at once by mistake.

    This includes:

    MMAudio

    • added Audio combine node

      • combine audio from an existing video with the generated audio on top

      • generate NSFW audio with the NSFW model, then combine that video with another audio track generated by the base model for background noise

    • removed interpolation for easier and faster audio generation - you have the following options:

      • upload raw unupscaled video to MMAudio Video node and upscaled video to Combine video node

      • upload upscaled video to both nodes but lower custom_width and custom_height of the MMAudio video node to about half for faster generation and to prevent VRAM issues

      • upload raw video to both nodes and upscale afterwards
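
    As a rough sketch of the "lower to about half" option above - the snap to a multiple of 16 is my own safety assumption (many video models want dimensions divisible by 8 or 16), not something the MMAudio node requires:

```python
def half_resolution(width, height, multiple=16):
    """Halve a video's dimensions for faster MMAudio generation,
    snapping down to a multiple of 16 (assumed safe value)."""
    w = (width // 2) // multiple * multiple
    h = (height // 2) // multiple * multiple
    return w, h

# Example values for the MMAudio node's custom_width / custom_height:
print(half_resolution(1280, 720))   # (640, 352)
print(half_resolution(1920, 1080))  # (960, 528)
```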

    Inspired by https://civarchive.com/models/2137833

    The following resources are necessary (place them in ComfyUI\models\mmaudio):

    https://huggingface.co/phazei/NSFW_MMaudio/resolve/main/mmaudio_large_44k_nsfw_gold_8.5k_final_fp16.safetensors?download=true

    https://huggingface.co/Kijai/MMAudio_safetensors/resolve/5984623e6b436818c6ff287ef6eec93e3e05aa3f/mmaudio_vae_44k_fp16.safetensors

    https://huggingface.co/Kijai/MMAudio_safetensors/resolve/main/mmaudio_synchformer_fp16.safetensors

    Description

    Thanks to iLegoLoon for the great idea of using bus nodes in combination with subgraphs for easy video extension! Give him a like!

    https://civitai.com/models/1866565?modelVersionId=2559451

    This approach allows for easy endless extensions by just copying the subgraphs and connecting them accordingly.

    I made this workflow with his idea in mind.

    • tested on ComfyUI 0.3.62 / 0.5 / 0.12.3

    • exposed the variables for compatibility with older ComfyUI versions (<0.4)

    • added an anchor_samplers switch (more info inside the workflow)

      • This switch allows you to use a new reference frame for an extension

        • e.g., a character has its back towards the viewer but turns around in the extended video. To recover the face and features, you can insert a frame that shows the character's face.

    • added some usability tweaks and centralized the settings


    I highly recommend using a separate workflow for interpolation + upscaling, but I included it anyway.

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    930
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/28/2026
    Updated
    4/28/2026
    Deleted
    -

    Files

    wan22I2vComfyuiWorkflow_sviExtend.zip

    Mirrors

    wan22I2vComfyuiWorkflow_sviExtend.zip
