CivArchive
    Wan2.2 I2V SVI Workflow Kenpechi - v2.0
    NSFW

    This is the SVI 2.0 PRO version.

    v3.5 12-section fixed - An error was discovered where the prompt intended for "Section 8" was incorrectly duplicated from the 6th prompt. This issue has been fixed, so please reinstall the workflow.

    Thank you so much to @MarcanOlsson for discovering the issue!

    v3.5 12-Sec - This version is based on v3.5 and enables merging videos across 12 generation sections.

    When actually generating the video, you will notice more color shift than with 6 generation sections. While 6 sections generally maintain better quality in practice, the ability to chain 12 sections offers advantages depending on the application, so we decided to add this version.

    The 12-Section version makes the workflow much larger, so unless you need more than 7 generation sections, I recommend using the standard v3.5.

    v3.5 - The subgraph specification in the model input area has been deprecated and reverted to the v2 specification. Additionally, it is now possible to generate videos for only the first section.

    We received multiple reports from the community that models such as CLIP and VAE were not functioning correctly due to the subgraph, and we also received feedback that the model placement was unclear. Therefore, we decided to revert to the v2 specification.

    However, the subgraph of the generation section, which includes the sampler, remains unchanged. We believe that performing the generation process within the subgraph serves to prevent a decrease in generation quality. While the model area issue is simply a layout issue, the subgraph cannot be removed because it affects the quality of the generation section. If there are issues with the subgraph itself, please avoid using this workflow.

    Regarding the video generation for only the first section, given the nature of SVI, we initially omitted it, believing that a single generation was unnecessary. However, we received feedback from the community requesting that a video be generated for each section, and that videos be added gradually while reviewing the generated videos. This was a very logical approach, so we added the "first video" and modified the workflow to allow videos to be accumulated while keeping the seed value fixed.

    v3.4 - Layout adjustments.

    v3.3 - Changed the seed node from "CR Seed" to "Seed (rgthree)". This change was made to align with commonly used custom nodes in this workflow, following reports of implementation issues with CR Seed.

    v3.2 - Modified the layout to make it easier to disable Lightx2v Lora.

    v3.1 - Modified the layout to make it easier to disable the Sage Attention node.

    v3.0 released.

    Video length can now be changed in each of the six generation sections, providing more flexible control over video content.

    The frame rate (fps) was previously fixed at 16fps, but can now be changed arbitrarily. Accordingly, the RIFE-VFI node's scaling factor can now be changed in the input area.
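    The relationship between the base frame rate and the RIFE-VFI scaling factor can be sketched as follows. This is only an illustration of the arithmetic; the function name and parameters are my assumptions, not the workflow's actual node settings.

    ```python
    def output_fps(base_fps: int, rife_multiplier: int) -> int:
        """RIFE-VFI inserts interpolated frames between generated frames,
        multiplying the frame count, so the effective output frame rate
        is the base frame rate times the interpolation multiplier."""
        return base_fps * rife_multiplier

    # e.g. the previously fixed 16 fps with a 2x multiplier gives 32 fps output
    print(output_fps(16, 2))  # 32
    ```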

    GGUF model loader is now included as standard.

    Version 2.0 changed the number of generation sections to six.

    The layout has also been updated, allowing Seed node input to be processed in one place. Furthermore, the layout has been significantly redesigned to unify the user experience with Painter I2V versions, reducing the input burden. With this change, the wildcard prompt input method has been discontinued.

    Please note that the explanations in this workflow are solely my personal opinions. I do not have expertise in AI generation, so some information may be inaccurate.

    The main goal of this workflow is to achieve compact operation when performing repeated generation. It minimizes screen scrolling during operations such as prompt input, input image selection, specifying time, number of steps, resolution, and, most importantly, LORA selection. To further enhance compactness, all nodes are fixed to prevent accidental operation.

    Links to the Models and LORAs and nodes used in this workflow

    SVI LORA :

    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_HIGH_lora_rank_128_fp16.safetensors

    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_LOW_lora_rank_128_fp16.safetensors

    Wan Advanced I2V (Ultimate) :

    https://github.com/wallen0322/ComfyUI-Wan22FMLF

    This node was updated on January 27th, but the version available from ComfyUI Manager may be older. While the older version will still work, you won't be able to set "SVI Motion Strength," and you'll likely see more color misalignment. Therefore, if you can use git clone, we recommend installing the latest version.

    Links to the Basic models of the Wan2.2

    CLIP:

    https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders

    VAE:

    https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/vae

    CLIP Vision :

    https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/clip_vision

    You can generate up to six sections, and each section is assigned its own seed value. Normally, clicking "Randomize Every Time" displays "-1" and generation is random; in this case, the seed actually used for each section is shown at the bottom of the screen. If you want to fix a seed value, click the seed field and enter the value directly. For example, you can fix the first and second sections and regenerate the third section onward randomly. However, regenerating a section earlier than the one you want to fix changes the final frame it passes on, so subsequent sections cannot be fixed. As a general rule, only regenerate sections after the one you want to keep.
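    The seed convention described above can be sketched in a few lines. This is a minimal illustration of the "-1 means randomize" behavior, assuming a 32-bit seed range; the function and names are mine, not the workflow's actual implementation.

    ```python
    import random

    def resolve_seeds(section_seeds):
        """For each section, -1 means 'randomize': draw a fresh seed for
        this run. Any other value is kept as-is, so that section will
        reproduce the same output when regenerated."""
        return [random.randint(0, 2**32 - 1) if s == -1 else s
                for s in section_seeds]

    # Fix sections 1 and 2, regenerate sections 3-6 randomly
    seeds = resolve_seeds([123456, 789012, -1, -1, -1, -1])
    ```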

    By combining the six generated videos, you can create six different types of movement. For example, by generating and combining six videos of different durations, you can create a long video containing six complex movements. This is one of SVI's strengths, enabling complex processing that is impossible with a single generation.

    However, SVI V2.0 PRO also has its drawbacks. Because SVI uses the first image as a reference point, the AI tries to restrict movements that deviate significantly from the reference point. As a result, the movement becomes sluggish and unnatural. Furthermore, this constraint imposed by the reference point also reduces the responsiveness to prompts.

    In short, the use of excellent LORA is essential in SVI. In my experience, movements without LORA are very unnatural, lack impact, and resemble something out of a horror movie. Fortunately, there are many excellent adult-oriented motion LORAs available. However, if you want to create completely original movements, expect it to be difficult with the current version of SVI.

    I hope this workflow helps make video production with SVI more enjoyable.

    Description

    v2.0 -

    The number of generation sections has been changed to 6.

    The layout has been changed so that CR Seed node input can be handled in one place. In addition, the layout has been significantly changed to unify the user experience with the Painter I2V version, reducing input stress.

    FAQ

    Comments (16)

    lemon95212Mar 2, 2026
    CivitAI

    I get errors on the 50 series.

    kenpechi
    Author
    Mar 2, 2026

    I'm also on the 50 series, so I think the cause may be different.

    gackt2Mar 2, 2026· 1 reaction
    CivitAI

    Thank you for sharing Kenpechi ! Your works are truly inspiring

    lemon95212Mar 4, 2026
    CivitAI

    Bro, what did you generate your images with? Could you share it?

    kenpechi
    Author
    Mar 4, 2026

    I'm sorry, but I've already deleted the images used to make the model, so I can't share them.

    However, aside from the prompts, you can check the models and LoRAs used in the latest images on my profile. Please look at the images made with "Wai IL Realism".

    https://civitai.com/models/2233797/wai-realism-illustrious?modelVersionId=2514670

    aureliusMar 4, 2026
    CivitAI

    You could add the TorchCompile node. It gives a significant speedup after an initial slower run.

    kenpechi
    Author
    Mar 4, 2026

    I know about it, but I don't use it because I change the settings frequently and it's a pain to have to go through the initial long generation process multiple times.

    aureliusMar 4, 2026

    @kenpechi I think it's worth it. It only recompiles if you restart ComfyUI.

    kenpechi
    Author
    Mar 4, 2026

    @aurelius Ok, I'll try again.

    mandbzMar 7, 2026· 1 reaction
    CivitAI

    Easy to use, great quality. Thank you

    mandbzMar 7, 2026
    CivitAI

    Is there a way to make the model produce video output consistent with the 1st image I upload to the model? I have been having trouble with that. The only fix was to train my own character lora. Just curious if there is another way

    kenpechi
    Author
    Mar 7, 2026

    I think a homemade LORA is the most reliable option, but I've only used random characters, so I'm not very familiar with it. However, I've heard that a model called "Qwen-Image-Edit 2511" is very good at image editing. It can probably edit characters while keeping the same appearance to some extent, so why not look into it?

    mandbzMar 7, 2026

    @kenpechi Thank you I'll look into it!

    ArtificialOtakuMar 10, 2026

    @mandbz On that note, try the new Firered1.1, seems to be the king of consistent characters now.

    Copper_MausMar 8, 2026
    CivitAI

    Still kinda new to all this, but when I download and import the workflow, a lot of nodes appear to be missing. How would I fix that?

    kenpechi
    Author
    Mar 8, 2026

    Well, are you a beginner? First, install the missing nodes with ComfyUI Manager. If that doesn't work, there are some helpful people who give thorough introductions to ComfyUI, so please check them out.

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,885
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/1/2026
    Updated
    5/1/2026
    Deleted
    -

    Files

    wan22I2VSVIWorkflow_v20.zip

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)