CivArchive
    Long videos with SVI 2.2 PRO WanVideoWrapper Workflow - v1.0
    NSFW

    This is a workflow to use the newly released SVI 2.2 Pro (https://github.com/vita-epfl/Stable-Video-Infinity/tree/svi_wan22?tab=readme-ov-file#-model-preparation).

    READ THIS:

    This workflow makes heavy use of WanVideoWrapper nodes from Kijai: https://github.com/kijai/ComfyUI-WanVideoWrapper You will need the latest version, which contains the WanVideoSVIPro Embeds node. To install it, simply go to custom_nodes and git clone the repo, or use the extension manager once it's updated.

    Additional nodes:

    • kjnodes

    • VideoHelperSuite

    You will need to get the SVI 2.2 Pro LORAs here: https://github.com/vita-epfl/Stable-Video-Infinity/tree/svi_wan22?tab=readme-ov-file#-model-preparation. Additionally you can use your favorite lightx2v LORAs for I2V.

    The workflow is designed to make it relatively easy to pile on more scenes to keep extending the video. You will see two different workflows:

    • The "Same" one: This one loads the model once with a single set of LORAs. This is the easiest to work with but offers less flexibility as you cannot change LORAs for each clip. It is great for extending a scene or when LORAs aren't necessary for each scene. You can go infinitely using this.

    • The "Switch" one: This will allow you to select a different set of LORAs at every clip. But this will be very hungry on your RAM because of how ComfyUI uses the RAM for caching results. Typically you will have trouble going beyond 3 different clips using this method and will need to resume an existing video to keep going. You can use the Save/Load latent nodes when needing to unload the models.

    The workflows are shared with 3 clips (1 start and 2 extension blocks). To extend further, you need to:

    1. Copy the last block including Ref latent and Video Concat groups.

    2. Connect latents of the previous Inference to the prev_samples of WanVideo SVIPro Embeds of the new block.

    3. Connect previous Get Image Size and Count image to the Select last image old node of the new block.

    4. Connect extended_images of the previous Video Concat group to source_images of the new Image Batch Extend With Overlap.
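
    The extension steps above hinge on the overlap blend that Image Batch Extend With Overlap performs. As a rough illustration only (this is an assumed linear crossfade in plain NumPy, not the node's actual implementation; the function name and array shapes are mine):

```python
import numpy as np

def extend_with_overlap(source_images, new_images, overlap=5):
    """Concatenate two clips whose ends overlap by `overlap` frames,
    linearly crossfading the shared region to hide the seam.
    Both inputs are (frames, H, W, C) float arrays."""
    # Blend weight ramps from 0 (all source) to 1 (all new) across the overlap.
    w = np.linspace(0.0, 1.0, overlap).reshape(-1, 1, 1, 1)
    blended = (1.0 - w) * source_images[-overlap:] + w * new_images[:overlap]
    return np.concatenate([source_images[:-overlap], blended, new_images[overlap:]])
```

    Since each new clip starts from the previous clip's last latent, the two clips share a few frames; crossfading that shared region (rather than keeping both copies) is what avoids a visible stutter at the join.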

    I have included a small interpolation group (using GIMM VFI) to run once your video is done, but you can use any type of post-processing you may prefer.

    Description

    FAQ

    Comments (37)

    BIG_A · Dec 29, 2025
    CivitAI

    So cruel

    hboxgames132 · Dec 29, 2025
    CivitAI

    Hi! Consistency problems seem to be getting solved little by little at long lengths!!
    This looks awesomely clean, but I can't use different LORAs for each part of the generation.. Would it be possible to make a workflow using Power Lora Loader (rgthree) so that we can add more LORAs to the list and, more importantly, use a separate LORA pool for each generated part??
    I struggle to do it myself since the Power Lora Loader cannot be attached directly to your nodes with my newbie skills...

    zeronecka
    Author
    Dec 29, 2025 · 1 reaction

    The lora loader I use is very similar to the Power Lora Loader. For a different LORA per segment, use the "Switch" workflow rather than "Same" and modify the Lora High / Lora Low on each part (just make sure SVI and lightx2v are always included). Be aware that this will require a lot of RAM the more segments you have.
    If for some reason you need more than 5 LORAs, you can chain the Lora nodes by using the "prev_lora" connector.

    To use the Power Lora node you will need to go native rather than wrapper. Someone posted a native SVI workflow almost at the same time as mine here.

    willshawn2519 · Dec 30, 2025
    CivitAI

    Hi, thank you for the great workflow.
    I'd like to ask how to fix the issue below:

    When generating only 1 clip and then loading the latent to generate another 5 seconds separately, the video seems to start from (last frame - 1) instead of the last frame itself.
    I found a stutter when merging the 2 separately generated videos, which didn't happen when I generated 2~3 clips within a single queue.

    zeronecka
    Author
    Dec 30, 2025

    So since the workflow uses the last latent of a previous clip, you will have an overlap of 5 frames between the 2 clips. You therefore need to trim or blend these frames.
    In the workflow I shared this is done in the Image Batch Extend with Overlap node to achieve the blend.

    willshawn2519 · Dec 31, 2025

    @zeronecka Thank you for the response. So, to resolve what I experience, I need 'source images' and 'new images' connected to the Image Batch Extend with Overlap node, right? In my case, where I only run the first video batch twice (while bypassing the other 2 video steps), how should I put them in the proper format (images)?

    zeronecka
    Author
    Dec 31, 2025 · 2 reactions

    @willshawn2519 This node does the blending for you. You would plug your first clip's result into source images and the second clip into new images. And I think a value of 5 would do the trick.

    wvagrant00 · Dec 30, 2025 · 1 reaction
    CivitAI

    This worked great. Thanks!

    da_green1977467 · Dec 30, 2025
    CivitAI

    My output is in slow motion? Maybe I used the wrong lightning LORAs?

    zeronecka
    Author
    Dec 30, 2025 · 1 reaction

    Could be. I use the 1030 i2v for high and the common Wan 2.1 rank 64 for low. It's mostly the high one that matters.

    noble6919 · Dec 31, 2025
    CivitAI

    Thank you for posting this workflow! I have a noob question here: when I try to run the "Same" workflow, I get an error with the Get Image Size & Count (Swwan) node.

    Prompt outputs failed validation:
    Required input is missing: samples
    GetImageSizeAndCount: Required input is missing: image
    GetImageSizeAndCount: Required input is missing: image
    GetImageSizeAndCount: Required input is missing: image
    SaveLatent: Required input is missing: samples

    I'm assuming those connections are not attached on purpose, but I'm not sure what could be the issue then.

    zeronecka
    Author
    Dec 31, 2025 · 1 reaction

    I loaded the workflow into ComfyUI to verify. Those nodes should be properly attached... Try loading it again after downloading.
    You should see lines linking those nodes.

    On the other hand, if you have deactivated nodes (purple colored) you may encounter that type of error, I think. To make sure all the nodes are properly activated, select the whole group and use Ctrl+B to deactivate/activate; they should not be purple colored. The workflow comes with all the nodes deactivated from what I see when loading the JSON; you need to activate them from top to bottom depending on which step you are at.

    noble6919 · Jan 2, 2026

    @zeronecka copy that. I'll give it a shot. Thank you for looking into it and getting back to me. Happy new year!

    K3NK · Jan 1, 2026
    CivitAI

    I'm trying to do one of my blowjobs and the second sampler starts from the start image.. it doesn't happen with the native samplers.. anyone having this?

    zeronecka
    Author
    Jan 1, 2026

    I could take a look. Can you tell me in detail which LORA + prompt? If I get something good I'll post it here.

    K3NK · Jan 1, 2026

    @zeronecka https://civitai.com/models/1874811/ultimate-deepthroat-i2v-wan22-video-lora-k3nk

    with this prompt:

    a cinematic scene with a woman in the frame, a naked man enters from the side, only his lower body is visible, side view of his hips, thighs and legs, with a gigantic erected penis with testicles appears and she starts engaging in a deepthroat blowjob with that penis. She swallows the entire penis, her nose smashes agains the man's hips. She moves her head back and fort swallowing the penis, realistic proportions, natural lighting, cinematic composition

    With an image where there's no penis already in the image, I wasn't able to get good results..

    zeronecka
    Author
    Jan 2, 2026

    @K3NK I have added a post (https://civitai.com/posts/25575794) with your LORA and prompt for the first scene, and a slightly modified prompt for the 2nd and 3rd steps:

    The woman continues her deepthroat blowjob. She swallows the entire penis, her nose smashes agains the man's hips. She moves her head back and fort swallowing the penis, realistic proportions, natural lighting, cinematic composition

    Had no issue with the "same" workflow and your LORA; it's embedded in the video of the post, in case you can spot an important difference with your local setup. Didn't spend much time on the base image (took something I had saved).

    Venion · Jan 2, 2026

    @K3NK Hey K3NK, I used your Lora yesterday in Step 4 and it worked perfectly. Did you accidentally change something in the area where the workflow fetches the last frame image?

    K3NK · Jan 2, 2026

    @Venion not really, I'm integrating it with my PNG extend method. I think I already have it working, though I don't know if it really works this way, because I'm encoding the saved frames to latent.. the wrapper is just superior in quality and speed..

    Can anchor_samples maintain penis shape? It's probably my method, but I'm feeding the last 81 frames to prev_samples and the first frame to anchor_samples, and I find the penis gets thinner, similar to when I don't use SVI.. now I'm generating one with anchor_samples being one of the frames where the penis is most visible..

    CONFIRMED, at least in my workflow: anchor_samples can help maintain penis shape. I guess it can help maintain character faces too? That I'm not so sure about.. xD

    Ponder_Stibbons · Jan 1, 2026
    CivitAI

    Yet another probe into latent space, wherefore to pluck the forbidden fruits of consistification. Been wanting to give SVI a go. This most definitely works, and it's a nice clean setup, perfect starting point for experimentation. Straight out of the box tests are quite nice. Thanks.

    xCirusX · Jan 2, 2026
    CivitAI

    Why is the sigma value set to 0.875 in the WanVideo Sigma To Step node? That value is for t2v. For i2v, it should be set to 0.900.

    zeronecka
    Author
    Jan 2, 2026 · 1 reaction

    An oversight. Let us know if fixing this really improves the result.

    xCirusX · Jan 2, 2026

    From the official docs, 0.900 is the boundary for wan 2.2 i2v.

    wan 2.2 t2v is 0.875.

    But also, no one really knows.
    Personally, I'd adjust the value to the i2v number. Up to you at the end of the day.
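
    The boundary debate above comes down to how a sigma threshold is turned into a step index on the sampler's sigma schedule. A hedged sketch of the idea (not the actual WanVideo Sigma To Step code; the function name is mine): the high-noise model handles the steps whose sigma is at or above the boundary, the low-noise model the rest.

```python
import numpy as np

def sigma_boundary_to_step(sigmas, boundary=0.900):
    """Return the first step index whose sigma falls below `boundary`.
    Steps before that index go to the high-noise model, the rest to low."""
    sigmas = np.asarray(sigmas)
    below = np.nonzero(sigmas < boundary)[0]
    return int(below[0]) if below.size else len(sigmas)
```

    On a descending schedule, raising the boundary from 0.875 to 0.900 moves the switch point earlier, so the high-noise model runs for fewer steps; that is the practical effect of the change being discussed.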

    Ponder_Stibbons · Jan 3, 2026 · 1 reaction

    I noticed this as well. Definitely need to control variables here when testing. I second adjusting to 0.9, and also promoting and linking subgraph widgets where they can be linked and iterated from the main page.

    Santaonholidays · Jan 2, 2026
    CivitAI

    How do I fix the issue that each clip makes the contrast higher?

    zeronecka
    Author
    Jan 3, 2026

    First make sure you always have the SVI LORA. Its main purpose is to avoid these types of problems when continuing videos. That being said, if the additional LORAs you are using degrade contrast, this problem may still occur. You can try to rectify it a bit using a color match.
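
    The color match mentioned above can be pictured as per-channel statistics transfer toward a reference frame. A minimal sketch of the idea in NumPy (actual color-match nodes typically offer more sophisticated methods; the function name and shapes here are assumptions):

```python
import numpy as np

def color_match(images, reference, eps=1e-6):
    """Shift each frame's per-channel mean/std to match a reference frame.
    images: (frames, H, W, C) floats; reference: (H, W, C) floats."""
    ref_mean = reference.mean(axis=(0, 1))
    ref_std = reference.std(axis=(0, 1))
    img_mean = images.mean(axis=(1, 2), keepdims=True)
    img_std = images.std(axis=(1, 2), keepdims=True)
    # Normalize each frame, then rescale to the reference statistics.
    return (images - img_mean) / (img_std + eps) * ref_std + ref_mean
```

    Using a frame from the first clip as the reference for each later clip counteracts the gradual contrast drift described in the question.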

    xonly207 · Jan 5, 2026
    CivitAI

    Hi, how do I use the "same" version of this workflow? It seems like all node states were set to bypass and only the interpolation section was active. Do I have to connect anything from the Model section to the Loaders section? Also, do I go in and set every node's state to normal and disable the interpolation at the bottom?

    zeronecka
    Author
    Jan 5, 2026

    Yes, you need to re-activate the nodes from top to bottom. I suggest working on one step at a time in case a step doesn't give you the desired result. You can Ctrl+drag to select multiple nodes and Ctrl+B to activate/deactivate.
    Everything up to step 3 should already be connected for you.

    xonly207 · Jan 6, 2026

    @zeronecka Got it working perfectly. Thank you!

    henry_mich · Jan 6, 2026
    CivitAI

    Hello, thanks for sharing this workflow. The node whose inputs on the left are model_high, model_low, image_embeds, text_embeds, and vae, and whose outputs on the right are image and latents, shows up red in my UI. What node is it exactly?

    zeronecka
    Author
    Jan 6, 2026

    This is a subgraph. Click the icon at the top right to see the actual nodes in it.

    timklinger93895 · Jan 10, 2026 · 1 reaction
    CivitAI

    I have an issue with face morphing. It's not serious, but it's enough to notice. Anybody else?

    HawtDayum · Jan 21, 2026
    CivitAI

    Spent hours trying to get this to work, fighting through error after error. ChatGPT tried everything... it's just not worth it. Thought I could download then use. Glad it worked for others, but gotta try someone else's workflow as this is far too frustrating. Thanks anyhow.

    pr1medebauchery573 · Mar 21, 2026

    Did you ever find an alternative? I've been searching endlessly for a replacement for a broken workflow that uses last image to continue a series and have come up short.

    ryanbai2008200 · Jan 21, 2026
    CivitAI

    It seems that the new clip is still using the default image as the start frame, and it always gives ugly blending. Is there any way I could use the last frame of the last clip as the start frame of the next clip?

    kalamees2025 · Jan 25, 2026
    CivitAI

    Hello!

    How to fix this?

    Exception Message: [GetNode] ✗ Variable 'model_low' not found! Available: vae, steps, text_encoder, init_image, init_latent. Tip: Make sure SetNode runs BEFORE GetNode in the graph.

    phillygtips831 · Feb 28, 2026
    CivitAI

    Unfortunately, I'm getting multiple errors with both workflow files.

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    3,653
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/29/2025
    Updated
    5/3/2026
    Deleted
    -

    Files

    longVideosWithSVI22PRO_v10.zip

    Mirrors

    Huggingface (1 mirror)