    WAN 2.2 Perfect Loops - v1.1
    NSFW

    These workflows are licensed under the GNU Affero General Public License, version 3 (AGPLv3) and constitute the "Program" under the terms of the license. If you modify and use these workflows in a networked service, you must make your modified versions available to users interacting with that service, as required by Section 13 of the AGPLv3.

    https://www.gnu.org/licenses/agpl-3.0.en.html#license-text

    TL;DR: The final result should be an 8-second perfectly looped clip, built across three separate workflows.

    Contained in the ZIP are three complementary workflows for progressively building a perfect loop using WAN 2.2 and WAN 2.1 VACE.

    I arrived at these workflows through trial and error; they give me the most consistent results when creating perfectly looped clips. The default settings are what work best for me at a processing speed I find acceptable.

    The process is as follows:

    • wan22-1clip-scene-KJ.json

      • Generate a WAN 2.2 I2V clip from a reference image

      • Optional prompt extension using Qwen2.5-VL

        • requires a locally running Ollama server

    • wan22-1clip-vace-KJ.json

      • Use the clip from step 1 in a V2V VACE workflow (WAN 2.1 for now)

      • last 15 frames of clip 1 become first 15 frames of transition

      • first 15 frames of clip 1 become last 15 frames of transition

      • Generates 51 new frames in-between

      • Optionally generate the prompt using Qwen2.5-VL

        • requires a locally running Ollama server

    • wan22-1clip-join.json

      • Join clip 1 + clip 2

        • Upscale to 720p

        • Smooth upscaled clips using WAN 2.2 TI2V 5B (absurdly fast, with good quality)

        • Interpolate to 60fps using GIMM-VFI (swap to RIFE for speed if you want)

        • Color correct using original reference image

    The final result should be an 8 second perfectly looped clip.
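    As a rough mental model of how the transition step lays out its control video and mask (using the frame counts from the description: 15 control frames on each end plus 51 gray frames to fill; the actual workflow builds these as image batches and mask tensors inside ComfyUI, so this sketch is illustrative only):

```python
# Sketch of the VACE control layout: 15 known frames from the end of
# clip 1, 51 gray frames for VACE to fill, then 15 known frames from
# the start of clip 1. The mask is full strength over known frames
# and zero over the frames to be generated.
def build_vace_inputs(clip, blend=15, fill=51):
    control = clip[-blend:] + [None] * fill + clip[:blend]  # None = gray frame
    mask = [1.0] * blend + [0.0] * fill + [1.0] * blend
    return control, mask

frames = list(range(81))  # stand-in for the decoded frames of clip 1
control, mask = build_vace_inputs(frames)
print(len(control))  # 81 frames total: 15 + 51 + 15
```

    Because the transition ends on the first frames of clip 1, appending the transition to clip 1 closes the loop seamlessly.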

    There are more notes in the workflows. Please drop a comment if you have questions. They should work out of the box, provided you have the required custom nodes, the latest Comfy, and PyTorch >= 2.7.1. Links to the models used are in the workflow notes.

    I opted for KJ-based workflows because Native is slower for me. Select the smallest model quants that fit within your VRAM (or system RAM) when sampling; otherwise choose Q8 for the best quality. Be wary of the ComfyUI-MultiGPU custom node: for me it's slower than Native, and both are slower than KJ with basic block swapping.

    Description

    1. wan22-1clip-scene-KJ-v11.json

    • added VRAM Debug node before first WAN sampler

    • fix missing CLIP input in Prompt Extender group

    2. wan22-1clip-vace-KJ-v11.json

    • replaced "Load Video (Upload)" node with "Load Video FFmpeg (Upload)"

    • added ColorMatch node before final video save

    • VACE and ColorMatch ref image is the first frame from the scene

    3. wan22-1clip-join-v11.json

    • correctly interpolates between the last and first frame

    • swapped the default GIMM-VFI model from F to R (faster)

    • attached the GIMM-VFI seed to the workflow seed

    • replaced "Load Video (Upload)" with "Load Video FFmpeg (Upload)"

    • added an image comparer for smoothing results (32nd frame)

    • added ColorMatch after smoothing to correct the shift due to the VAE

    • removed the need to upload a reference image for color matching

    • replaced EasyColorCorrector with some manual nodes from LayerStyle
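    For intuition on what the ColorMatch steps are compensating for, here is a minimal channel-wise mean/std (Reinhard-style) color transfer in NumPy. This is only an illustrative approximation under my own assumptions; the actual ColorMatch node may use more sophisticated histogram or CDF matching, and `match_color` is a hypothetical helper name, not something from the workflow.

```python
import numpy as np

def match_color(image, reference):
    """Shift each RGB channel of `image` toward the per-channel
    mean/std of `reference`. Rough sketch of what a color-matching
    node does to undo VAE-induced color drift."""
    img = image.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(img)
    for c in range(3):
        i_mean, i_std = img[..., c].mean(), img[..., c].std()
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        scale = r_std / i_std if i_std > 1e-6 else 1.0
        # Re-center on the reference statistics.
        out[..., c] = (img[..., c] - i_mean) * scale + r_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

    Applying this with the original reference image (or first frame) after every lossy decode pass keeps the loop's endpoints from drifting apart in color.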


    Comments (35)

    Sisana · Sep 8, 2025
    CivitAI

    What's new/changed in 1.1? 🤔

    Caravel (Author) · Sep 8, 2025 · 1 reaction

    Bugfixes and mitigations for color drift, mostly.

    Sisana · Sep 8, 2025 · 1 reaction

    @Caravel Oh man, can't wait to try. I've tried a bunch of FLF2V workflows and always end up with the worst colour/brightness/contrast drift, which pretty much ruins the point of seamless loops. Downloading now!

    gss3 · Sep 9, 2025

    Fantastic results!

    StellarFlower · Sep 14, 2025

    Can you please make a quick tutorial on how to set this up? I'm new to ComfyUI and this just seems crazy advanced for me to follow.

    Gsssyik · Sep 20, 2025

    Literally just drag and drop the workflow. Install the stuff ComfyUI warns about. Install an Ollama server (google it), then run the workflow. Download the files the errors say are missing and restart Comfy. If that seems like too much, this is probably out of your league atm

    StellarFlower · Sep 21, 2025

    @Gsssyik got it to work already, thanks man

    Lostcut · Sep 14, 2025

    Oh, you've assembled it! Cool, gonna try it one day. Does it still need the Wallpaper LoRA?

    I mean, technically I didn't need it for what I tried, but without it the results were much less consistent

    Caravel (Author) · Sep 14, 2025 · 1 reaction

    You don't need any LoRAs; it should work without them enabled, but it might require more sampling steps.

    01hessiangranola851 · Sep 15, 2025 · 1 reaction

    Thank you for these workflows! Although my use case is different (stitching together multiple first-last frame clips), I think I'll be able to use your VACE and interpolation techniques to solve or mitigate a problem I've been struggling with for a while.

    The clean workflows and clear writeup are super helpful!

    LumiNami · Sep 16, 2025 · 2 reactions

    This workflow is honestly amazing, probably the cleanest seamless loops I’ve seen around here. At first the three-part setup (scene, vace, join) looked a bit intimidating, but once I spent some time with it, everything clicked and felt pretty straightforward. The results are just amazing, really appreciate the effort you put into this WF^^

    (P.S. copied from my review XD)

    Caravel (Author) · Sep 18, 2025 · 1 reaction

    Thank you! I'm glad you had success with the workflows.

    omgitsgb · Sep 25, 2025

    I'm confused, how is the WanVideo model loader supposed to load GGUF models? They don't show up when I put them in my diffusion models folder, and that's where the node points to

    Caravel (Author) · Sep 26, 2025

    You need to be on nightly ComfyUI and a nightly version of the KJ plugin.

    omgitsgb · Sep 26, 2025

    @Caravel hmmmmmm, I'm on ComfyUI Manager V3.30.3 with KJ nightly [1.1.6]

    omgitsgb · Sep 26, 2025 · 1 reaction

    @Caravel No clue why, but ComfyUI-WanVideoWrapper was failing to update and I didn't notice. Manually deleted and reinstalled, and they appear now. Thanks!

    meowmeow12345 · Sep 26, 2025

    Do you know, if I'm trying to use this for 720p and it doesn't match up, do I just need more steps? Because I tried with like 35 steps, but it did not match up o.o;;

    Or is there some other value to play with? It is a live wallpaper, but she is moving a little bit, like walking in place pepehmm

    Catz · Oct 11, 2025

    On the 2nd workflow, I'm trying to change the VACE model from 2.1 to 2.2 so I can use WAN 2.2 LoRAs for consistent animation that isn't live wallpaper.

    The only models I could find at lower quantization were the WAN 2.2 Fun VACE High and Low 14B. The regular ones at bf16 seem heavy; I should probably try them, but rip my 3090: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Fun/VACE.

    I get an error at the WanVideo Vace Encode that
    "The size of tensor a (132) must match the size of tensor b (95) at non-singleton dimension 1"

    I thought perhaps the 2.1 VAE was the issue, so I changed it to the 2.2 VAE to test, but got the same error. I haven't played with Fun VACE enough to know what I'm doing, but I have more ease with regular WAN 2.2.

    Not quite sure what else to modify.
    All the seamless loop workflows I tried that use pure WAN 2.2 had some burnt color degradation over time, which is very apparent when it goes back to the 1st frame. This workflow is my last resort.

    Any guidance is appreciated.

    Caravel (Author) · Oct 11, 2025 · 1 reaction

    The 2nd workflow is geared around 2.1, so swapping to 2.2 will take more work than just changing the model loaders. See the 1st workflow for how the KJ nodes are used to load and sample WAN.

    You can also disable all of the LoRAs without issue (though if you remove the lightx2v one you'll have to tweak the sampler settings). I highly recommend you try running the 2nd workflow as-is without your LoRAs. It's only generating 51 new frames (using first 15/last 15 as a guide), so it's likely that it'll work better than you expect. It will always generate a seamless transition, I just added LiveWallpaper in there because I think it makes it more stable.

    If you absolutely need to apply LoRAs to the VACE model, your best option is to try to find the most similar one trained for 2.1.

    Catz · Oct 13, 2025

    @Caravel Ah that worked out great actually! I was not expecting to follow the movements without a similar lora.

    I'm somehow having issues with the final output, though. It seems the gray area doesn't blend nicely with the last-to-first-frame loop when it merges together. There's a small gap that fades in.

    I'm guessing the issue is from the generation of the gray empty frames in the 2nd workflow. I'm not sure what the logic is behind the Blend Target being 15 when there are 81 frames. Perhaps I could manipulate these values.

    I tried reducing the animation to 3 seconds at 49 frames and modified the values to
    0:(0.0),
    9:(1.0),
    40:(0.0),

    With Blend Target at 9 and batch size at 31, but I'm not sure if this is correct, or if the blend should come later than 9 frames. Understanding each of these values in the Control Mask would be pretty helpful.

    I also tried using LiveWallpaper for a test and still had the same issue.

    Unless my issue is my input footage not matching the exact total frame count.

    I feel like I'm almost there though. This is the best workflow for seamless loop results.

    Caravel (Author) · Oct 13, 2025 · 1 reaction

    @Catz Well done, you are very close. Your logic in the second workflow is correct, 18 control frames + 31 generated for 49 total. You can add preview image nodes to the outputs of both the mask and the input video to see what's really going on.

    Using your numbers, if we want a video of 49 frames with 9 on each end as the control, our breakdown would look like this:

    - 9 frames from end of first video, 9 frames of control mask at full strength
    - 31 gray frames for VACE to fill in, 31 frames in control mask no strength
    - 9 frames from start of first video, 9 frames of control mask at full strength

    The blend target is somewhat arbitrary, I just chose 15 as a default to get a smooth result. You could use 15 control frames with your 49, you'd just have a shorter transition.

    Looking at the video you posted, it looks like what you're missing is that in the 3rd workflow you have to tell it how many blending frames you used in the 2nd workflow. There is a "BLEND TARGET" node that defaults to 15. You just need to change it to 9 so the 3rd workflow blends over the right number of frames.
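    In general: with a total of N frames and B control frames on each end, VACE fills N - 2B gray frames. A quick sanity check in Python (illustrative only, the helper name is made up and not part of the workflow):

```python
def generated_frames(total, blend):
    # Gray frames VACE must fill = total minus the control
    # frames taken from both ends of the source clip.
    return total - 2 * blend

print(generated_frames(49, 9))   # 31, the batch size for a 49-frame transition
print(generated_frames(81, 15))  # 51, the workflow default
```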

    Catz · Oct 13, 2025 · 1 reaction

    @Caravel Thanks for the fast answer! I didn't even realise there was a Blend Target node in the 3rd workflow; that does seem to be the issue. This will help a lot to preview the masks before committing to rendering. I had to use block swap during renders as my RAM would peak at its 64GB, leaving me having to wait until it finished rendering before doing anything else.

    I'll give this another try tomorrow. When everything's working, I'm planning to bring all 3 workflows into 1 so I can automate various image-to-video runs. I'm not sure if you've already created one. Many nodes are repeated across the workflows, and clearing the model, VRAM, and RAM between steps 1-2-3 would only be what is needed between samplers.

    Detailteufel · Jan 30, 2026

    @Catz Hey mate, did you succeed?

    Catz · Jan 30, 2026

    @Detailteufel Hey there, the process took too long and was too finicky to get the correct frames for the other passes.

    I now use a custom workflow; I forget where I got the main part from, but I believe this workflow does the same thing:
    https://civitai.com/models/1823089/dasiwa-wan22-workflows-or-i2v-or-svi-20-or-s2v-or-flf2v-or-audio-or-combine

    The WhiteRabbit node in there also helps with the color degradation that the regular RIFE interpolation node created.

    The main issue I found with looping in Wan2.2 is that the default model creates this fading color contrast issue over time. When it loops back, you can see the color and sharpness difference between the original frame and the last frame. There are Wan2.2 models merged with the Lightning/lightx2v LoRA, and they fix this issue. I also have issues with the regular Wan pack nodes, so I just use a double KSampler, with the first going from 0 to 2 and the second from 2 to 4. Another issue is that it only works for 3 seconds, but it's good enough for what I need.

    Detailteufel · Feb 2, 2026

    @Catz Is it possible to achieve a loop with LTX2 or any other model?

    Catz · Feb 3, 2026

    @Detailteufel I haven't experimented with LTX yet, but you would need a compatible node that does first and last frame

    Lyara · Oct 23, 2025

    Not a chance. While admittedly not a full-on geek, I've tried quite a few things to get this to work on my 3080 Ti (12GB):

    - Turned block swap up to 40

    - Went to the Q4 model

    - Lowered the sampler steps (down to 6)

    - Even disabled the LLM

    And still, each time it enters the WanVideo Sampler I get a full-on OOM error. And this is still only workflow #1 of 3. So the question is: is there any way to reduce VRAM usage for a 12GB card? Or is this just not suitable for a sub-16GB VRAM setup?

    Caravel (Author) · Oct 24, 2025

    The Q4 GGUFs are still 8-9GB each, leaving you with 3-4GB of VRAM for diffusion. Maxing out the block swap should get you somewhere; make sure you have it connected to all of the model loaders.

    fakolonya · Apr 26, 2026

    @Caravel I wish there were an LTX 2.3 version of this beautiful workflow with good RAM/VRAM management :hopium:

    kreegunlord015 · Oct 31, 2025

    It's a bit complicated, but the transition is incredibly smooth. Great workflow, but I'm hoping for V2V WAN 2.2 in the second step.

    JoyCaption supports other models as well, including QwenVL

    engineX2 · Nov 1, 2025

    The Kijai workflow causes a lot of OOM errors, even for those with 24GB or more of VRAM.

    katana88 · Jan 12, 2026

    It took some effort to get everything dialed in, but once it clicked, this combo workflow delivered the smoothest loop I’ve ever generated!

    tsai_ai · Feb 5, 2026

    Does anyone know how to fix this issue? When using workflow 3 with 4x_UltraSharpV2_Lite, I get the following error:

    Error(s) in loading state_dict for RealPLKSR: Missing key(s) in state_dict: "feats.1.norm.weight", "feats.1.norm.bias", "feats.2.norm.weight", .......

    How can this be resolved?

    Honeyphoria · Feb 10, 2026

    Hi bro! Workflow 2 simply generates a video similar to my workflow 1 video. I believe it should be a video with only 15 frames at the start, 15 at the end, and a grey part in between, or something like that? What could I be doing wrong? I didn't touch anything, the workflow is default; I just changed the width and height to match my clip 1 video.

    loneillustrator · Mar 26, 2026

    Does this setup not work with 16GB VRAM? It freezes on mine.

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    3,607
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/8/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    wan22PerfectLoops_v11.zip

    Mirrors

    CivitAI (1 mirror)