These workflows are licensed under the GNU Affero General Public License, version 3 (AGPLv3) and constitute the "Program" under the terms of the license. If you modify and use these workflows in a networked service, you must make your modified versions available to users interacting with that service, as required by Section 13 of the AGPLv3.
https://www.gnu.org/licenses/agpl-3.0.en.html#license-text
TL;DR: The final result should be an 8-second, perfectly looped clip, built across three separate workflows.
Contained in the ZIP are three complementary workflows for progressively building a perfect loop using WAN 2.2 and WAN 2.1 VACE.
These workflows were refined through trial and error to give me the most consistent results when creating perfectly looped clips. The default settings are what work best for me at a processing speed I find acceptable.
The process is as follows:
1. wan22-1clip-scene-KJ.json
   - Generate a WAN 2.2 I2V clip from a reference image.
   - Optional prompt extension using Qwen2.5-VL (requires a locally running Ollama server; see the sketch below).
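For reference, the prompt extension boils down to a single HTTP call against Ollama's default endpoint. A minimal sketch of the idea — the function name, model tag, and instruction text are illustrative placeholders, not the workflow's exact settings:

```python
import base64
import requests  # assumes the `requests` package is installed

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def extend_prompt(image_path: str, short_prompt: str) -> str:
    """Ask a local Qwen2.5-VL model to expand a short I2V prompt.

    The model tag and instruction below are placeholders; match them to
    whatever the workflow's Prompt Extender group is configured with.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(OLLAMA_URL, json={
        "model": "qwen2.5vl:7b",  # hypothetical tag; use your local model
        "prompt": f"Expand this video prompt with motion and scene detail: {short_prompt}",
        "images": [image_b64],
        "stream": False,
    }, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]
```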
2. wan22-1clip-vace-KJ.json
   - Use the clip from 1 in a V2V VACE workflow (WAN 2.1 for now).
   - The last 15 frames of clip 1 become the first 15 frames of the transition.
   - The first 15 frames of clip 1 become the last 15 frames of the transition.
   - Generates 51 new frames in between (see the frame-assembly sketch below).
   - Optionally generate the prompt using Qwen2.5-VL (requires a locally running Ollama server).
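For intuition, the transition's input is just clip 1's tail, a masked 51-frame gap, and clip 1's head (15 + 51 + 15 = 81 frames). A minimal sketch of that frame bookkeeping, using numpy and the default frame counts above — this mirrors how VACE-style inpainting is typically wired, not the workflow's exact node graph:

```python
import numpy as np

OVERLAP = 15  # frames reused from clip 1 on each side
NEW = 51      # frames VACE generates in between

def build_vace_inputs(clip1: np.ndarray):
    """clip1: (frames, H, W, C) float array in [0, 1].

    Returns an 81-frame control video and mask: the known frames are
    clip 1's last/first 15 frames, and the 51 middle frames are neutral
    gray and fully masked so the model is free to inpaint them.
    """
    _, h, w, c = clip1.shape
    gap = np.full((NEW, h, w, c), 0.5, dtype=clip1.dtype)  # neutral gray
    control = np.concatenate([clip1[-OVERLAP:], gap, clip1[:OVERLAP]])

    mask = np.zeros((control.shape[0], h, w, 1), dtype=clip1.dtype)
    mask[OVERLAP:OVERLAP + NEW] = 1.0  # 1 = generate, 0 = keep as-is
    return control, mask
```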
3. wan22-1clip-join.json
   - Join clip 1 + clip 2 (see the loop-assembly sketch below).
   - Upscale to 720p.
   - Smooth the upscaled clips using WAN 2.2 TI2V 5B (absurdly fast, with good quality).
   - Interpolate to 60 fps using GIMM-VFI (swap to RIFE for speed if you want).
   - Color correct using the original reference image (see the color-transfer sketch below).
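The color-correction step amounts to transferring the reference image's color statistics onto each frame. A minimal sketch of the idea using a simple Reinhard-style mean/std transfer — the actual ColorMatch node may use a different algorithm, but the effect is comparable:

```python
import numpy as np

def match_color(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift a frame's per-channel mean/std to match the reference image.

    frame, reference: (H, W, 3) float arrays in [0, 1]. Applied per frame,
    this counteracts the gradual color drift introduced by repeated
    VAE encode/decode passes.
    """
    out = frame.copy()
    for ch in range(3):
        f_mean, f_std = frame[..., ch].mean(), frame[..., ch].std() + 1e-6
        r_mean, r_std = reference[..., ch].mean(), reference[..., ch].std()
        out[..., ch] = (frame[..., ch] - f_mean) / f_std * r_std + r_mean
    return np.clip(out, 0.0, 1.0)
```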
The final result should be an 8-second, perfectly looped clip.
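For intuition on why the joined result loops seamlessly, here is one way the two clips could be assembled, assuming the 15/51/15 frame layout described above (a sketch of the idea, not the workflow's exact node logic):

```python
import numpy as np

OVERLAP, NEW = 15, 51

def assemble_loop(clip1: np.ndarray, transition: np.ndarray) -> np.ndarray:
    """clip1: the original I2V clip; transition: the 81-frame VACE clip.

    The transition's first/last 15 frames duplicate clip 1's last/first
    15 frames, so only its 51 middle frames are appended. Played on
    repeat, both seams fall between frames that VACE generated
    consecutively, so motion stays continuous across the loop point.
    """
    assert transition.shape[0] == 2 * OVERLAP + NEW
    return np.concatenate([clip1, transition[OVERLAP:OVERLAP + NEW]])
```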
There are more notes in the workflows; please drop a comment if you have questions. They should work out of the box provided you have the required custom nodes, the latest Comfy, and PyTorch >= 2.7.1. Links to the models used are in the workflow notes.
I opted for KJ-based workflows because Native is slower for me. Select the smallest model quants that fit in your VRAM (or system RAM) while sampling; if you have the headroom, choose Q8 for the best quality. Be wary of the ComfyUI-MultiGPU custom node: for me it's slower than Native, and both are slower than KJ with basic block swapping.
Changelog (v11)
1. wan22-1clip-scene-KJ-v11.json
   - Added a VRAM Debug node before the first WAN sampler.
   - Fixed the missing CLIP input in the Prompt Extender group.
2. wan22-1clip-vace-KJ-v11.json
   - Replaced the "Load Video (Upload)" node with "Load Video FFmpeg (Upload)".
   - Added a ColorMatch node before the final video save.
   - The VACE and ColorMatch reference image is now the first frame of the scene.
3. wan22-1clip-join-v11.json
   - Now correctly interpolates between the last and first frames.
   - Swapped the default GIMM-VFI model from F to R (faster).
   - Tied the GIMM-VFI seed to the workflow seed.
   - Replaced "Load Video (Upload)" with "Load Video FFmpeg (Upload)".
   - Added an image comparer for the smoothing results (32nd frame).
   - Added ColorMatch after smoothing to correct the color shift caused by the VAE.
   - Removed the need to upload a reference image for color matching.
   - Replaced EasyColorCorrector with manual nodes from LayerStyle.