This workflow retrieves the last 15 frames of your video, uses them to generate a logical continuation, and then merges that continuation with the original video.
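The retrieve-then-merge logic above can be sketched in plain Python. This is an illustration only, not the actual ComfyUI graph: lists stand in for decoded frames, and `generate_continuation` is a hypothetical stand-in for the VACE sampling step.

```python
# Hedged sketch of the workflow's logic; lists stand in for decoded
# video frames, and generate_continuation is a hypothetical stand-in
# for the VACE sampler node.

def extend_video(frames, fps, generate_continuation, context=15):
    # The last `context` frames condition the generator so the new
    # footage continues the motion of the original clip.
    conditioning = frames[-context:]
    continuation = generate_continuation(conditioning, fps)
    # The continuation is appended; the merged clip keeps the source fps.
    return frames + continuation, fps

# Toy generator that just repeats the final frame 8 times.
toy = lambda ctx, fps: [ctx[-1]] * 8
merged, out_fps = extend_video(list(range(100)), 16, toy)
```

The key point is that the continuation is generated at the same frame rate as the source, so the merged clip plays back uniformly.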
Files:
Recommendation:
>24 GB VRAM: base or Q8_0
16 GB VRAM: Q5_K_S
<12 GB VRAM: Q4_K_S
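The recommendation table above can be expressed as a tiny helper. This is illustrative only, not part of the workflow; the thresholds mirror the table, and the unlisted 12-16 GB gap is assumed here to fall back to the smaller quant.

```python
# Illustrative helper mirroring the VRAM recommendation table above.
# Assumption: the unlisted 12-16 GB range falls back to Q4_K_S.

def pick_quant(vram_gb):
    if vram_gb > 24:
        return "base or Q8_0"
    if vram_gb >= 16:
        return "Q5_K_S"
    return "Q4_K_S"
```

Lower quants trade some output quality for a smaller memory footprint, which is why the table scales down with available VRAM.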
For the base version:
VACE Model: wan2.1_vace_14B_fp8_e4m3fn.safetensors or wan2.1_vace_1.3B_fp16.safetensors
in models/diffusion_models
CLIP: umt5_xxl_fp8_e4m3fn_scaled.safetensors
in models/clip
For the GGUF version:
VACE Quant Model: Wan2.1-VACE-14B-QX_0.gguf
in models/diffusion_models
Quant CLIP: umt5-xxl-encoder-QX.gguf
in models/clip
VAE: wan_2.1_vae.safetensors
in models/vae
ANY upscale model (deprecated):
Realistic : RealESRGAN_x4plus.pth
Anime : RealESRGAN_x4plus_anime_6B.pth
in models/upscale_models
Custom Nodes:

Comments (14)
Hi, is this to increase the duration of a video? Can you please explain?
I'm having trouble finding the right words in English, but to explain simply: you import a video, and the workflow automatically retrieves the last frames and the frame rate to generate a continuation of the video. The continuation is then merged with the original.
@UmeAiRT Ok, thanks for your response. I got it.
Love the workflow! Thank you
I'm having some problems running the workflow, mostly on the upscaling part. I always get a low-memory error, even though my specs are a 5070 Ti GPU / 192 GB of RAM / Xeon CPU. At the moment I'm running the workflow without upscaling.
Your settings are too high, or you are running too big a model.
Upscaling long videos requires a lot of VRAM.
@UmeAiRT Thanks for the reply. I found the problem: I was upscaling the video without limiting the frame rate.
thank you very much
Hi mate, thanks for the workflow and the guide. I was using it this morning and it helped me a lot!
Thank you, with pleasure
So, the "OG" output plays at the correct frame rate (16 fps), but the EXT version plays the new portion at double speed for some reason? I'm looking at the nodes and can't see where that is happening.
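A quick way to see why a frame-rate mismatch reads as double speed (the 32 fps figure and the node name here are hypothetical, not confirmed from the workflow): if the video-combine step for the extended clip is set to twice the source fps, the same number of frames plays in half the wall-clock time.

```python
# Hypothetical numbers: if the extended clip were combined at 32 fps
# instead of the source's 16 fps, the same frames would play in half
# the time, which looks like double-speed playback.

def playback_seconds(n_frames, fps):
    return n_frames / fps

og = playback_seconds(81, 16.0)   # source clip at 16 fps
ext = playback_seconds(81, 32.0)  # same frame count at 32 fps
```

So the first thing to check is the fps value fed into the node that encodes the EXT output.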






