It finally happened!
Now there's a way for smoother continuous videos thanks to SVI team and Kijai.
We are at v1.0!
I've updated the workflow to add a few more features;
Video extend option: load an initial video and convert it to a latent that feeds the first I2V (WIP)
Option to switch between 3 and 2 ksampler phases by setting the initial step
Option to set cfg > 1 if you want to disable lightx2v
Parts are saved in a lossless format (use something like VLC to view them) and only loaded again on the final merge; if something goes wrong you can merge those files yourself to get a complete video.
Implemented a bus system to reduce connections. Report any issues, but things should work as long as you have the right models and loras selected.
You can set and fix the seed for each part
There are options to upscale and interpolate before final save
Final save happens on main graph so you can preview your output
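If a run dies partway through, the saved lossless parts can be stitched back together with ffmpeg's concat demuxer. A minimal sketch of how you might build that command; the `part_*.mkv` naming is an assumption for illustration, not necessarily what the workflow actually writes:

```python
from pathlib import Path

def build_concat_command(parts_dir: str, output: str = "merged.mkv"):
    """Build an ffmpeg concat-demuxer invocation for lossless part files.

    Writes a concat list file next to the parts and returns the ffmpeg
    argv that would merge them without re-encoding (-c copy).
    Assumes parts are named part_001.mkv, part_002.mkv, ... (hypothetical).
    """
    parts = sorted(Path(parts_dir).glob("part_*.mkv"))
    list_file = Path(parts_dir) / "parts.txt"
    # concat demuxer resolves relative paths against the list file's folder
    list_file.write_text("".join(f"file '{p.name}'\n" for p in parts))
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", str(list_file), "-c", "copy", output]
```

Run the returned command from a terminal (or via `subprocess.run`) to get the merged video.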
The slow-motion issue probably persists. I couldn't find a consistent fix: if you speed up a part with a third-party tool, every following part becomes faster as well, since each one takes the previous latents as input, until everything breaks.
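To see why retiming one part snowballs: if each part inherits the previous part's (already sped-up) motion and gets retimed by the same factor again, the effective speed compounds. A toy illustration with assumed numbers, not values from the workflow:

```python
def cumulative_speed(per_part_factor: float, n_parts: int) -> list[float]:
    """Effective playback speed of each part when every part inherits the
    previous part's already-retimed motion and is retimed once more."""
    speeds = []
    s = 1.0
    for _ in range(n_parts):
        s *= per_part_factor  # each part compounds the previous speed-up
        speeds.append(s)
    return speeds

# e.g. a 1.25x retime per part leaves part 4 at ~2.44x real time
```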
A weak point of most SVI workflows right now is that they reference the first image in all parts, so you may see the background warp or change shape/texture at switches if the background has changed a lot.
I'll only be updating the workflow if Kijai updates the node (there are two merge requests about end frame support and better consistency(?)) and/or something breaks. So we can call this semi-final :)
ComfyUI-compatible SVI loras;
LightX2V loras I'm using;
FP32 VAE:
Ultra Flux VAE for sharper "Z Image" outputs:
https://huggingface.co/Owen777/UltraFlux-v1/blob/main/vae/diffusion_pytorch_model.safetensors
GGUF still seems to be performing better than fp8 scaled in my experience.
Just share your outputs with us folks as well :)
v0.9
Left sampling at (1 + 3 + 3) steps with 4 parts (~19s). Takes around 10 minutes on my 4070 Ti with sage + torch compile. Feel free to extend it further if you need.
Everything is GGUF. Patch sage attention and torch compile are disabled by default, but you are welcome to turn them back on since they speed things up a lot if you have the environment set up.
You can set part-specific or common loras thanks to the rgthree power lora node.
Happy generations! \('-')
Description
v0.3
Edit: had to do a few hotfixes. Write a comment if you think something is broken.
Make sure you are running ComfyUI frontend 1.26.2. I've been told that linked subgraphs are among the planned subgraph features and will eventually work flawlessly again. Until then, run this command to make sure you are on the right version;
.\python_embeded\python.exe -m pip install comfyui_frontend_package==1.26.2
Changes;
optimized the save function to save parts and only merge them at the end; takes less than 20% of the space used in the previous version
added very subtle temporal motion blur to the last frame; transitions look smoother when not much is going on, but sharp turns can still happen, so it works sometimes and other times you can see the motion change direction. Values could be tweaked further later on
added a basic lora example in each part that only affects that part
added basic upscale-with-model support to the final save node
removed the mp4 converter since most editing software supports the format; you can use VLC to view part files
changed the default resolution to 480x832 vertical
removed the ComfyUI Essentials requirement; used something from Basic Data Handling instead
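The temporal motion blur on the last frame is essentially a weighted blend of the tail frames. A minimal numpy sketch of the idea; the actual node and blend weights in the workflow may differ:

```python
import numpy as np

def blur_last_frame(frames: np.ndarray, weights=(0.1, 0.2, 0.7)) -> np.ndarray:
    """Blend the last few frames into the final one to soften the cut.

    frames: (T, H, W, C) float array in [0, 1]. The last len(weights)
    frames are averaged with the given weights (heaviest on the newest),
    and the result replaces the final frame. Weights are illustrative.
    """
    k = len(weights)
    w = np.asarray(weights, dtype=frames.dtype).reshape(k, 1, 1, 1)
    blended = (frames[-k:] * w).sum(axis=0)  # weighted average over tail
    out = frames.copy()
    out[-1] = blended
    return out
```

The blended last frame then becomes the start image of the next part, so the direction change at the seam is smeared instead of abrupt.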