A workflow for Hunyuan video that quickly generates a low-resolution video first, then upscales it with a Hunyuan v2v pass once you find one you like. A third step handles further upscaling and video interpolation.
Version 1.5 uses the fast video lora to generate the first video in 7 steps, significantly increasing the speed of the first generation without compromising the second.
Version 1.6 uses a TeaCache sampler to increase generation speed by about 1.6x, or optionally about 2.1x at the cost of some quality.
Version 1.7 adds WaveSpeed, which has increased speed for me by about 15%. To use it you will need to clone the WaveSpeed repo into your custom_nodes folder. Some WaveSpeed functionality requires installing Triton, but if you only use the "Apply first block cache" node you may not need it.
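The clone step looks roughly like this (a sketch assuming a default ComfyUI folder layout; the repo URL is the one I believe WaveSpeed uses, so double-check it on the node's own page):

```shell
# From your ComfyUI install directory (path assumed, adjust as needed):
cd ComfyUI/custom_nodes
git clone https://github.com/chengzeyi/Comfy-WaveSpeed.git
# Restart ComfyUI afterwards so the "Apply First Block Cache" node shows up.
```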
If you already have a video you simply want to upscale, you can connect the muted Load Video node to the top-left connection in the "Upscale and Interpolation" group and mute the previous two groups.
This is just an application of some tips from this article, using already available workflows.
This is not intended as a tutorial on Hunyuan video; please check out the links above.
Description
Faster version using the Fast Hunyuan Video LoRA for the first step
FAQ
Comments (40)
well done. now someone needs to create a workflow to generate a full 1-minute video!
Getting a weird colour-wall type of moving image; it's mostly due to not setting some model right. Can anyone help me with what I am doing wrong? Thank you
That happened to me when I tried using kijai models with default nodes and vice-versa. Make sure to use the regular models with this workflow: https://civitai.com/models/1018217/hunyuan-video-safetensors-now-offical-fp8.
@bonetrousers Thank you sir, will try this.
My low-res video is wildly different from the intermediate and final ones.
That's due to the high denoise of the v2v step. Try decreasing it a bit, until the quality of the result starts to suffer. I haven't found a way to maintain both faithfulness to the original and quality.
thanks for the workflow and your reply!
I messed with the denoise a bit but am still getting major discrepancies.
I think it's something to do with the LoRA not being applied in the intermediate v2v step.
@jacksonhoward Make sure that you place any additional lora before the "Set_Model" set node. That way it will be used in the second step.
The fast video lora is only applied in the first step, and any loras you attach there will not be applied later.
@bonetrousers thank you yes this fixed it!
Do you get the same errors when trying to load the Fast LoRA?
Loading LoRA: hyvideo_FastVideo_LoRA-fp8 with strength: 1.0
lora key not loaded: diffusion_model.double_blocks.0.img_attn.proj.diff_b
lora key not loaded: diffusion_model.double_blocks.0.img_attn.proj.lora_down.weight
lora key not loaded: diffusion_model.double_blocks.0.img_attn.proj.lora_up.weight
lora key not loaded: diffusion_model.double_blocks.0.img_attn.qkv.diff_b
lora key not loaded: diffusion_model.double_blocks.0.img_attn.qkv.lora_down.weight
lora key not loaded: diffusion_model.double_blocks.0.img_attn.qkv.lora_up.weight
lora key not loaded: diffusion_model.double_blocks.0.img_mlp.0.diff_b
lora key not loaded: diffusion_model.double_blocks.0.img_mlp.0.lora_down.weight
lora key not loaded: diffusion_model.double_blocks.0.img_mlp.0.lora_up.weight
I haven't encountered that, but it seems like there is an incompatibility between the model and the LoRA.
Very good workflow considering we don't have img2vid yet; this is the closest thing I've seen to it. Thank you for sharing it :)
You're welcome! I've tried using an image as input with pretty bad results. Check out this workflow if you haven't already https://civitai.com/models/1046023/experimental-i2v. I might try to incorporate it with the 2 step concept.
@bonetrousers That would be great to see your workflow merged with that one, I'm struggling to get that one to work atm
@bhopping For me it used about 50 GB of RAM.
When I load the workflow a popup says I'm missing node types. I'm new to ComfyUI and video generation so I'm a bit lost as to how I download these nodes.
I installed the Manager to ComfyUI but I couldn't find the nodes while browsing the Custom node list.
I just figured out what I was doing wrong with the Manager. I've got most of the stuff installed and enabled now but the "GetNode" and "SetNode" addons are still missing. Can someone tell me where I can find these addons?
Outstanding workflow. Best I've come across for Hunyuan. It rapidly kicks out short videos to give you a loose idea of what it will create. When happy, you simply toggle the 2nd and/or 3rd stages via a radio button which produce upscaled videos (2x at each stage). The result is a full HD video that looks fantastic. Highly recommended!
Hey, I'm sure you're already on top of this but just in case; adding TeaCache to this workflow cuts down time on the second pass significantly. You can see where I put it in here (not claiming that it is correct or optimized): https://civitai.com/images/50544845 Looking forward to your next iteration!
Thanks, I am trying to figure this out too, but with your workflow I'm getting the same time as the original. Is there something else I need to do besides loading the TeaCache node? Thanks
Looks really promising! I'll fiddle around with it and incorporate it if I can.
@jacksonhoward I just reran with and without TC as a sanity check. Made sure to clear the cache, etc. It's taking my first two passes from 300 seconds down to around 225. If all of the nodes are installed properly and you are using comparable settings... I don't know
Everyone's talking about this TeaCache and Triton thing, oblivious to the fact that it's not a simple one-click install, especially for those with no programming knowledge. I do wish someone would make a video on how to install Triton on a Windows PC. Their repo has the vaguest explanation; they almost made it look like no layman should be able to understand those steps.
Great workflow; however, because the first video is generated at such low quality, I noticed the eyes of the subject look really bad in the final video, especially when the face is larger. Any idea how to solve this without face swapping? Right now, the best way I've found is to raise the width to 320 and height to 480, but then the 2nd pass takes a lot longer to run.
Unfortunately there are no great solutions, only compromises. You could try increasing the denoise of the second step without changing the resolutions, increasing the initial resolution and decreasing the upscaling factor accordingly, or reducing the strength of the fast LoRA and increasing the steps proportionately.
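To illustrate the middle option with some quick arithmetic (the helper name and numbers here are hypothetical, not taken from the workflow): if you raise the first-pass resolution but want the same final output, the v2v upscale factor must shrink proportionally.

```python
# Hypothetical helper: compute the v2v upscale factor needed to reach a
# fixed final resolution from a given first-pass resolution.
def upscale_factor(init_w, init_h, final_w, final_h):
    return final_w / init_w, final_h / init_h

# First pass at 320x480 targeting 1280x1920 needs a 4.0x upscale:
print(upscale_factor(320, 480, 1280, 1920))
# Raising the first pass to 384x576 drops the needed factor to ~3.33x:
print(upscale_factor(384, 576, 1280, 1920))
```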
Is it possible to use SageAttention within this workflow? How would I do it? I can't see inputs on the sampler like in kijai's wrapper.
Is there a way of using STG with this workflow?
I'm getting this error on the second pass, in the SamplerCustomAdvanced node: "The size of tensor a (7296) must match the size of tensor b (36936) at non-singleton dimension 1". Could it be that the latent upscale is generating dimensions Hunyuan can't work with? If so, how can we check?
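One way to sanity-check: that kind of mismatch usually means the upscaled latent has sizes the model can't patchify. Here's a hedged sketch of a checker; the divisibility rules are assumptions based on Hunyuan's commonly cited 8x spatial VAE compression, 4x temporal compression, and 2x2 patch size, not something verified against this workflow:

```python
# Assumed constraints: width/height divisible by 16 (8x VAE * 2x patch),
# frame count of the form 4k+1 (4x temporal compression plus one frame).
def check_hunyuan_dims(width, height, frames):
    issues = []
    if width % 16:
        issues.append(f"width {width} is not divisible by 16")
    if height % 16:
        issues.append(f"height {height} is not divisible by 16")
    if frames % 4 != 1:
        issues.append(f"frame count {frames} is not of the form 4k+1")
    return issues

print(check_hunyuan_dims(320, 480, 73))  # [] -> dimensions look fine
print(check_hunyuan_dims(300, 480, 72))  # reports width and frame problems
```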
It says that Node #113 is missing, but if I check for missing nodes it doesn't show up. Can't figure out what it is.
same here
That should be a "SetNode" from KJNodes: https://github.com/kijai/ComfyUI-KJNodes
@bonetrousers Any idea as to why ComfyUI automatically downloaded all the other missing nodes but didn't auto download the SetNode and GetNode nodes?
Edit: Oh, never mind. I just looked it up through the custom node manager, and there was a yellow warning icon by KJNodes when I searched for it, indicating conflicts with existing nodes I have installed. I wish ComfyUI had given me a notification about this instead of me having to type out what I think I might need.
Having issues with the error "Warning: torch.load doesn't support weights_only on this pytorch version, loading unsafely." and after a few cache clears the process gets killed. Tried updating torch with no fix; have you seen anything like this before?
From what I've read, it's nothing to worry about in particular.
@colinw2292823 I did a restart on my computer and got 2 working interpolations, but haven't gotten it to work since. Strange.
Figured it out. I was just running out of memory. For anyone else, I recommend taking it step by step by right-clicking and using "Queue Group Output Node" instead of running the entire workflow from the main queue.
