Hotfix 2/4/25 - Culled unused group nodes. This should reduce the required dependencies when installing.
READ ABOUT THIS VERSION |------------>
Feedback is appreciated, but if you're rude or uninformed, you will simply be ignored or blocked.
GGUF:
https://huggingface.co/city96/HunyuanVideo-gguf/tree/main
Fast Video Lora:
https://huggingface.co/Kijai/HunyuanVideo_comfy/tree/main
Pretty much everything else can be found on Civitai by sorting the model category by Hunyuan Video.
Some Comfy extensions were used; just click Install Missing and they should install. If you run into any errors, run Update All, uninstall broken nodes, and try installing them again. The ComfyUI Manager is not perfect.
There's an issue with Comfy where, sometimes after installing new nodes, they'll still show as missing in the workflow. If this happens, drag the .json file into Comfy again and it should load them this time.
Description
Plans for future implementations -
i2v
v2v
more checkpoint types
Comments (15)
Looks like a nice workflow. I really prefer simple ones like this that are easy to understand and follow for those of us who aren't crazy proficient in ComfyUI. I do have one question for you: I noticed your workflow, like many others, doesn't have an option for selecting Sage Attention for processing. I found it in a very old workflow I was using with this model, and after spending a good deal of time figuring out how to install it, it sped up generation significantly. Wondering if you found it not worth it, or haven't tried it?
First time hearing of it. I'll give it a look, thanks for the feedback.
What do "broke" and "rich" mean for you? VAE Tile Decode should always be used to avoid running OOM, I guess?
Yes, use Tiled VAE Decode if you run into an OOM error. Broke = low VRAM, rich = high VRAM.
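For anyone curious why tiled decoding helps with OOM: the decoder only ever holds one tile in memory at a time, so peak memory scales with the tile size rather than the full frame, and overlapping tiles are averaged to hide seams. A minimal sketch of the idea (a hypothetical decode_tiled helper with an identity "decoder", not the actual ComfyUI implementation):

```python
import numpy as np

def decode_tiled(latent, decode_fn, tile=64, overlap=8):
    """Decode a 2D latent in overlapping tiles so peak memory is
    bounded by the tile size instead of the full latent.
    decode_fn maps a latent tile to pixels at the same resolution
    (kept at identity scale to keep the sketch simple)."""
    h, w = latent.shape
    out = np.zeros_like(latent)
    weight = np.zeros_like(latent)
    step = tile - overlap  # stride between tile origins
    for y in range(0, h, step):
        for x in range(0, w, step):
            ys = slice(y, min(y + tile, h))
            xs = slice(x, min(x + tile, w))
            out[ys, xs] += decode_fn(latent[ys, xs])
            weight[ys, xs] += 1.0
    # Average the overlapping regions so tile borders blend smoothly.
    return out / weight

# Usage: with an identity "decoder", the tiled result matches a full decode.
lat = np.arange(48 * 48, dtype=np.float64).reshape(48, 48)
full = lat.copy()
tiled = decode_tiled(lat, lambda t: t, tile=16, overlap=4)
```

The trade-off is speed: each overlapping region is decoded more than once, which is why the workflow only suggests it when you'd otherwise hit OOM.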
workflow hunyuan gguf loader is missing!
Your workflow is using 175 tokens. That's completely useless, as the model is limited to accepting only 77 tokens.
Would appreciate a complete working workflow. I am very new to ComfyUI and Hunyuan, but I love the speed of t2v generation on low VRAM (which I presume means 8GB VRAM is OK).
I get an error at SamplerCustomAdvanced:
too many values to unpack (expected 4)
Then it stops. Any help will be appreciated.
This workflow has been consistently giving good results for me. I have a small setup with 12GB VRAM, an i5 CPU, and 16GB system RAM, and I get 97 frames at 30 steps in less than 20 minutes using the 4-bit GGUF model and loading 2 LoRAs.
I'm new to ComfyUI and don't know enough to add i2v and v2v to this workflow myself; I hope to see them soon.
Thanks a lot! :-)
I love this. But I wonder how this can work without connections between the nodes??
Anything Everywhere nodes. If you check under the nodes, you'll find the hidden connectors. The workflow I'm working on to replace this one won't have nodes hidden underneath; they'll be integrated into grouped nodes.
I don't know what I'm doing wrong, but I can't see where it is saving the final mp4 file; it isn't in the default output folder. Help?
The default location is output/Hunyuan/videos/30/. You can change this in the filename_prefix value of the Video Combine node if you want to save it somewhere else. The videos won't show up in the ComfyUI outputs browser, probably because of the subfolders.
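For reference on why the prefix controls the folder: ComfyUI's save-path convention treats slashes in filename_prefix as subfolders under the output directory and appends a zero-padded counter. A rough sketch of that logic (resolve_save_path is a hypothetical helper mirroring the convention, not the actual Video Combine code):

```python
import os

def resolve_save_path(filename_prefix, output_dir="output", counter=1, ext="mp4"):
    """Sketch of how a slash-containing filename_prefix becomes
    nested subfolders under the ComfyUI output directory."""
    # Everything before the last slash is a subfolder chain;
    # the last component is the base filename.
    subfolder, basename = os.path.split(filename_prefix)
    folder = os.path.join(output_dir, subfolder)
    # ComfyUI-style zero-padded counter keeps runs from overwriting.
    filename = f"{basename}_{counter:05d}.{ext}"
    return os.path.join(folder, filename)

# A prefix of "Hunyuan/videos/30/HunyuanVideo" saves into
# output/Hunyuan/videos/30/, which the outputs browser may not scan.
path = resolve_save_path("Hunyuan/videos/30/HunyuanVideo")
```

So setting filename_prefix to a plain name with no slashes should put the files back in the top-level output folder.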
Voidtools Everything. First thing to install on Windows. No more "I don't know where my files are" problem. (Download the alpha from their official forum; it has a dark mode, among other new things. https://www.voidtools.com/forum/viewtopic.php?t=9787)
@SD_AI_2025 I can't explain why, but it wasn't saved at all. I restarted ComfyUI and that solved the issue. It only happened the first time I used this workflow; every other time was fine. Probably a bug in my ComfyUI that day. But thanks for the tip! I'll check out Voidtools!
