CivArchive
    Hunyuan 2step t2v and upscale - v1.5
    NSFW

    Workflow for Hunyuan Video that first generates a small-resolution video very quickly, then upscales it with Hunyuan v2v once you find one you like. There is a third step for upscaling and video interpolation.

    Version 1.5 uses the FastVideo LoRA to generate the first video in 7 steps, significantly speeding up the first generation without compromising the second.

    Version 1.6 uses a TeaCache sampler to increase generation speed by a factor of 1.6, or optionally 2.1 at the cost of some quality.

    Version 1.7 adds WaveSpeed, which has increased speed for me by about 15%. To use it you will need to clone the WaveSpeed repo into your custom_nodes folder. Some WaveSpeed functionality requires installing Triton, but if you only use the "Apply first block cache" node you may not need it.
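    For anyone unsure about the cloning step, installing a custom node pack is typically just a git clone into ComfyUI's custom_nodes folder followed by a restart. A sketch (the WaveSpeed repo URL below is an assumption; double-check it against the repo the workflow expects):

```shell
# Run from your ComfyUI installation directory.
cd custom_nodes

# Repo URL assumed to be chengzeyi's Comfy-WaveSpeed; verify before cloning.
git clone https://github.com/chengzeyi/Comfy-WaveSpeed.git

# Restart ComfyUI afterwards so the new nodes are registered.
```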

    If you already have a video you simply want to upscale, you can connect the muted Load Video node to the top-left connection in the "Upscale and Interpolation" group and mute the previous two.

    This is just the application of some tips from this article, using already available workflows.

    This is not intended as a tutorial on Hunyuan Video; please check out the links above.

    Description

    Faster version using the Fast Hunyuan Video LoRA for the first step

    FAQ

    Comments (40)

    azeli · Jan 5, 2025 · 4 reactions
    CivitAI

    Well done. Now someone needs to create a workflow to generate a full 1-minute video!

    snap2887 · Jan 5, 2025
    CivitAI

    Getting a weird colour-wall type of moving image; it's mostly due to not setting some model right. Can anyone help me with what I am doing wrong? Thank you.

    bonetrousers
    Author
    Jan 5, 2025

    That happened to me when I tried using kijai models with default nodes and vice-versa. Make sure to use the regular models with this workflow: https://civitai.com/models/1018217/hunyuan-video-safetensors-now-offical-fp8.

    snap2887 · Jan 5, 2025

    @bonetrousers Thank you sir, will try this.

    jacksonhoward · Jan 5, 2025
    CivitAI

    My low-res video is wildly different from the intermediate and final ones.

    bonetrousers
    Author
    Jan 5, 2025

    That's due to the high denoise of the v2v step. Try decreasing it a bit, until the quality of the result starts to suffer. I haven't found a way to maintain both faithfulness to the original and quality.

    jacksonhoward · Jan 5, 2025

    thanks for the workflow and your reply!
    I messed with denoise a bit but am still having major discrepancies.

    I think it's something to do with the LoRA not talking to the intermediate v2v.

    bonetrousers
    Author
    Jan 5, 2025· 2 reactions

    @jacksonhoward Make sure that you place any additional LoRA before the "Set_Model" set node. That way it will be used in the second step.

    The fast video LoRA is only applied in the first step, and any LoRAs you attach there will not be applied later.

    jacksonhoward · Jan 6, 2025

    @bonetrousers thank you yes this fixed it!

    guy33 · Jan 5, 2025
    CivitAI

    Do you have the same errors when trying to load the Fast LoRA?

    Loading LoRA: hyvideo_FastVideo_LoRA-fp8 with strength: 1.0

    lora key not loaded: diffusion_model.double_blocks.0.img_attn.proj.diff_b

    lora key not loaded: diffusion_model.double_blocks.0.img_attn.proj.lora_down.weight

    lora key not loaded: diffusion_model.double_blocks.0.img_attn.proj.lora_up.weight

    lora key not loaded: diffusion_model.double_blocks.0.img_attn.qkv.diff_b

    lora key not loaded: diffusion_model.double_blocks.0.img_attn.qkv.lora_down.weight

    lora key not loaded: diffusion_model.double_blocks.0.img_attn.qkv.lora_up.weight

    lora key not loaded: diffusion_model.double_blocks.0.img_mlp.0.diff_b

    lora key not loaded: diffusion_model.double_blocks.0.img_mlp.0.lora_down.weight

    lora key not loaded: diffusion_model.double_blocks.0.img_mlp.0.lora_up.weight

    bonetrousers
    Author
    Jan 5, 2025

    I haven't encountered that, but it seems like there is an incompatibility between the model and the LoRA.

    bhopping · Jan 7, 2025
    CivitAI

    Very good workflow; considering we don't have img2vid yet, this is the closest thing I've seen to it. Thank you for sharing it :)

    bonetrousers
    Author
    Jan 7, 2025

    You're welcome! I've tried using an image as input with pretty bad results. Check out this workflow if you haven't already https://civitai.com/models/1046023/experimental-i2v. I might try to incorporate it with the 2 step concept.

    bhopping · Jan 7, 2025

    @bonetrousers It would be great to see your workflow merged with that one; I'm struggling to get that one to work atm.

    bonetrousers
    Author
    Jan 7, 2025

    @bhopping For me it used about 50 GB of RAM.

    cahofe2059171 · Jan 8, 2025
    CivitAI

    When I load the workflow, a popup says I'm missing node types. I'm new to ComfyUI and video generation, so I'm a bit lost as to how to download these nodes.

    I installed the Manager in ComfyUI, but I couldn't find the nodes while browsing the custom node list.

    cahofe2059171 · Jan 8, 2025

    I just figured out what I was doing wrong with the Manager. I've got most of the stuff installed and enabled now but the "GetNode" and "SetNode" addons are still missing. Can someone tell me where I can find these addons?

    is125699 · Jan 8, 2025 · 2 reactions
    SteveWarner · Jan 8, 2025 · 5 reactions
    CivitAI

    Outstanding workflow. Best I've come across for Hunyuan. It rapidly kicks out short videos to give you a loose idea of what it will create. When happy, you simply toggle the 2nd and/or 3rd stages via a radio button to produce upscaled videos (2x at each stage). The result is a full HD video that looks fantastic. Highly recommended!

    Flexability · Jan 9, 2025
    CivitAI

    Hey, I'm sure you're already on top of this but just in case; adding TeaCache to this workflow cuts down time on the second pass significantly. You can see where I put it in here (not claiming that it is correct or optimized): https://civitai.com/images/50544845 Looking forward to your next iteration!

    jacksonhoward · Jan 9, 2025

    Thanks, I am trying to figure this out too, but using your workflow I'm getting the same time as the original. Is there something else I need to do besides loading the TeaCache node? Thanks.

    bonetrousers
    Author
    Jan 9, 2025

    Looks really promising! I'll fiddle around with it and incorporate it if I can.

    Flexability · Jan 9, 2025 · 2 reactions

    @jacksonhoward I just reran with and without TC as a sanity check. Made sure to clear the cache, etc. It's taking my first two passes from 300 seconds down to around 225. If all of the nodes are installed properly and you are using comparable settings... I don't know

    kakkkarot · Jan 10, 2025

    Everyone's speaking of this TeaCache and Triton thing, oblivious to the fact that it's not a simple one-click install, especially for those who have no programming knowledge. I do wish someone would make a video on how to install Triton on a Windows PC. Their repo has the vaguest explanation; they almost made it look like no layman should be able to understand those steps.

    11879 · Jan 9, 2025
    CivitAI

    Great workflow. However, because the first video is generated at such low quality, I noticed the eyes of the subject look really bad in the final video, especially when the face is larger. Any idea how to solve this without face swapping? Right now, the best way I've found is to up the width to 320 and the height to 480, but the 2nd pass then takes a lot longer to run.

    bonetrousers
    Author
    Jan 10, 2025· 1 reaction

    Unfortunately no great solutions, only compromises. You could try increasing the denoise of the second step without changing the resolutions, increasing the initial resolution and decreasing the upscaling factor accordingly, or reducing the strength of the fast LoRA and increasing the steps proportionately.

    PATATAJEC · Jan 9, 2025
    CivitAI

    Is it possible to use SageAttention within this workflow? How do I do it? I can't see inputs in the sampler like in kijai's wrapper.

    PATATAJEC · Jan 9, 2025 · 1 reaction
    CivitAI

    Is there a way of using STG with this workflow?

    juanml82 · Jan 9, 2025
    CivitAI

    I'm getting this error in the second pass, in the SamplerCustomAdvanced node: "The size of tensor a (7296) must match the size of tensor b (36936) at non-singleton dimension 1". Could it be that the latent upscale is generating dimensions Hunyuan can't work with? If so, how can we check?
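    One way to sanity-check the upscaled dimensions, assuming the commonly cited HunyuanVideo constraints (width and height divisible by 16, frame count of the form 4k + 1; the exact rules may differ for your model build), is a quick script like this:

```python
# Check whether a target video size satisfies the assumed HunyuanVideo
# constraints: width/height divisible by 16, frame count of the form 4k + 1.

def check_hunyuan_dims(width: int, height: int, frames: int) -> list[str]:
    """Return a list of problems; an empty list means the size looks valid."""
    problems = []
    if width % 16:
        problems.append(f"width {width} is not divisible by 16")
    if height % 16:
        problems.append(f"height {height} is not divisible by 16")
    if (frames - 1) % 4:
        problems.append(f"frame count {frames} is not of the form 4k + 1")
    return problems

def round_to_valid(width: int, height: int, frames: int) -> tuple[int, int, int]:
    """Snap a size to the nearest valid one below it (minimum 16x16, 5 frames)."""
    w = max(16, width - width % 16)
    h = max(16, height - height % 16)
    f = max(5, frames - (frames - 1) % 4)
    return w, h, f

if __name__ == "__main__":
    # A 2x upscale of 240x416 gives 480x832, which passes; an odd multiplier
    # like 1.5 on 240 gives 360, which fails the divisibility check.
    print(check_hunyuan_dims(480, 832, 49))   # []
    print(check_hunyuan_dims(360, 624, 49))
    print(round_to_valid(360, 624, 50))       # (352, 624, 49)
```

    If the check fails, snap the upscale target to the nearest valid size before running the second pass.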

    azeli · Jan 11, 2025

    Did you change the multiplier from 2?

    juanml82 · Jan 11, 2025

    @azeli I tried with 2, it didn't work, I tried with other multipliers, it didn't work either

    juanml82 · Jan 12, 2025

    Well, I'm trying the 1.6 version of your workflow and it's now working. Thanks!

    GnomeHunterx · Jan 10, 2025
    CivitAI

    It says that node #113 is missing, but if I check for missing nodes it doesn't show up. Can't find out what it is.

    6684011 · Jan 10, 2025

    same here

    bonetrousers
    Author
    Jan 10, 2025· 2 reactions

    That should be a "SetNode" from KJNodes: https://github.com/kijai/ComfyUI-KJNodes
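    If the Manager doesn't pick it up, a manual install of KJNodes is the standard ComfyUI custom-node clone (restart afterwards):

```shell
# Run from your ComfyUI installation directory.
cd custom_nodes

# URL from the comment above; provides the SetNode/GetNode nodes.
git clone https://github.com/kijai/ComfyUI-KJNodes.git
```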

    skyeve · Jan 17, 2025 · 2 reactions

    @bonetrousers Any idea as to why ComfyUI automatically downloaded all the other missing nodes but didn't auto download the SetNode and GetNode nodes?

    Edit: Oh nevermind, I just looked it up through the custom node manager, and there was a yellow warning icon by KJNode when I searched for it indicating that there are conflicts with existing nodes I have installed. I wish that ComfyUI would have given me a notification about this instead of me having to use my keyboard to type out what I think I might need.

    srsparky31956 · Jan 10, 2025
    CivitAI

    Having issues with the error "Warning: torch.load doesn't support weights_only on this pytorch version, loading unsafely." and after a few cache clears the process gets killed. Tried updating torch with no fix; have you seen anything like this before?

    colinw2292823 · Jan 11, 2025

    From what I've read, it's nothing to worry about in particular.

    srsparky31956 · Jan 11, 2025

    @colinw2292823 I restarted my computer and got 2 working interpolations, but haven't gotten it to work since. Strange.

    srsparky31956 · Jan 13, 2025 · 1 reaction

    Figured it out. I was just running out of memory. For anyone else, I recommend taking it step by step by right clicking and using "Queue Group Output Node" instead of running the entire workflow from the main queue.

    Workflows
    Hunyuan Video

    Details

    Downloads
    4,043
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/5/2025
    Updated
    4/30/2026
    Deleted
    -

    Files

    hunyuan2stepT2vAnd_v15.zip

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)