    Hunyuan I2V GGUF workflows - v1.0
    NSFW

    Recently, HyVideo released an official I2V model along with a quantized version, so I updated the I2V workflow as well. This is a basic process using GGUF, nothing special.

    Since I have 12GB of VRAM, I used Q4, but if your VRAM is sufficient, I recommend using Q5 or higher. For better quality, increase the number of steps and resolution.

    PS: Wavespeed/Teacache is not required. If you can't use them, disable the nodes (Ctrl+B).

    Usage notes:

    1. GGUF download link

    2. Other model download links

    llava_llama3_vision > ..\models\clip_vision\

    gguf > ..\models\unet\

    SkyReels smooth LoRA download link

    Hunyuan Video fast LoRA download link
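The paths above can be sketched as a directory layout. This is a minimal illustration run against a scratch directory; the filenames are placeholders, not the actual download names, and `COMFY` stands in for your real ComfyUI root.

```shell
# Sketch of the expected model layout (scratch directory for safety).
# COMFY would normally be your ComfyUI install root; the filenames
# below are placeholders for whatever you actually downloaded.
COMFY="${COMFY:-/tmp/comfy-layout-demo}"

mkdir -p "$COMFY/models/clip_vision" "$COMFY/models/unet"

# llava_llama3 vision encoder -> models/clip_vision/
touch "$COMFY/models/clip_vision/llava_llama3_vision.safetensors"

# quantized GGUF checkpoint -> models/unet/
touch "$COMFY/models/unet/hunyuan-video-i2v-Q4.gguf"
```

On Windows the same layout lives under your ComfyUI folder, matching the `..\models\clip_vision\` and `..\models\unet` paths listed above.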

    If you like this model, please 👍 it and leave a review! Also, feel free to give me a ⚡; it would be greatly appreciated. If you don't like it, still let me know why so that I can improve!

    ※ I use Forge or ComfyUI to generate. If your results don't match mine exactly, this may explain why. My LoRA may not work on certain checkpoints; if that's the case, please switch to a different checkpoint.


    Comments (12)

    3427221 · Mar 9, 2025

    Something is bothering me. I just tested your workflow at 960x640, 49 frames, and it used 19.5–20.5 GB of VRAM. I tried another run (same prompt, same image, same resolution, same frame length, only the seed changed) and it used 15.5 GB. Then I set 89 frames, same prompt and image, and it used 17–18 GB... I don't understand how this works; it doesn't make any sense.

    TTangSlgy
    Author
    Mar 9, 2025

    In my samples I used 480x720, and the two slightly blurry videos later were 368x576. I also set the length to only 33–41 frames, with 10–20 steps. Even with these settings, my VRAM usage reaches 10–11 GB, which is why I only used the Q4 GGUF. If you hit an OOM error, reduce the resolution or lower the frame count. By the way, adding a LoRA (such as the fast LoRA) will also consume more VRAM.
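The resolution and frame counts being compared here translate directly into latent-tensor size, which is a large part of sampling VRAM. A rough sketch, assuming HunyuanVideo-style VAE compression factors (8x spatial, 4x temporal, 16 latent channels); this estimates only the latent, not activations or model weights, and the factors are assumptions, not measurements.

```python
# Rough estimate of how resolution and frame count scale the video
# latent tensor. Compression factors (8x spatial, 4x temporal) and
# 16 latent channels are assumed HunyuanVideo-style values.

def latent_elements(width, height, frames, channels=16,
                    spatial=8, temporal=4):
    """Number of elements in the video latent tensor."""
    latent_frames = (frames - 1) // temporal + 1
    return channels * latent_frames * (height // spatial) * (width // spatial)

# Comparing the settings discussed in this thread:
big = latent_elements(960, 640, 49)    # the commenter's run
small = latent_elements(480, 720, 33)  # the author's run
print(big / small)  # the 960x640/49f latent is roughly 2.6x larger
```

This is why dropping resolution or frame length is the first lever against OOM: the latent shrinks multiplicatively with both.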

    3427221 · Mar 9, 2025

    @TTangSlgy No, not OOM, but an enormous difference in VRAM used between two similar videos (3 to 5 GB difference with the same settings, only the seed changing). Other than that, it works.

    TTangSlgy
    Author
    Mar 10, 2025

    @NoArtifact I misunderstood your point. You mean that two runs with identical settings use different amounts of VRAM? There could be several reasons for this:

    ComfyUI might not have cleared the previous model. You can click "Unload Model" in the menu to manually clear it.

    Check if other workflows or programs are running at the same time.

    Unused nodes might still be active, causing unnecessary model loading.

    It could be a bug in an older version—please try updating ComfyUI.

    On my device, I sometimes encounter OOM when generating for the second time because the model remains in memory. This might be a bug in the newer versions of ComfyUI, but the issue disappears after I manually unload the model.
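The manual "Unload Model" step mentioned above can also be triggered over ComfyUI's HTTP API. This is a sketch assuming the `/free` endpoint present in recent ComfyUI builds (with `unload_models` and `free_memory` payload keys); verify it against your installed version before relying on it.

```python
# Sketch: unloading models from a running ComfyUI instance via its
# HTTP API, as an alternative to clicking "Unload Model" in the menu.
# The /free endpoint and payload keys are assumed from recent ComfyUI
# builds; check your version.
import json
import urllib.request

def free_request(host="127.0.0.1", port=8188,
                 unload_models=True, free_memory=True):
    """Build the POST /free request; the caller decides when to send it."""
    payload = json.dumps({"unload_models": unload_models,
                          "free_memory": free_memory}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/free", data=payload,
        headers={"Content-Type": "application/json"})

# urllib.request.urlopen(free_request())  # send only with ComfyUI running
```

Sending this between generations approximates the manual unload and can help when a previous model lingers in VRAM.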

    3427221Mar 10, 2025

    @TTangSlgy Yes, it's probably one or a few of the things you mentioned. Anyway, it's not a big problem; I just wasn't really getting why. You're probably right that models remaining in memory after unloading is the issue.

    pheonis · Mar 10, 2025 · 6 reactions

    Thanks for the workflow. I tried it and I'm getting a completely different video from the original image. It's like the video took inspiration from the image and generated something different.

    jwp7181603 · Apr 2, 2025

    Exactly the same issue here.

    3481598 · Mar 10, 2025

    I'm getting the error "SingleStreamBlock.forward() got an unexpected keyword argument 'modulation_dims_img'" at the sampler step. Any idea what that's about?

    TTangSlgy
    Author
    Mar 11, 2025

    You may need to update Wavespeed/Teacache. If the error persists, you can try disabling them. At least on my device, they are working fine.

    3481598 · Mar 11, 2025

    @TTangSlgy Updating didn't work, but disabling Wavespeed did. Thanks!

    3481598 · Mar 11, 2025

    @TTangSlgy Do you know if there's a way to increase the motion? I'm finding the results have very minimal movement.

    RedditUser981 · May 20, 2025

    Still trying on my RTX 4050 (6 GB VRAM); I think it's too slow.

    Workflows
    Hunyuan Video

    Details

    Downloads
    1,500
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/9/2025
    Updated
    5/14/2026
    Deleted
    -

    Files

    hunyuanI2VGguf_v10.zip

    Mirrors