CivArchive
    Image to Video (using AD/HV) - v1.0
    NSFW

    Image to Video (using HunYuanVideo)

    This workflow creates a video from a still image.

    How to use

    (1) Prepare the original still image and load it to "Load Image" in the "Input" group.

    (2) Modify the text in the following parts to match the image you want to create.

     The workflow auto-generates a prompt from the loaded image; enter additional text to bring the result closer to your image.

     If you are not sure, you can leave everything blank.

     (2-1) "Prompt(additionaal:common)" in the "Input" group;

      This is the entire prompt.

      Provide a detailed description of the main subject and a brief description of the background.

      For the main subject, it is effective to enter keywords or actions that the automatic analysis does not detect.

       Example: Walking

      For the background, do not go into detail; describe it roughly.

       Example: on the beach

      If you want a simple finished background, specify something like "white background".

     (2-2) "Prompt (Only Step 1)" in the "Input" group;

      Write the prompt for initial video creation (STEP 1).

      The initial video uses AnimateDiff, which makes it difficult to preserve detailed backgrounds.

      Here, keep the description intentionally vague so that the background stays rough.

       Example: Light blue background

     (2-3) "Negative Prompt (Only Step 1)" in the "Input" group;

      Write a negative prompt.

      This prompt is only used during base video creation (STEP 1).

     (2-4) "Prompt (Only Step2)" in the "Input" group;

      Write the prompt for the detailed video (STEP2).

      Describe the elements from the Step 1 video that you want to emphasize, as well as specific details that were not covered in (2-1).

     (2-5) "Question" in "Analyze image";

      Enter appropriate questions regarding still image analysis.

     (2-6) "Exclude tags" in "Analyze image";

      If the automatic analysis of the still image produces tags you do not want reflected, list them here.

    (3) Specify the parameters of the video to be created.

     (3-1) "Target video size (larger size of W or H)" in the "Input" group;

      Enter the size of the video to be generated.

      The aspect ratio of the generated video is determined by the image loaded in (1).

      The size entered here applies to the larger of the width and height.

      Note: The width and height are adjusted to multiples of 16 before generation.

      The size of the generated video is displayed in "Width" and "Height" in "Step1".
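     The sizing rule above can be sketched as follows. This is assumed logic reconstructed from the description (scale so the longer side matches the target, then snap both sides to multiples of 16); the actual node internals may differ.

```python
def video_dimensions(img_w: int, img_h: int, target_long_side: int) -> tuple[int, int]:
    """Scale the image so its longer side matches the target, keeping the
    aspect ratio, then round both sides to the nearest multiple of 16."""
    scale = target_long_side / max(img_w, img_h)
    snap16 = lambda v: max(16, round(v * scale / 16) * 16)
    return snap16(img_w), snap16(img_h)

# e.g. a 1024x768 source image with a target of 768:
print(video_dimensions(1024, 768, 768))  # -> (768, 576)
```

     This matches the displayed "Width" and "Height" only if the workflow rounds the same way; treat it as a way to estimate the output size before running.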

     (3-2) "Frame step" in the "Input" group

      Specify the number of frames to generate.

     (3-3) "Frame rate" in the "Input" group

      Specify the number of frames per second.

      Note: The duration of the generated video is determined by (3-2) and (3-3).

      The number of seconds of video generated is displayed in "Video seconds".
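      The arithmetic behind the "Video seconds" display is presumably just the frame count divided by the frame rate; the values below are illustrative, not taken from the workflow.

```python
def video_seconds(frame_count: int, frame_rate: int) -> float:
    """Duration of the generated clip, assuming duration = frames / fps."""
    return frame_count / frame_rate

# 48 frames at 16 fps:
print(video_seconds(48, 16))  # -> 3.0
```

      So to target a given duration, pick the frame count as duration times frame rate.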

     (3-4) "NoiseSeed" in the "Input" group

      Set the initial value of noise.

    (4) LoRA in Step 2

     LoRA can only be applied to Step2 (HunYuanVideo).

     Please set the LoRA you want to apply to "LoRA Stack JK (Only Step2)" in the "Step2" group.


    P.S:

    I am running it in the following environment.

    Operation has not been confirmed in any other environment.

     CPU: Intel Core i7-13700KF (not tuned), RAM: 64 GB

     GPU: NVIDIA GeForce RTX 4080 Super, VRAM: 16 GB

     OS: Windows 11 Pro 24H2

    FAQ

    Comments (8)

    sundeveloper777311 · Jan 29, 2025
    CivitAI

    What is this? :))))

    makiaeveli · Jan 29, 2025

    Good point, it's kinda big for just a workflow. Maybe it includes sample images? One thing I learned is there is a large text blob above the author's name and below the download buttons, and you can click view more and he's got a huge explanation. I don't really care for AnimateDiff, but if you wanted to use it with Hunyuan video this might be a modern rendition

    2885872 · Jan 29, 2025

    Yeah...I hate to complain...but the schematics of the Death Star were better documented than this. Just sayin'.....

    flufflepimp · Jan 29, 2025
    CivitAI

    What Clip vision model are you loading into the IPAdapter ADvanced?

    extractorse123 · Jan 29, 2025 · 1 reaction

    I got it working using CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors linked from here:
    https://github.com/cubiq/ComfyUI_IPAdapter_plus

    AsetoEir
    Author
    Jan 31, 2025

    I also use "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors".

    However, the filename remains the same as the downloaded one. (Not renamed)

    wallace9873 · Oct 25, 2025
    CivitAI

    https://drive.google.com/file/d/1tNgGP7ln5bNHT-R9Y35m9cxpG7FLiTPQ/view?usp=sharing

    I fixed all the missing modules and can finally run it, but I get stuck at "comfyui-animatediff-evolved" (see attachment). May I know how to fix this? I have tried installing and updating it in ComfyUI Manager but it still fails. Thank you very much.

    AsetoEir
    Author
    Nov 10, 2025

    I can't answer your question because I'm no longer using this workflow or the nodes it depends on. I've seen some workflows stop working after updating ComfyUI and its modules, so this may be related.

    Workflows
    Hunyuan Video

    Details

    Downloads
    978
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/29/2025
    Updated
    4/30/2026
    Deleted
    -

    Files

    imageToVideoUsingADHV_v10.zip

    Mirrors

    Huggingface (1 mirror)