An experimental I2V workflow combining LTXV and HunyuanVideo
Since HunyuanVideo doesn't support I2V yet, this is my approach to getting similar results.
A new node that ships with the HunyuanVideoWrapper can do a sort of IP-Adapter, but that isn't quite what I wanted, which is a video that actually looks like the input image.
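The workflow itself is a ComfyUI graph, but the first stage (image to LTX video) can be sketched in plain Python. This is only an approximation, assuming the diffusers LTX image-to-video pipeline; the model ID, frame count, and file names below are illustrative, not taken from the actual workflow.

import torch
from diffusers import LTXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Sketch of stage 1 only: animate the input image with LTX-Video.
# The ComfyUI workflow then feeds these frames into a HunyuanVideo
# video-to-video pass (via the HunyuanVideoWrapper nodes) for the final look.
pipe = LTXImageToVideoPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("input.png")  # hypothetical input image
frames = pipe(
    image=image,
    prompt="a short description of the desired motion",
    num_frames=97,             # LTX expects frame counts of the form 8k+1
    num_inference_steps=30,
).frames[0]

export_to_video(frames, "ltx_stage.mp4", fps=24)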
Comments (19)
so it's an image to LTX video to HunYuan video, the results look decent
is it possible to make longer videos with this workflow and still maintain character consistency?
I got a problem with the LTXV CLIP model loader
it says
"Incorrect path_or_model_id: 'C:\Users\Master\Desktop\new\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\text_encoder'. Please provide either the path to a local folder or the repo_id of a model on the Hub."
how can I fix it?
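A common cause of this error is that the models/text_encoder folder exists but doesn't actually contain the encoder and tokenizer files the loader expects. A minimal sketch of populating it with huggingface_hub, assuming the PixArt T5 encoder mentioned in a later comment; the repo id, file patterns, and target path are assumptions, not confirmed parts of this workflow.

from huggingface_hub import snapshot_download

# Hypothetical fix: pull the text encoder + tokenizer into the folder the
# error message points at. Check the LTXV node's README for the exact layout
# it expects (files directly in the folder vs. in subfolders).
snapshot_download(
    repo_id="PixArt-alpha/PixArt-XL-2-1024-MS",        # assumed source of the T5 encoder
    allow_patterns=["text_encoder/*", "tokenizer/*"],
    local_dir=r"C:\...\ComfyUI\models\text_encoder",    # use the full path from the error
)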
Hunyuan makes the video blurrier and smooths skin no matter the settings, and LTXV doesn't obey prompts; this is frustrating.
Followed your initial settings, using the newest version of LTXV and PixArt for the CLIP, and played with samplers/resolutions/etc., but the results are always subpar, unresponsive, and inflexible.
Also, fixing the LTXV node for some reason lets it regenerate one more time after, which defeats the purpose of "picking out the best motion" (EDIT: I guess you meant to fix the node and then alter the prompt add-ins/other parameters)
Also, sometimes the OneVision output is inaccurate, but all you can do is append to it or change the prompt instead of being able to edit it
4090, 32GB RAM, not sure how you got the quality of results you did
Oh I really want this but I'm just starting. I do have ComfyUI and Custom Node Manager installed and running. Now, I just need to know what to do next.
Can you add the workflow.json since the zip only contains a jpg? Thanks very much <3
can it work with Hunyuan GGUF or fp8 distilled, or FastHunyuan?
why is the workflow in the images different from the one that we can download?
"FlashAttention2 has been toggled on, but it cannot be used due to the following error: the package flash_attn seems to be not installed. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2." Which video cards does your process work with? Or in which nodes can Flash Attention 2 be replaced?
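FlashAttention 2 only ships for recent NVIDIA GPUs (roughly Ampere/Ada and newer) and has to match your CUDA/PyTorch build; when it isn't available, switching the wrapper's attention_mode to sdpa is the usual workaround. A small check, written as a sketch rather than anything from the original workflow:

def pick_attention_mode() -> str:
    # Prefer flash_attn when it is installed, otherwise fall back to
    # PyTorch's built-in scaled_dot_product_attention ("sdpa" in the wrapper nodes).
    try:
        import flash_attn  # noqa: F401  # needs an NVIDIA GPU (Ampere or newer) + matching CUDA build
        return "flash_attn"
    except ImportError:
        return "sdpa"      # available on any recent PyTorch, no extra install needed

print(pick_attention_mode())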
During the hunyuan process, I get an error saying, "HyVideoSampler
HyVideoSampler.process() got an unexpected keyword argument 'feta_args'"
EDIT: I got the workflow working. What did it for me is I downloaded IP-Adapter (I didn't have it; tbh I still don't know if I need it). What I think fixed my issue is that I closed down Comfy entirely, then went into the folder "\ComfyUI_windows_portable\update\" and started 'update_comfyui_and_python_dependencies'. Let that run for a few minutes, and it's working!
Issue I had:
After changing the attention mode to sdpa for LTX and HunyuanVideo (couldn't get the others working) and fixing the feta_args node issue, the initial LTX video is generated fine but the HunyuanVideo output is just a black screen. I left all other settings at default. No idea what's causing the issue.
Edit: I'm using an rtx 3090 24gb
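Black output after switching attention modes is often a numerical problem (for example fp16 overflow producing NaN frames) rather than a graph problem. A generic debugging sketch you could run on the decoded frames before the video combine step; this is an assumption about the cause, not part of the workflow:

import torch

def looks_black(frames: torch.Tensor) -> bool:
    # frames: decoded video tensor scaled to [0, 1]; NaN/inf values or an
    # all-zero tensor both render as a black clip in the final video.
    bad = torch.isnan(frames).any() or torch.isinf(frames).any()
    return bool(bad or frames.abs().max() < 1e-3)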
awesome work! can you share the old prompt for llava?
I got this error in the HunyuanVideo TextEncode stage: !!! Exception during processing !!! list index out of range.
What is the reason?
Is it possible to have the (Down)Load HunyuanVideo Text Encoder point to my other encoders? I've tried putting the ones I have in the LLM folder, clip folder, and text encoder folder and they do not appear. Refreshed, moved them around, nothing seems to help. Is there a path in the node that I can somehow change so they will appear?
I am running on 8gb so I am trying to use different encoders so I don't OOM.
Thank you, looking forward to giving this a try if I am able to.
Can someone teach me how to install Flash Attention 2, please? I have an AMD card, and I'm having this issue: the package flash_attn seems to be not installed. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2
I'm getting an error when it reaches the HunYuan Video VAE Loader node:
HyVideoVAELoader
Error(s) in loading state_dict for AutoencoderKLCausal3D: Missing key(s) in state_dict: "encoder.down_blocks.0.resnets.0.norm1.weight", "encoder.down_blocks.0.resnets.0.norm1.bias", "encoder.down_blocks.0.resnets.0.conv1.conv.weight", "encoder.down_blocks.0.resnets.0.conv1.conv.bias", "encoder.down_blocks.0.resnets.0.norm2.weight", "encoder.down_blocks.0.resnets.0.norm2.bias", "encoder.down_blocks.0.resnets.0.conv2.conv.weight", "encoder.down_blocks.0.resnets.0.conv2.conv.bias", "encoder.down_blocks.0.resnets.1.norm1.weight", "encoder.down_blocks.0.resnets.1.norm1.bias", "encoder.down_blocks.0.resnets.1.conv1.conv.weight",
it's a really long error so I'm not going to paste the whole thing, but I've not changed any settings and just used the workflow as it was originally set up
HyVideoModelLoader
'img_in.proj.weight'
Error. Any idea what is going on?
Where do I increase the frames for the video length?
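In both models the frame count isn't free-form: as I understand the model constraints (not something stated in this workflow), HunyuanVideo's VAE compresses time by 4 and LTX-Video's by 8, so valid lengths are of the form 4k+1 and 8k+1 respectively, usually set on the sampler or empty-latent node's num_frames widget. A small helper as a sketch:

def valid_num_frames(target: int, temporal_compression: int) -> int:
    # Snap a requested length to the nearest valid count of the form k*c + 1.
    k = max(1, round((target - 1) / temporal_compression))
    return k * temporal_compression + 1

print(valid_num_frames(100, 4))  # HunyuanVideo -> 101
print(valid_num_frames(100, 8))  # LTX-Video   -> 97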
Hi, will this workflow be updated to support LTX/Hunyuan node changes? This doesn't seem to work with the latest changes to both.