Hunyuan Video
Files marked "Kijai" are only for use with the Kijai nodes. You do not need them for Comfy Native.
Full Guide to picking the correct file above
Workflow for 8GB Card users
The uncensored llama text encoder will work with Comfy Native.
Using the Kijai-marked models with Comfy Native nodes will produce rainbow or black output.
I do not recommend the FP8 VAE unless you are trying to fit all models into GPU memory; see the guide for the 4090 full-GPU launch commands.
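The guide above has the exact launch commands; purely as an illustration of what "full GPU" means here, ComfyUI's `--gpu-only` flag keeps everything (text encoders, VAE, diffusion model) resident on the GPU, which is the situation where the smaller FP8 VAE starts to matter. The install path below is a placeholder:

```python
import subprocess

# Illustration only -- use the linked guide for the exact 4090 command.
# "--gpu-only" stores and runs everything on the GPU instead of offloading to CPU.
subprocess.run(
    ["python", "main.py", "--gpu-only"],
    cwd="/path/to/ComfyUI",  # placeholder path to your ComfyUI install
    check=True,
)
```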
Technical details regarding "Uncensored"
The text encoder used for Hunyuan Video is based on the llava-llama-3 8-billion-parameter LLM. The Intel vision-tuned model was used to refine the tokenized model, restoring over 5 million values.
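The actual restoration process isn't reproduced here, but the general idea of pulling values from a reference checkpoint back into a working one can be sketched like this (file names are placeholders, not the real checkpoints):

```python
from safetensors.torch import load_file, save_file

# Placeholder file names -- illustrative only, not the actual checkpoints used.
reference = load_file("llava-llama-3-8b-vision-tuned.safetensors")
working = load_file("hunyuan-text-encoder.safetensors")

restored = 0
for name, ref_tensor in reference.items():
    # Copy values back for tensors present in both files with matching shapes.
    if name in working and working[name].shape == ref_tensor.shape:
        working[name] = ref_tensor.to(working[name].dtype)
        restored += ref_tensor.numel()

print(f"copied {restored} values from the reference checkpoint")
save_file(working, "hunyuan-text-encoder-restored.safetensors")
```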
Description
Hunyuan Video
Note: Models marked "Kijai" include the full vision model and blocks; they only work with the Kijai nodes.
Use the Comfy Native models with the Comfy Native nodes.
Converted to safetensors.
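For anyone converting their own checkpoint, a minimal sketch assuming a plain PyTorch state dict (file names are placeholders):

```python
import torch
from safetensors.torch import save_file

# Placeholder file names -- point these at your own checkpoint.
state_dict = torch.load("pytorch_model.bin", map_location="cpu", weights_only=True)

# safetensors wants contiguous tensors; tied/shared weights need extra handling.
state_dict = {k: v.contiguous() for k, v in state_dict.items()}

save_file(state_dict, "model.safetensors")
```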
Comfy Native node users: do not use the Kijai TE; use the scaled version.
For CLIP-L they recommend using the full vision model. The BF16 version I uploaded will work with either Comfy Native or Kijai nodes.
GBlue has posted a working FP8 workflow using native Comfy nodes on a 12GB card.
Using the full vision models with Comfy Native nodes will produce rainbow or black output.
I have posted an FP8 VAE that works with Comfy Native, but it may take more time than the BF16/FP32 versions.
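If you're not sure which VAE file you ended up with, checking the tensor dtypes will tell you (minimal sketch; the file name is a placeholder):

```python
from collections import Counter
from safetensors import safe_open

# Placeholder file name -- point this at the VAE you downloaded.
with safe_open("hunyuan_video_vae.safetensors", framework="pt", device="cpu") as f:
    dtypes = Counter(str(f.get_tensor(key).dtype) for key in f.keys())

# FP8 files show torch.float8_e4m3fn (or float8_e5m2); the others show
# torch.bfloat16 / torch.float32.
print(dtypes)
```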
