Welcome to my Friendly LTX-2 T2V+I2V+Lipsync workflow
Less mess, more magic
UniVibe, the all-in-one Lipsync version with the HQ TTS VibeVoice model, is released.
New v1.2 brings simplified model loading plus quality and performance improvements.
LTX-2 is a new video generation model with 19B parameters under the hood. It is the first DiT-based (Diffusion Transformer) foundation model that generates synchronized audio and video simultaneously in a single pass. It supports native 4K resolution at up to 50 FPS, providing cinematic-grade fidelity suitable for professional VFX and film production, and it can generate clips of up to 10–20 seconds with consistent style and motion.
System requirements:
Minimum system requirements for 540p i2v and 720p t2v:
RTX 3000-series, 8GB+ VRAM, 45GB+ RAM, 8-core processor, SSD, latest ComfyUI
Optional low-VRAM optimization:
For systems with low VRAM, add the --reserve-vram ComfyUI parameter to run_nvidia_gpu.bat:
--reserve-vram 4 (or another value, in GB).
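For illustration, a minimal sketch of what run_nvidia_gpu.bat might look like with the flag added, assuming the default ComfyUI portable-install layout (the python_embeded path and --windows-standalone-build flag are the portable build's defaults; your paths may differ):

```shell
:: run_nvidia_gpu.bat (ComfyUI portable, sketch)
:: --reserve-vram 4 tells ComfyUI to keep ~4 GB of VRAM free for other processes
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --reserve-vram 4
pause
```

Lowering the reserved amount gives ComfyUI more VRAM to work with; raising it trades speed for stability on shared GPUs.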
Detailed tips and links to models are included in the workflow.
Workflow features:
Extremely user-friendly interface
Maximum performance and optimization from 8GB of VRAM: GGUF or 8-step distilled model with fp4 or fp8 text encoder + MultiGPU memory optimization
All-in-one: i2v, t2v, and interpolation
Convenient one-click mode switching
Generation time setting in seconds
LoRA support (up to 3)
Detailed tips and links to all necessary models
Manual random seed for complete control over generations
Thanks to the Lightricks Team
Original repo: GitHub
Description
One of the best LTX-2 workflows is now simply the friendliest one
Simplified model loading system. Choose the mode you need: Dev, Distilled, or GGUF. You can now combine the Dev model with a GGUF CLIP and vice versa
Optimizations for better quality and performance
All important nodes have been updated
Bugs and errors have been fixed
Updated detailed tips in the workflow
The latest ComfyUI and KJ nodes are required