Update 2026-05-13: LTX 2.3 All-in-One v3.0 workflow published
VideoFlow LTX 2.3 distilled 1.1 All-in-One v3.0
New Features:
Text-to-Video Support with a pre-configured set of LoRAs for creating photorealistic videos.
Image-to-Video Support for both first and last frame.
Optional Audio Integration (audio-to-video), with the ability to extract the voice from a recording (file or recorded clip) to remove background noise.
Consistent Character Voice through voice cloning with just a 5-second reference audio (file or recorded clip).
Video Filters for adjusting brightness, contrast, saturation, sharpness, blur, and enhancing edges and details.
Film grain for a cinematic or analog effect.
50fps Support via frame interpolation.
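The 50fps option is implemented via frame interpolation. As an illustration only (the workflow almost certainly uses a learned interpolator rather than this naive blend), doubling a frame rate looks roughly like:

```python
import numpy as np

def interpolate_frames(frames):
    """Double the frame rate by inserting a blended frame between each
    pair of neighbors. Real interpolators use motion estimation; this
    linear blend only illustrates the idea."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        blend = (a.astype(np.float32) + b.astype(np.float32)) / 2
        out.append(blend.astype(a.dtype))
    out.append(frames[-1])
    return out

# A clip of n frames becomes 2n - 1 frames at double the frame rate.
clip = [np.full((4, 4, 3), i, dtype=np.uint8) for i in range(5)]
doubled = interpolate_frames(clip)
print(len(doubled))  # 9
```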
Improvements:
Now using LTX 2.3 distilled 1.1, resulting in better emotions, movements, and audio.
Faster, less memory-intensive color correction.
More explanations and guidance integrated.
Fixes:
Audio and video are now always perfectly in sync.
Resolution input (video dimensions) for image-to-video generation now works properly.
Update 2026-04-14: LTX 2.3 I2V workflow updated
VideoFlow LTX 2.3 distilled I2V v2.0
VideoFlow 2.0 is here, bringing major performance upgrades, better quality, and more flexibility to your workflow.
Key Improvements:
Much Faster Generation: Thanks to improved samplers and schedulers, videos generate approximately twice as fast as in version 1.0.
Higher Quality Output: Despite the speed boost, image quality, audio quality, and prompt adherence have all been significantly improved.
Flexible Model Support: You can now freely choose between multiple model types:
Checkpoint
GGUF UNet
Diffusion model
Optimized for Low VRAM Systems: With GGUF support, VideoFlow now runs much more efficiently on systems with limited GPU memory.
Optional Sampler Preview: Disable the sampler preview to further reduce generation time.
Improved Usability: Additional guidance and hint texts help you get the most out of the workflow.
Update 2026-03-15: LTX 2.3 I2V workflow added
VideoFlow LTX 2.3 distilled I2V v1.0
This workflow provides an easy-to-use image-to-video solution for LTX 2.3, designed to work seamlessly with the distilled LoRA model. It focuses on high-quality, realistic output, with the first-stage scheduler's sigma values finely optimized for best performance.
Subgraphs are used to keep the main workflow streamlined and easy to navigate. A live preview is displayed during generation, allowing you to monitor progress and stop the process early if desired. Additionally, the first-stage video can be decoded for quick previewing. This feature lets you watch a lower-resolution version of the final video and cancel immediately if the result doesn’t meet expectations.
As the distilled LoRA already delivers impressive quality in the first stage, you can skip the second stage entirely if your hardware has limited performance. An optional color-correction node is included to compensate for LTX’s tendency to introduce subtle color and lighting shifts, ensuring consistent visual quality.
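As a rough sketch of what such a color-correction step does (this is a simple per-channel mean/std match against the input image, not the actual node's implementation):

```python
import numpy as np

def match_color(frame, reference):
    """Shift each channel's mean and standard deviation toward the
    reference image to undo subtle color/lighting drift. A stand-in
    for the workflow's color-correction node, not its actual code."""
    frame = frame.astype(np.float32)
    ref = reference.astype(np.float32)
    out = np.empty_like(frame)
    for c in range(frame.shape[-1]):
        f, r = frame[..., c], ref[..., c]
        std = f.std() if f.std() > 1e-6 else 1.0
        out[..., c] = (f - f.mean()) / std * r.std() + r.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)  # dummy output frame
reference = (frame + 50).astype(np.uint8)               # dummy input image
corrected = match_color(frame, reference)
```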
Update 2025-08-24: Wan 2.2 I2V workflow added
VideoFlow Wan 2.2 I2V v1.0
VideoFlow is now fully optimized for Wan 2.2. It supports resolutions from 480p up to 720p, with the option to upscale smoothly to 1440p at 32fps. The process is accelerated by integrating Lightning LoRA during the final two-thirds of the generation steps, ensuring faster results without compromising quality. Importantly, Lightning LoRA does not influence the initial generation steps, preserving natural and fluid movements throughout the video. SageAttention with Triton is supported but not required. Instructions on how to set up and use the workflow are included within the workflow itself.
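As a sketch of the step split described above (the function name and rounding are my own; the workflow wires this up through its sampler nodes):

```python
def split_steps(total_steps, lora_fraction=2 / 3):
    """Split sampling steps so Lightning LoRA is only active for the
    final portion; the first steps run without it so the initial
    motion stays natural and fluid."""
    lora_steps = round(total_steps * lora_fraction)
    return total_steps - lora_steps, lora_steps

base, lora = split_steps(9)
print(base, lora)  # 3 steps without LoRA, 6 with
```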
VideoFlow Wan 2.1 I2V v1.0
This image-to-video workflow is designed to generate smooth, realistic videos at 32 fps with a strong emphasis on fast, high-resolution output. At least 16 GB of VRAM is recommended for optimal performance. For additional speed improvements, you may also install SageAttention and Triton, though these are optional.
It's fast 🚀!
Sample videos were rendered at 768 × 1152 resolution and 16 fps, consisting of 81 frames, each video taking about 6 minutes to generate. The upscaling and frame interpolation to 1536 × 2304 resolution and 32 fps took approximately another 6 minutes on an RTX 4080 with 16 GB VRAM. Lower resolutions render even faster.
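For reference, the frame counts and durations above work out as follows (assuming the interpolator inserts one frame between each pair of originals):

```python
frames = 81
fps = 16
duration = frames / fps            # 5.0625 s initial clip
# One inserted frame per adjacent pair gives 2n - 1 frames.
frames_32fps = 2 * frames - 1      # 161 frames after interpolation
duration_32fps = frames_32fps / 32 # ~5.03 s, so the length is preserved
print(duration, frames_32fps, round(duration_32fps, 2))
```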
Key configuration for the sample videos:
Video model: Wan2.1 SkyReels V2 I2V 14B 720P
LoRA: Lightx2v
Steps: 4
Sampler: dpmpp_sde_gpu
Scheduler: beta
💡Comprehensive usage details and instructions are provided within the workflow itself.
Sample images for input were created with my PhotoFlow workflow.
The workflow download contains all sample videos, including the input image with its own embedded workflow, the initially generated video, and its upscaled counterpart, allowing for convenient side-by-side comparison.
Leave a 👍 if you like the workflow 🙂.
Description
This workflow provides an easy-to-use image-to-video solution for LTX 2.3, designed to work seamlessly with the distilled LoRA model. It focuses on high-quality, realistic output, with the first-stage scheduler's sigma values finely optimized for best performance.
Subgraphs are used to keep the main workflow streamlined and easy to navigate. A live preview is displayed during generation, allowing you to monitor progress and stop the process early if desired. Additionally, the first-stage video can be decoded for quick previewing. This feature lets you watch a lower-resolution version of the final video and cancel immediately if the result doesn’t meet expectations.
As the distilled LoRA already delivers impressive quality in the first stage, you can skip the second stage entirely if your hardware has limited performance. An optional color-correction node is included to compensate for LTX’s tendency to introduce subtle color and lighting shifts, ensuring consistent visual quality.
FAQ
Comments (25)
OMG 😍 thnx a lot for video workflow
Thanks for the workflow! Works pretty well straight out-of-the-box. As usual I am very grateful for your detailed notes about where to get and save the files, and the usage of the workflow.
Question: have you tried using any other loras with this? Any tips about mixing loras and lora strength for LTX2.3? I tried the SexGod lora https://civitai.com/models/2308157?modelVersionId=2773429 and it created some ugly output.
Actually, I haven't tried any LoRA yet.
Awesome workflow❤️
For anyone running the LTX 2.3 v1.0 workflow, please note that inside the "Load Models" subgraph the VIDEO VAE is not connected to the output. So you've got to make that connection for the workflow to work.
I just double-checked that and can't confirm it. Otherwise, others would have reported it already. I'm using ComfyUI v0.17.2.
EDIT: 11 hours ago v0.18.0 was released. I updated and can now confirm the issue. So something changed in ComfyUI or comfyui-kjnodes that breaks the workflow.
I updated the workflow accordingly. The file was replaced. It works in ComfyUI v0.17.2 and v0.18.0.
@ai839 Thank you!
For some reason the workflow with the minor bug was also rather slow for me on my 4090 (30-60 minutes per render with the default settings), but I'm sure that has nothing to do with the workflow itself and is instead due to certain dependencies in my own setup.
It's a very nice workflow and I will definitely circle back to it next chance I get, so thank you very much :) !
Wow, a comfy workflow for a new model that actually worked on the first try. That's gotta be a first.
Another amazing Workflow that works on the first try, thank you very much!
Hi, could you use two reference images in the video? I keep seeing a Batman appear out of nowhere lol
Are you referring to the Wan examples? All the workflows start with only one image as the initial frame. The Batman example is purely prompt-based ;).
However, I've also experimented with using multiple reference images in LTX 2.3. For instance, the LTXVImgToVideoInplaceKJ node allows you to add multiple references and specify which frames they correspond to, for example, the first and last frame.
When I used both the first and last frame, the final second of the video always turned out messy, so you need to trim it off.
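Trimming that final second amounts to dropping the last fps frames before encoding; a minimal sketch with dummy frames:

```python
def trim_last_second(frames, fps):
    """Drop the final second of a clip, e.g. to cut off the messy tail
    LTX produces when both first and last frames are pinned."""
    if len(frames) <= fps:
        return frames  # too short to trim a full second
    return frames[:-fps]

clip = list(range(81))  # 81 dummy frames at 16 fps
trimmed = trim_last_second(clip, 16)
print(len(trimmed))  # 65
```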
I have a 5060 Ti 16 GB. I keep running into OOM at the Stage 1 sampler. Reducing the image size to 1 MP does not help. Not sure what's wrong with my setup.
Hi, I made some tweaks in the workflow and posted a demo video: https://civitai.com/images/125268972
The workflow is embedded in the video. Just drag and drop it into ComfyUI. I'd really appreciate it if you could give it a try, or if anyone else reading this who has 16 GB of VRAM could do so. Unfortunately, I can't test it myself.
How much RAM do you have?
I have 32 GB RAM. Your demo video works, all the way through; it took about 12.5 minutes with my setup. Thank you. This demo is 1.0 MP and 5 seconds long. I am going to try a longer clip next.
I was having a similar problem until I ran ComfyUI with --reserve-vram 5, then I got it to work (with both this and the low VRAM workflow from the video).
@grfx9432 Thank you very much for sharing this information!
https://civitai.com/models/2448150/ltx-23?modelVersionId=2753250
I guess that checkpoint LoRA works for making LTX videos, or am I assuming wrong? If I'm correct, does your workflow work to make videos with that distilled-version LoRA checkpoint?
Could you explain, creator?
Yes, the workflow uses this LoRA. The recommended download link and filename differ, but it is the same model. If you use the distilled checkpoint, the LoRA is already merged into it, and you have to bypass the LoRA node in the subgraph.
Having a little trouble with this.
RuntimeError: split_with_sizes expects split_sizes to sum exactly to 7680 (input tensor's size at dimension 2), but got split_sizes=[4096, 2048]
This looks like one of the files is wrong. Maybe you accidentally used an LTX 2 model, VAE, or LoRA instead of version 2.3. Double-check each selection in the Load Models node.
If this is not the reason, make sure ComfyUI and all installed custom nodes are up to date. Maybe your ComfyUI version doesn't know LTX 2.3 yet.
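For background, this error means a tensor dimension is being split into chunks that don't add up to its size, which is why a mismatched file changes the numbers. A plain-Python mimic of the check, using the sizes from the error message:

```python
def split_with_sizes(xs, sizes):
    """Mimic the shape check behind torch's split_with_sizes: the
    chunk sizes must sum exactly to the length being split."""
    if sum(sizes) != len(xs):
        raise RuntimeError(
            f"split_with_sizes expects split_sizes to sum exactly to "
            f"{len(xs)}, but got split_sizes={sizes}")
    out, i = [], 0
    for s in sizes:
        out.append(xs[i:i + s])
        i += s
    return out

# 4096 + 2048 = 6144, but the dimension holds 7680 values -> error,
# exactly as when a wrong VAE/LoRA changes the expected channel count.
try:
    split_with_sizes(list(range(7680)), [4096, 2048])
except RuntimeError as e:
    print(e)
```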
@ai839 Thank you very much! My dumbass had the wrong video and audio VAEs.
Best workflow, thank you <3
A quick heads-up 📢: There will be an update to the LTX 2.3 workflow in the coming days. Videos will then be generated much faster, without any noticeable loss in quality. Follow me so you don’t miss the update, and support my work by giving it a like. Thank you very much!
The new version is here, and it’s much faster and even better than version 1.0 👀🚀! Thank you so much for the over 2,000 downloads of version 1.0 in such a short time 🥰! Have fun and Happy Easter 🐰!
