CivArchive
    LTX 2.3 Unified Sampler (WIP) - v1.4.1
    NSFW
    Preview 116833463
    Preview 117081638

    LTX 2.3 Unified Sampler WIP

    -I2V
    -T2V
    -V2V Depth
    -V2V Outpaint

    Description

    v1.4.1

    Minor settings tweaks

    v1.4

    Added AIO workflow (T2V, I2V, T2V+AUDIO)

    v1.3

    "Full Sigmas on Stage1" False by default (faster)

    v1.2

    Added GGUF support

    LTX2_T2V_CRT_v1.3_GGUF

    v1.1

    "Full Sigmas on Stage1" option inside "Inference" subgraph

    FAQ

    Comments (43)

    Cloud_Ron · Jan 9, 2026
    CivitAI

    ComfyUI_LayerStyle seems to be broken, causing the VRAM cleanup node to fail to load; I fixed that by replacing it with a simple VRAM cleanup. But I'm still getting this error:

    LTXVEmptyLatentAudio

    'VAE' object has no attribute 'latent_frequency_bins'

    no clue how to fix this.

    pgc
    Author
    Jan 9, 2026

    None of this is broken. I can't tell you what's wrong, but I'm pretty sure Grok or Gemini can,

    as long as you post the console error (not the message popup).

    GamerGTV2 · Jan 9, 2026

    @Cloud_Ron Update all custom nodes. This helped me get the error to stop. Currently seeing if I can get some output. Been fighting with this for 3 hours.

    pgc
    Author
    Jan 9, 2026

    @GamerGTV2 Hope you figure it out. What was your issue?

    GamerGTV2 · Jan 9, 2026

    @pgc The biggest fight was getting llama-cpp to install, but this workflow is working very well now. Also, the llm_gguf folder was not created automatically in the models folder, so it took me a minute to figure out where to put the Gemma text encoder for the prompt enhancer.

    pgc
    Author
    Jan 9, 2026

    @GamerGTV2 Yeah, llama.cpp needs to be compiled. There are some prebuilt wheels, but finding one that matches your Windows/Linux, Python, torch, and CUDA versions is way more tedious than compiling it with a simple command; it takes about 10 minutes to compile.

    Yes, the llm_gguf folder needs to be created manually; I will add this to the description 👍
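    The "simple command" pgc mentions is presumably a source build of llama-cpp-python via pip; a minimal sketch, assuming you want the CUDA backend (the CMAKE_ARGS mechanism is the documented way to pass build flags):

    ```shell
    # Build llama-cpp-python from source (needs a C/C++ compiler and CMake).
    # GGML_CUDA=on enables GPU offload; drop the flag for a CPU-only build.
    CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir

    # On Windows (PowerShell), set the variable first:
    #   $env:CMAKE_ARGS = "-DGGML_CUDA=on"
    #   pip install llama-cpp-python --no-cache-dir
    ```

    The --no-cache-dir flag forces a fresh build rather than reusing a cached wheel compiled with different flags.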

    GamerGTV2 · Jan 9, 2026

    @pgc thanks a ton!

    Cloud_Ron · Jan 10, 2026

    Thank you, guys. Updating all did the trick, and it's running fine on a 5080 with 16 GB VRAM.

    DJLegends · Jan 13, 2026

    @Cloud_Ron I have the same GPU, so how are the render times?

    Cloud_Ron · Jan 14, 2026

    @DJLegends It's taking 825 s on T2V, but the AIO is not optimised for mid-range VRAM and takes quite long depending on the input resolution: a 600x690 input upscaled to 896x1024 takes 2422 s, while a 128x128 input upscaled to 960x960 takes 517 s. The same T2V generation takes 248 s on CUDA 13, but the AIO is not working on CUDA 13 yet.

    Infinite_Monkeys · Jan 9, 2026
    CivitAI

    The zip file seems to only contain a .png of the workflow, not the .json file.

    pgc
    Author
    Jan 9, 2026

    There is no difference: with a PNG you have the benefit of seeing the workflow without having ComfyUI running. Just drag and drop it like a JSON.

    Infinite_Monkeys · Jan 11, 2026

    Thanks. I should have realized.

    GamerGTV2 · Jan 9, 2026 · 1 reaction
    CivitAI

    I'll say this is the only workflow I've been able to make work well on my setup: RTX 4090 and 64 GB of RAM. Thank you OP!

    Champlooy · Jan 10, 2026

    I have the same specs, but it takes 230 seconds to generate a 2:3 (1 megapixel / 960x960 / 6 seconds) video.
    How about you?

    GamerGTV2 · Jan 10, 2026

    It can vary, between 230 and 300 seconds. But it also depends on what is being generated and the length of the video. I was just stoked I got a workflow to work. I look forward to seeing further developments.

    Champlooy · Jan 10, 2026

    @GamerGTV2 OK, thanks. I was just wondering because I think it's supposed to be much quicker, but yeah, same here.

    pgc
    Author
    Jan 10, 2026

    In the latest version I've set "Full Sigmas on Stage1" to False by default.

    This won't add extra steps for stage 2, so it should be faster than before.

    Champlooy · Jan 11, 2026

    @pgc Thanks, but I was already using that version.

    bibleforall777303 · Jan 9, 2026
    CivitAI

    Error: "Failed to load model from file: C:\ComfyUI-Easy-Install-Windows\ComfyUI-Easy-Install\ComfyUI\models\llm_gguf\huihui-Gemma-3n-E4B-it-abliterated-Q4_0.gguf"

    Even though I created a folder called ComfyUI\models\llm_gguf\ and put huihui-Gemma-3n-E4B-it-abliterated-Q4_0.gguf there.

    How do I fix this?

    pgc
    Author
    Jan 9, 2026

    Do you have llama.cpp installed?

    @pgc No. How do I do that? Where do I download it from, and what folder do I install it in?

    KaindeMort · Jan 10, 2026
    CivitAI

    For some reason I get this type of error with almost every option.


    VHS_VideoCombine

    An error occured in the ffmpeg subprocess: [aac @ 0x56506cd566c0] Input contains (near) NaN/+-Inf [aost#0:1/aac @ 0x56506cd56080] [enc:aac @ 0x56506cd56640] Error submitting audio frame to the encoder [aost#0:1/aac @ 0x56506cd56080] [enc:aac @ 0x56506cd56640] Error encoding a frame: Invalid argument [aost#0:1/aac @ 0x56506cd56080] Task finished with error code: -22 (Invalid argument) [aost#0:1/aac @ 0x56506cd56080] Terminating thread with return code -22 (Invalid argument)




    Only H264 sometimes gets further, but then I get a black screen in the output and audio noise.
    Any thoughts? I've already updated Comfy, recompiled Sage, moved to CUDA 13, and installed a fresh ffmpeg with NVENC support.

    entllojs525 · Jan 11, 2026
    CivitAI

    LTXVEmptyLatentAudio

    'VAE' object has no attribute 'latent_frequency_bins'

    how to fix this?

    pgc
    Author
    Jan 11, 2026

    Make sure you are using the right VAE models; the audio and video VAEs are different.

    entllojs525 · Jan 11, 2026

    @pgc I updated KJNodes and it works now :)

    DJLegends · Jan 14, 2026
    CivitAI

    Trying with the distilled version, and the video quality has tons of image distortions :(

    pgc
    Author
    Jan 14, 2026

    @GamerGTV2 Try using --reserve-vram 4 to lower your VRAM usage. Unless you use very high dimensions or lengths, you should not have this type of issue. If ComfyUI completely crashes, it's probably the RAM getting full; an OOM doesn't require restarting Comfy, at least if you just get an error popup.

    You can also try the non-distilled model with distilled LoRAs at a low strength like 0.4, CFG 1, and around 10 steps.
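    Note that --reserve-vram is a ComfyUI launch argument, not a workflow setting, so it goes on the command line (or in the portable build's .bat launcher). A minimal sketch; the Windows path shown is the portable build's default layout:

    ```shell
    # Launch ComfyUI while keeping ~4 GB of VRAM free for the OS and other apps.
    # Adjust the value to your card (DJLegends used 2 on a 16 GB GPU below).
    python main.py --reserve-vram 4

    # Windows portable build equivalent:
    #   .\python_embeded\python.exe -s ComfyUI\main.py --reserve-vram 4
    ```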

    GamerGTV2 · Jan 14, 2026

    @pgc Sorry, I was speaking in general about the full dev model, and honestly I was using a different workflow with it. My comment doesn't necessarily apply to this workflow; I'll delete it.

    DJLegends · Jan 19, 2026

    @pgc Okay, I used --reserve-vram 2 since I have 16 GB VRAM; looks like your latest version works!

    DJLegends · Jan 19, 2026
    CivitAI

    Finally getting results... HOLY FUCK, THE DISTILLED MODEL LOOKS SO GOOD!!!

    Did you also try fp8 without distill?

    pgc
    Author
    Mar 4, 2026 · 1 reaction

    @TheKnightsWhoSayNI This workflow needs to be updated https://huggingface.co/Kijai/LTXV2_comfy

    There are two new model loaders, and the previous CLIP loader should be replaced by a classic dual CLIP loader.

    An LTXV Chunk FeedForward node could also be added for longer videos.

    @pgc Oh, good to know, thank you for replying.

    GamerGTV2 · Jan 22, 2026
    CivitAI

    Here is a good link I've used to get the prompt enhancer working with llama-cpp-python

    I've installed the ComfyUI-JoyCaption custom node.

    Then copy llama_cpp_install.py to the .\python_embeded directory.

    Then run .\python_embeded\python.exe llama_cpp_install.py

    Source: https://1038lab.github.io/ComfyUI-JoyCaption/llama_cpp_install/llama_cpp_install.html

    jb23 · Jan 23, 2026

    What node did you use after setting this up? From what I can tell all the JoyCaption nodes require an image.

    GamerGTV2 · Jan 23, 2026 · 1 reaction

    @jb23 I didn't use the JoyCaption node. I only used it to obtain the python script installer for llama_cpp

    llama_cpp_install.py

    I placed this script file into my python_embeded folder and ran it from there. It takes about 15-20 minutes to compile, so it requires some patience, but it will complete.

    This helps get the Prompt Enhancer working. llama_cpp was the most time consuming and frustrating thing to figure out.

    jb23 · Jan 23, 2026 · 1 reaction

    @GamerGTV2 Thanks. I figured it out. I had everything set up except for the gguf models in the llm_gguf folder, which was making it so the model field was inaccessible. I've got it working now. Thanks so much for posting this!

    berezka_dude · Feb 23, 2026

    Where can I get install_llama_official.py?

    GamerGTV2 · Mar 12, 2026

    I updated my comment to remove the word "official". The Python script that needs to be placed in the python_embeded directory is: llama_cpp_install.py

    DJLegends · Jan 23, 2026 · 3 reactions
    CivitAI

    Any chance of getting a regular img2vid workflow?

    TheKnightsWhoSayNI · Feb 13, 2026
    CivitAI

    God Bless you @pgc

    jd666 · Feb 18, 2026
    CivitAI

    Any way you can update it for the new guidance nodes?

    Workflows
    LTXV
    by pgc

    Details

    Downloads
    1,138
    Platform
    CivitAI
    Platform Status
    Deleted
    Created
    1/9/2026
    Updated
    4/27/2026
    Deleted
    4/17/2026

    Files

    ltx2WorkflowT2V_v13.zip

    Mirrors

    CivitAI (1 mirror)

    ltx2WorkflowT2V_v12.zip

    Mirrors

    CivitAI (1 mirror)

    ltx2WorkflowT2V_v10.zip

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)

    ltx2WorkflowT2V_v14.zip

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)

    ltx2WorkflowT2V_v141.zip

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)