    LTX 2.3 I2V 2.0 | Three-Stage Cinematic Refinement Workflow - v1.0

    This workflow is designed for LTX 2.3 image-to-video generation with a three-stage cinematic refinement pipeline. Its main purpose is to take a single input image as the first-frame visual anchor, generate a stable video from it, then improve the output through multiple latent refinement and upscaling stages so the final result has stronger texture, cleaner motion, and a more polished cinematic look.

    Compared with a basic image-to-video workflow, this version is not built around a single generation pass. It uses a staged structure: first, the input image is prepared and converted into video conditioning; second, LTX 2.3 generates the initial motion and scene continuation; third, the workflow performs latent upscaling and additional refinement passes to improve visual quality. This makes it more suitable for creators who care about final texture, character consistency, detail density, and publishable video output.
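    The staged structure can be pictured in plain Python. This is an illustrative sketch only: the real workflow is a ComfyUI node graph, and every function name below is a hypothetical stand-in for a node group, not an actual API.

```python
# Illustrative staging only; each function is a hypothetical stand-in for a
# ComfyUI node group, not a real API.

def stage1_base_sampling(conditioning, steps=20):
    # Build the base video structure from the first-frame conditioning.
    return {"latent": "base", "steps_done": steps}

def stage2_latent_refine(latent, steps=8):
    # Upsample the latent and re-denoise lightly to add detail.
    latent["latent"] += "+refined"
    latent["steps_done"] += steps
    return latent

def stage3_final_polish(latent, steps=4):
    # Final low-noise pass before tiled VAE decoding.
    latent["latent"] += "+polished"
    latent["steps_done"] += steps
    return latent

out = stage3_final_polish(
    stage2_latent_refine(
        stage1_base_sampling({"first_frame": "input.png"})))
print(out)  # {'latent': 'base+refined+polished', 'steps_done': 32}
```

    The point is the data flow: each stage receives the previous stage's latent and spends a smaller step budget on it, so later stages adjust detail rather than restructure the scene.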

    The workflow uses LTX 2.3 as the main video generation backbone, with image preprocessing, LTXVImgToVideoConditionOnly, EmptyLTXVLatentVideo, audio latent routing, custom sampler stages, manual sigma control, tiled VAE decoding, audio decoding, CreateVideo, and final video saving. The source image is resized and compressed into a model-friendly input before entering the video conditioning stage. This helps the model keep the original composition, character identity, scene layout, and lighting direction while still generating motion.
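    The resize step matters because video diffusion models accept only certain input sizes. As a sketch of what "model-friendly" typically means here (assuming the common LTX-Video constraints of width and height divisible by 32 and a frame count of the form 8n + 1 — check the model card for the exact values):

```python
def snap_dimensions(width, height, multiple=32):
    # Round each dimension down to the nearest accepted multiple,
    # never going below one block.
    snap = lambda x: max(multiple, (x // multiple) * multiple)
    return snap(width), snap(height)

def snap_frame_count(frames):
    # Many LTX-style video models expect frame counts of the form 8n + 1.
    return max(9, ((frames - 1) // 8) * 8 + 1)

print(snap_dimensions(1280, 721))  # (1280, 704)
print(snap_frame_count(100))       # 97
```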

    A key advantage of this workflow is the three-stage rendering logic. The first sampling stage builds the base video structure. The following stages use LTXVLatentUpsampler and additional sampler passes to push the result further, giving the video more detail and stronger visual finish. Instead of simply increasing resolution at the end, the workflow refines the latent video itself, which can help reduce softness, improve edges, and make the final frames feel less like a rough preview.
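    "Refining the latent video itself" means enlarging the latent grid before the next denoising pass, rather than resizing decoded frames at the end. A toy 2x nearest-neighbour upsample on a 2-D grid shows the idea (real video latents carry extra batch, channel, and frame dimensions, and LTXVLatentUpsampler is a learned module, not nearest-neighbour):

```python
def upsample_latent_2x(latent):
    # Nearest-neighbour 2x upsample of a 2-D grid: duplicate every value
    # horizontally, then duplicate every row vertically.
    wide = [[v for v in row for _ in (0, 1)] for row in latent]
    return [row for row in wide for _ in (0, 1)]

latent = [[0.1, 0.2],
          [0.3, 0.4]]
up = upsample_latent_2x(latent)
print(len(up), len(up[0]))  # 4 4
```

    Because the enlarged grid is re-denoised afterwards, the extra pixels get real synthesized detail instead of the interpolation blur a frame-level resize would leave behind.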

    The workflow also uses manual sigma schedules and multiple SamplerCustomAdvanced routes. This gives the pipeline a more controlled denoising behavior than a standard one-click sampler. For image-to-video work, this matters because excessive randomness can cause identity drift, unstable clothing, messy background motion, or inconsistent lighting. The staged sampler design is meant to preserve the source image while gradually improving motion and image quality.
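    Manual sigma control usually means computing one full noise schedule and handing each sampler pass a contiguous slice of it, so later passes only re-denoise lightly. A minimal sketch of that idea using a Karras-style schedule (the step counts and split points are illustrative, not the workflow's actual values):

```python
def karras_sigmas(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    # Karras-style schedule: interpolate linearly in sigma**(1/rho) space,
    # which concentrates steps at low noise levels.
    mn, mx = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return [(mx + i / (n - 1) * (mn - mx)) ** rho for i in range(n)]

sigmas = karras_sigmas(30)

# Stage 1 gets the high-noise region (overall structure); stages 2 and 3
# only the low-noise tail (detail), which limits identity drift.
stage1, stage2, stage3 = sigmas[:18], sigmas[18:25], sigmas[25:]

print(len(stage1), len(stage2), len(stage3))  # 18 7 5
```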

    The example prompt in the workflow is structured like a cinematic dialogue shot: stable character placement, clear left-right positioning, fantasy atmosphere, rooftop observatory, magical objects, soft lighting, subtle gestures, and continuous scene logic. This shows that the workflow is especially suitable for character-driven AI video, short fantasy clips, dialogue scenes, anime-to-video tests, product-style cinematic shots, and social media video demonstrations.

    This workflow is useful for Civitai creators, RunningHub publishers, YouTube tutorial makers, Bilibili demonstrations, and anyone who wants a stronger LTX 2.3 image-to-video pipeline than a simple first-frame animation setup. If you want to see how the first image is prepared, how the three refinement stages are connected, and how the final polished video is exported, watch the full tutorial video linked below.

    ⚙️ Try the Workflow Online

    👉 Workflow: https://www.runninghub.ai/post/2050910752227250177/?inviteCode=rh-v1111

    Open the link above to run the workflow directly online and view the generation results in real time.

    If the results meet your expectations, you can also deploy it locally for further customization.

    🎁 Fan Benefits: Register now to get 1,000 points, plus 100 points per daily login — enjoy 4090-level performance and 48 GB of memory!

    📺 Bilibili Updates (Mainland China & Asia-Pacific)

    If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.

    📺 Bilibili Video: https://www.bilibili.com/video/BV1uhRyBGEFi/

    I will continue updating model resources on Quark Drive:

    👉 https://pan.quark.cn/s/20c6f6f8d87b

    These resources are mainly prepared for local users, making creation and learning more convenient.



    Workflows
    LTXV 2.3

    Details

    Downloads: 38
    Platform: CivitAI
    Platform Status: Available
    Created: 5/11/2026
    Updated: 5/14/2026
    Deleted: -

    Files

    ltx23I2V20ThreeStage_v10.zip

    Mirrors

    CivitAI (1 mirror)