    πŸ”₯ Gemma 4 Prompt Director for LTX 2.3 β€” Cinematic AI Prompt Engine Workflow - v2.0
    NSFW

    🎬 Overview

    This custom ComfyUI node transforms prompt generation into a full cinematic directing system powered by Gemma 4.

    Instead of writing basic prompts, this node acts as an AI Director, automatically generating:

    • Structured cinematic prompts (LTX 2.3 optimized)

    • Timeline-based scenes (perfect for video generation)

    • Camera direction, environment, subject behavior

    • Sound design layers

    • Genre-aware trailer logic (Netflix-style)

    Whether you're creating short cinematic clips or full 20-second trailers, this node handles everything β€” from concept to final prompt.


    πŸš€ Key Features

    πŸŽ₯ Structured Cinematic Prompts

    • Clean sections:

      • [Camera]

      • [Environment]

      • [Subject]

      • [Timeline]

      • [Sound]

    • Optimized for LTX 2.3 video generation
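
    A hypothetical example of what the structured output might look like (illustrative only — the actual wording and detail the node generates will vary):

```text
[Camera] Slow dolly-in on a 35mm lens, shallow depth of field.
[Environment] Rain-slicked neon alley at night, fog drifting between signs.
[Subject] A woman in a red coat turns slowly toward the camera.
[Timeline] 0-5s establishing wide, 5-12s push-in, 12-20s close-up hold.
[Sound] Distant traffic hum, rain patter, low synth drone.
```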


    🎬 Auto Director System

    Fully automatic scene creation with multiple modes:

    • Smart Auto β†’ balanced, high-quality results

    • Full Auto β†’ complete scene generation

    • Chaos Auto β†’ maximum creativity & variation

    Generates automatically:

    • subjects & relationships

    • environments

    • camera style

    • actions & motion

    • sound design


    🎭 Subject Control

    Choose or randomize:

    • Female subject

    • Male subject

    • Hetero couple

    • Lesbian couple

    • Or let AI decide


    🌍 Expanded Environment Database

    Massive internal preset library:

    • Cinematic interiors

    • Urban & nightlife scenes

    • Nature & fantasy environments

    • Horror & liminal spaces

    • Action & disaster setups

    Auto-selected or manually controlled.


    🎬 Netflix-Style Trailer Mode (NEW)

    Generate full 20-second cinematic trailers with:

    • 5 structured timeline beats

    • Genre-based pacing

    • Dynamic camera progression

    • Sound design adapted to genre

    • Final hook shot

    Supported Genres:

    • Psychological Horror

    • Sci-Fi Thriller

    • Action Spectacle

    • Dark Fantasy

    • Romantic Drama

    • Crime / Mystery

    • Dystopian / Survival

    • And many more


    βš™οΈ Style Presets

    Structured modes include:

    • Structured Romance

    • Structured Power

    • Structured Action

    • Structured Horror

    • Structured Dreamlike

    • Structured Realism

    • Structured POV

    • Structured Epic

    • Structured Minimal


    🧠 Why This Node?

    This is not just a prompt generator.

    It is a:

    πŸ‘‰ Cinematic Prompt Engine
    πŸ‘‰ AI Scene Director
    πŸ‘‰ Video Pre-Production Tool

    Designed specifically for:

    • LTX 2.3 workflows

    • Text-to-video pipelines

    • High-end cinematic AI generation


    πŸ›  Installation Guide (Step-by-Step)

    1. Download the Node

    Download the file:

    gemma4_prompt_gen.py

    2. Place the File

    Move it into:

    ComfyUI/custom_nodes/Gemma4Prompt/

    If the folder doesn’t exist, create it manually.


    3. Verify Structure

    Your folder should look like:

    ComfyUI/
     └── custom_nodes/
          └── Gemma4Prompt/
               β”œβ”€β”€ __init__.py
               └── gemma4_prompt_gen.py

    ⚠️ Do NOT rename __init__.py
    ⚠️ File name must be exactly: gemma4_prompt_gen.py
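
    On Linux/macOS, the folder layout above can be created from the ComfyUI root like this (a sketch — the download location of gemma4_prompt_gen.py is yours to fill in):

```shell
# Create the node folder and the required __init__.py (an empty file is fine)
mkdir -p ComfyUI/custom_nodes/Gemma4Prompt
touch ComfyUI/custom_nodes/Gemma4Prompt/__init__.py
# Then copy the downloaded gemma4_prompt_gen.py into the same folder, e.g.:
# cp /path/to/downloads/gemma4_prompt_gen.py ComfyUI/custom_nodes/Gemma4Prompt/
```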


    4. Install Dependencies

    Make sure you have:

    • Python 3.10+

    • CUDA working (for GPU inference)

    • llama.cpp build with CUDA support


    5. Setup llama-server

    Download a CUDA-enabled build and run:

    llama-server.exe -m YOUR_MODEL.gguf -ngl 60 --ctx-size 4096

    Make sure it's running on:

    http://127.0.0.1:8080
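
    A quick way to confirm the server is reachable before launching ComfyUI is to poll llama-server's /health endpoint (a sketch; the URL assumes the default host and port shown above):

```python
import urllib.error
import urllib.request


def server_ready(url="http://127.0.0.1:8080/health", timeout=3):
    """Return True if llama-server answers its /health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    print("llama-server up:", server_ready())
```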

    6. Restart ComfyUI

    Launch ComfyUI again and check:

    • Node appears as Gemma4PromptGen

    • No errors in console


    7. Load Workflow

    Use your existing workflow or connect:

    • Instruction input

    • Optional environment / subject

    • Output prompt β†’ LTX pipeline
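
    Under the hood, the node talks to llama-server over HTTP; a minimal standalone request against the server's OpenAI-compatible chat endpoint could look like this (a sketch — the system prompt and parameters here are illustrative assumptions, not the node's actual internals):

```python
import json
import urllib.request

# Assumes llama-server's default host/port from step 5.
ENDPOINT = "http://127.0.0.1:8080/v1/chat/completions"


def build_payload(instruction, max_tokens=512):
    """Assemble an OpenAI-style chat request for llama-server."""
    return {
        "messages": [
            {"role": "system",
             "content": "You are a cinematic AI director. Answer with "
                        "[Camera]/[Environment]/[Subject]/[Timeline]/[Sound] sections."},
            {"role": "user", "content": instruction},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.8,
    }


def generate_prompt(instruction):
    """Send the request and return the generated prompt text."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(instruction)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```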


    🎬 Cinematic Clip

    • Mode: AUTO

    • Style: Structured

    • Auto Director: Smart Auto


    πŸŽ₯ Netflix Trailer

    • Style: Trailer

    • Genre: Psychological Horror (example)

    • Intensity: Balanced or High Impact

    • Sound: Auto by Genre


    πŸ”₯ Full Automatic Generation

    • Auto Director: Full Auto

    • Subject: Off

    • Environment: LLM decides


    πŸ’‘ Final Notes

    • Higher token limits = longer, richer prompts

    • Structured mode is best for LTX

    • Trailer mode is ideal for storytelling


    🎯 Result

    With this node, you move from:

    ❌ basic prompts
    ➑️ to
    πŸ”₯ AI-directed cinematic storytelling


    If you like it, drop a like ⭐
    and share your generations β€” I’d love to see what you create!


    Comments (13)

    dirtysem · Apr 9, 2026 · 1 reaction

    AttributeError: 'Gemma4PromptGen' object has no attribute '_vision_active'.

    It won't start. :(

    magine667 · Apr 9, 2026

    For Vision, you will need the mmproj file for the model.

    It is best to download "gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf" and then start llama-server with the following command:

    llama-server.exe -m 'path to your gemma4 model file' --mmproj 'path to your mmproj.gguf'

    Here you can find the model files:
    https://huggingface.co/nohurry/gemma-4-26B-A4B-it-heretic-GUFF/tree/main

    lanceshocker · Apr 9, 2026

    @magine667 I was having the same issue. Even when I had all the necessary models, it still gave me that error.

    I may not be understanding step 5 fully as I do not know how to launch the llama server or input that string of text. Do I open my CMD? llama.exe? In comfyui?

    magine667 · Apr 9, 2026

    @lanceshocker: The best option is to start llama-server in cmd.
    I use this command in cmd to start it:
    llama-server.exe -m models/gemma-4-26b-a4b-it-heretic.q4_k_m.gguf --mmproj /models/gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf --host 127.0.0.1 --port 8033

    In ComfyUI I must then change the port from 8080 to 8033, but this is not a real issue.

    So if you use the llama-server settings from ComfyUI, you can leave out the host and port parameters.

    AttributeError: 'Gemma4PromptGen' object has no attribute '_vision_active'

    dirtysem · Apr 10, 2026

    @magine667 The llama-server is installed and running in a separate tab on port 8080 and is working properly. The error is still present.

    P.S. I understand why this error occurs: the Models folder was moved out of the LLama folder. Place the Models folder back in the LLama folder.

    bobjane01539 · Apr 10, 2026

    I had this issue and found it was caused by ghost llama-server processes running in the background; I just closed them using Task Manager.

    weijingjing3624180 · Apr 9, 2026

    got prompt

    [Gemma4PromptGen] generation_mode=IMG2VIDEO β€” t2v_mode=False

    [Gemma4PromptGen] ⚠️ ltx_style_mode 'Off' legacy/non valido β†’ 'Structured'

    [Gemma4PromptGen] ⚠️ auto_director_mode 'Auto by Style' legacy β†’ 'Off'

    [Gemma4PromptGen] Vision enabled with mmproj: C:\models\gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf

    [Gemma4PromptGen] llama-server starting (vision enabled via gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf), waiting for health check...

    FETCH ComfyRegistry Data: 20/137

    [Gemma4PromptGen] βœ… llama-server started (17s) β€” vision enabled via gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf

    [Gemma4PromptGen] Image sent as base64 vision input

    FETCH ComfyRegistry Data: 25/137

    ============================================================

    GEMMA4 PROMPT GEN β€” 🎬 LTX 2.3 β€” video, cinematic arc + audio

    AUTO (generated + sending now):

    ============================================================

    ============================================================

    [Gemma4PromptGen] llama-server process released.

    Prompt executed in 33.93 seconds

    FETCH ComfyRegistry Data: 30/137

    Cannondale · Apr 9, 2026

    This doesn't actually work; it's largely a scam. The only good thing it does is give you a local interface via a web browser to generate the prompts, but actually getting it to work inside ComfyUI will never happen. It uses an incredibly convoluted method of requiring the URL link to the local interface AND another text encoder, which makes no sense since it's pulling everything from the external server to begin with. And when you do add that text encoder, it gives an error rather than creating a prompt.

    bobjane01539 · Apr 16, 2026

    Seems to work perfectly fine offline; works for me.

    Fferrett · Apr 29, 2026 · 1 reaction

    Learning how to integrate novel nodes can be a challenge. It took me about an hour to get this one running, and it does everything the creator claims. Don't give up. You need to run an instance of llama-server locally with an abliterated Gemma 4; an E4B Q4 GGUF and an aligned mmproj work fine. I like integrated nodes as well, but this does work fine.

    bobjane01539 · Apr 10, 2026

    Good work! I got v1 and v2 to work. For me, v2 is not generating much dialogue at all; v1 generates plenty of dialogue.

    Fferrett · Apr 28, 2026

    Took a bit to figure out, but it's working. I recommend better setup instructions for more adoption. Not a big fan of the distilled workflow; it seems to always fuzz out. Added it to the base LTX 2.3 i2v ComfyUI workflow and it's creative and interesting.

    Workflows
    LTXV 2.3

    Details

    Downloads
    453
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/8/2026
    Updated
    5/14/2026
    Deleted
    -

    Files

    Gemma4PromptDirectorForLTX2_v20.zip

    Mirrors