Overview
This custom ComfyUI node transforms prompt generation into a full cinematic directing system powered by Gemma 4.
Instead of writing basic prompts, this node acts as an AI Director, automatically generating:
Structured cinematic prompts (LTX 2.3 optimized)
Timeline-based scenes (perfect for video generation)
Camera direction, environment, subject behavior
Sound design layers
Genre-aware trailer logic (Netflix-style)
Whether you're creating short cinematic clips or full 20-second trailers, this node handles everything, from concept to final prompt.
Key Features
Structured Cinematic Prompts
Clean sections:
[Camera][Environment][Subject][Timeline][Sound]
Optimized for LTX 2.3 video generation
Auto Director System
Fully automatic scene creation with multiple modes:
Smart Auto: balanced, high-quality results
Full Auto: complete scene generation
Chaos Auto: maximum creativity & variation
Generates automatically:
subjects & relationships
environments
camera style
actions & motion
sound design
Subject Control
Choose or randomize:
Female subject
Male subject
Hetero couple
Lesbian couple
Or let AI decide
Expanded Environment Database
Massive internal preset library:
Cinematic interiors
Urban & nightlife scenes
Nature & fantasy environments
Horror & liminal spaces
Action & disaster setups
Auto-selected or manually controlled.
Netflix-Style Trailer Mode (NEW)
Generate full 20-second cinematic trailers with:
5 structured timeline beats
Genre-based pacing
Dynamic camera progression
Sound design adapted to genre
Final hook shot
Supported Genres:
Psychological Horror
Sci-Fi Thriller
Action Spectacle
Dark Fantasy
Romantic Drama
Crime / Mystery
Dystopian / Survival
And many more
Style Presets
Structured modes include:
Structured Romance
Structured Power
Structured Action
Structured Horror
Structured Dreamlike
Structured Realism
Structured POV
Structured Epic
Structured Minimal
Why This Node?
This is not just a prompt generator.
It is a:
Cinematic Prompt Engine
AI Scene Director
Video Pre-Production Tool
Designed specifically for:
LTX 2.3 workflows
Text-to-video pipelines
High-end cinematic AI generation
Installation Guide (Step-by-Step)
1. Download the Node
Download the file:
gemma4_prompt_gen.py
2. Place the File
Move it into:
ComfyUI/custom_nodes/Gemma4Prompt/
If the folder doesn't exist, create it manually.
3. Verify Structure
Your folder should look like:
ComfyUI/
└── custom_nodes/
    └── Gemma4Prompt/
        ├── __init__.py
        └── gemma4_prompt_gen.py
⚠️ Do NOT rename __init__.py
⚠️ File name must be exactly: gemma4_prompt_gen.py
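If you'd rather not eyeball the folder tree, a small script can check it for you. This is just a convenience sketch; `verify_node_install` is a hypothetical helper, not part of the node, and the ComfyUI root path is whatever your install uses:

```python
import os

def verify_node_install(comfy_root):
    """Return the list of required files missing from the Gemma4Prompt node folder."""
    node_dir = os.path.join(comfy_root, "custom_nodes", "Gemma4Prompt")
    required = ["__init__.py", "gemma4_prompt_gen.py"]
    return [f for f in required if not os.path.isfile(os.path.join(node_dir, f))]

# Example: verify_node_install("ComfyUI") returns [] when everything is in place,
# or the names of any missing files otherwise.
```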
4. Install Dependencies
Make sure you have:
Python 3.10+
CUDA working (for GPU inference)
llama.cpp build with CUDA support
5. Setup llama-server
Download a CUDA-enabled build and run:
llama-server.exe -m YOUR_MODEL.gguf -ngl 60 --ctx-size 4096
Make sure it's running on:
http://127.0.0.1:8080
6. Restart ComfyUI
Launch ComfyUI again and check:
Node appears as Gemma4PromptGen
No errors in console
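Before restarting ComfyUI, you can confirm the llama-server from step 5 is actually reachable: llama.cpp's server exposes a GET /health endpoint. A minimal check, assuming the default address from step 5:

```python
import urllib.request
import urllib.error

def server_healthy(base_url="http://127.0.0.1:8080", timeout=5):
    """Return True if llama-server answers its /health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: server not up yet.
        return False
```

If this returns False, fix the server first; the node cannot generate anything without it.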
7. Load Workflow
Use your existing workflow or connect:
Instruction input
Optional environment / subject
Output prompt β LTX pipeline
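For context on what happens behind the node: the exact request Gemma4PromptGen sends is not documented here, but llama-server provides an OpenAI-compatible /v1/chat/completions endpoint, so a hand-rolled prompt request looks roughly like this (the system message, temperature, and token limit are illustrative assumptions, not the node's actual internals):

```python
import json
import urllib.request

def build_request(instruction, base_url="http://127.0.0.1:8080"):
    """Build an OpenAI-style chat completion POST request for llama-server."""
    payload = {
        "messages": [
            # Hypothetical system prompt; the node's real one is not published.
            {"role": "system", "content": "You are a cinematic prompt director."},
            {"role": "user", "content": instruction},
        ],
        "temperature": 0.8,
        "max_tokens": 512,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
```

Sending this request with `urllib.request.urlopen` returns a JSON body whose `choices[0].message.content` holds the generated cinematic prompt.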
Recommended Settings
Cinematic Clip
Mode: AUTO
Style: Structured
Auto Director: Smart Auto
Netflix Trailer
Style: Trailer
Genre: Psychological Horror (example)
Intensity: Balanced or High Impact
Sound: Auto by Genre
Full Automatic Generation
Auto Director: Full Auto
Subject: Off
Environment: LLM decides
Final Notes
Higher token limits = longer, richer prompts
Structured mode is best for LTX
Trailer mode is ideal for storytelling
Result
With this node, you move from basic prompts to AI-directed cinematic storytelling.
If you like it, drop a like and share your generations; I'd love to see what you create!
Comments
AttributeError: 'Gemma4PromptGen' object has no attribute '_vision_active'.
Won't start.
For Vision, you will need the mmproj file for the model;
it is best to download "gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf" and then start the llama-server using the following command
llama-server.exe -m <path to your gemma4 model file> --mmproj <path to your mmproj.gguf>
Here you can find the model files:
https://huggingface.co/nohurry/gemma-4-26B-A4B-it-heretic-GUFF/tree/main
@magine667 I was having the same issue as well. Even when I had all the necessary models, it still gave me that error.
I may not be understanding step 5 fully, as I do not know how to launch the llama server or input that string of text. Do I open my CMD? llama.exe? In ComfyUI?
@lanceshocker : The best option is to start the llama-server in cmd.
I use this command in cmd to start it:
llama-server.exe -m models/gemma-4-26b-a4b-it-heretic.q4_k_m.gguf --mmproj /models/gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf --host 127.0.0.1 --port 8033
In ComfyUI I must then change the port from 8080 to 8033, but this is not a real issue.
So if you use the llama-server settings from ComfyUI, you can leave out the host and port parameters.
AttributeError: 'Gemma4PromptGen' object has no attribute '_vision_active'
@magine667 The llama server is installed and running in a separate tab on port 8080 and is working properly. The error is still present.
P.S. I understand why this error occurs. The Models folder was moved out of the LLama folder. Place the Models folder back in the LLama folder
I had this issue and found it was caused by ghost llama-servers running in the background; I just closed them using Task Manager.
got prompt
[Gemma4PromptGen] generation_mode=IMG2VIDEO → t2v_mode=False
[Gemma4PromptGen] ⚠️ ltx_style_mode 'Off' legacy/invalid → 'Structured'
[Gemma4PromptGen] ⚠️ auto_director_mode 'Auto by Style' legacy → 'Off'
[Gemma4PromptGen] Vision enabled with mmproj: C:\models\gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf
[Gemma4PromptGen] llama-server starting (vision enabled via gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf), waiting for health check...
[Gemma4PromptGen] ✅ llama-server started (17s), vision enabled via gemma-4-26B-A4B-it-heretic-mmproj.bf16.gguf
[Gemma4PromptGen] Image sent as base64 vision input
============================================================
GEMMA4 PROMPT GEN – LTX 2.3 – video, cinematic arc + audio
AUTO (generated + sending now):
============================================================
============================================================
[Gemma4PromptGen] llama-server process released.
Prompt executed in 33.93 seconds
This doesn't actually work; it's largely a scam. The only good thing it does is give you a local interface via a web browser to generate the prompts. Actually getting it to work inside ComfyUI will never happen: it uses an incredibly convoluted method of requiring the URL link to the local interface AND another text encoder, which makes no sense since it's pulling everything from the external server to begin with, and when you do add that text encoder it gives an error rather than creating a prompt.
Seems to work perfectly fine offline; works for me.
Learning how to integrate novel nodes can be a challenge. Took me about an hour to get this one running, and it does everything the creator claims. Don't give up. You need to run an instance of llama-server locally with an abliterated Gemma 4; an E4B Q4 gguf and aligned mmproj works fine. I like integrated nodes as well, but this does work fine.
Good work! I got v1 and v2 to work; for me v2 is not generating much dialogue at all, while v1 generates plenty.
Took a bit to figure out, but it's working. Recommend better setup instructions for more adoption. Not a big fan of the distilled workflow; it seems to always fuzz out. Added it to the base LTX 2.3 i2v ComfyUI workflow and it's creative and interesting.