CivArchive
    Kobold LLM Prompter - v1.4
    NSFW

    🎨 KoboldCpp Prompt Engine for ComfyUI

    Transform simple ideas into production-quality prompts. This node integrates local LLMs via KoboldCpp to expand basic input into descriptive prompts, ranging from Danbooru tags to technical staging.

    >> to the Installation Guide <<

    Version 1.4 – The "Precision & Control" Update

    With the release of Version 1.4, you now have full creative control over the engine's behavior: you can write your own Custom System Prompt and manually extend the Output Filter to suit your needs.

    🛠️ New Features in v1.4

    • User Custom Mode:

      • A new operational mode added to the mode selection dropdown.

      • This mode acts as a master override for the built-in logic presets (like SDXL or Natural Sentence).

    • Dynamic System Messaging (sys_msg):

      • A dedicated optional input field that allows you to feed your own system instructions directly into the LLM.

      • When the "User Custom" mode is active and this field is populated, the engine uses your specific text as the primary instruction set for the generation.

    • Multi-Phrase Filter (filter_plus):

      • An advanced filtering input designed to handle stubborn AI artifacts.

      • It supports comma-separated values, allowing you to strip multiple specific words, phrases, or conversational "noise" (temporarily extending the internal filter) in a single pass.

      • The engine splits the string at each comma and adds every entry to the regex exclusion list dynamically (see the sketch after this list).

    • Enhanced Debug Feedback:

      • The console logging system has been upgraded for better transparency.

      • It now explicitly identifies when a Custom System Message is overriding the presets.

      • The debug log displays the active Filter+ keywords (exclusion list), so you can verify exactly which terms are being scrubbed from the final prompt.
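
    As a rough illustration of how the v1.4 override and filter logic fit together, here is a minimal Python sketch. All names (PRESETS, build_system_prompt, build_exclusion_patterns) are hypothetical stand-ins, not the node's actual internals:

        import re

        # Hypothetical presets; "User Custom" defers to sys_msg (assumption).
        PRESETS = {
            "SDXL (Tags)": "Convert the idea into comma-separated Danbooru tags.",
            "User Custom": None,
        }

        BASE_FILTER = [r"^\s*(sure|of course)[,!.]*\s*", r"here is your prompt[:.]?\s*"]

        def build_system_prompt(mode: str, sys_msg: str) -> str:
            # "User Custom" plus a non-empty sys_msg overrides every preset.
            if mode == "User Custom" and sys_msg.strip():
                return sys_msg.strip()
            return PRESETS.get(mode) or PRESETS["SDXL (Tags)"]

        def build_exclusion_patterns(filter_plus: str) -> list[str]:
            # Split filter_plus at commas and extend the internal filter
            # for this run only.
            extra = [re.escape(p.strip()) for p in filter_plus.split(",") if p.strip()]
            return BASE_FILTER + extra

        def scrub(text: str, patterns: list[str]) -> str:
            for pat in patterns:
                text = re.sub(pat, "", text, flags=re.IGNORECASE)
            return text.strip()

        print(scrub("Sure! Here is your prompt: neon alley, rain",
                    build_exclusion_patterns("masterpiece, best quality")))
        # -> "neon alley, rain"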


    Update Notes v1.3: The "Smart Logic" Overhaul

    This update significantly improves how wildcards and brackets are handled, bringing more control and variety to your prompting workflow.

    What's New:

    • Stable Seed Logic: All wildcards and {a|b} brackets are now resolved via MD5 hashing. This ensures that your prompts remain 100% consistent when using a fixed seed, while still being fully randomized on "randomize" (see the sketch after this list).

    • Smart Auto-Pooling (Fixed): The engine now remembers used terms from your wildcard files. It will cycle through the entire list before repeating a word, ensuring maximum variety in long batches.

    • Recursive Wildcard Search: Wildcard files are now found automatically even within subfolders.

    • Enhanced Repetition Penalty: Improved the integration of the repetition_penalty (RepPen) for KoboldCpp. This prevents the LLM from getting stuck in loops or reusing the same descriptive adjectives too often, resulting in much more diverse and creative prompt expansions.

    • Visual Debugging: New color-coded console logs (including [POOL RESET] alerts) help you track exactly how your input is being resolved.
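
    For the curious, a minimal sketch of how seed-stable resolution can work. The function names and the exact salt scheme are assumptions for illustration, not the node's actual code:

        import hashlib
        import re

        def stable_pick(options: list[str], seed: int, salt: str) -> str:
            # Hash seed + a per-occurrence salt: a fixed seed always picks
            # the same option, a new seed reshuffles everything.
            digest = hashlib.md5(f"{seed}:{salt}".encode()).hexdigest()
            return options[int(digest, 16) % len(options)]

        def resolve_brackets(text: str, seed: int) -> str:
            # Resolve each {a|b|c} bracket; the match position serves as
            # the salt so identical brackets can still resolve differently.
            def repl(m: re.Match) -> str:
                options = m.group(1).split("|")
                return stable_pick(options, seed, salt=f"{m.start()}:{m.group(1)}")
            return re.sub(r"\{([^{}]+)\}", repl, text)

        print(resolve_brackets("{red|blue} {cat|dog}", seed=42))
        # Same seed -> identical output on every run.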

    !! Bug Fix Update: Version 1.2.1

    I have refined the node's logic to fix the following issues:

    1. Targeted Token Allocation: The +250 token bonus is now technically isolated to the "Thinking" mode only. All other modes (SDXL Tags, Natural Sentence, Z-Engineer) strictly adhere to the limit set in the UI.

    2. Correct Category Recognition: A conditional logic error was corrected. The script now reliably identifies the selected mode and sends the appropriate system instructions to KoboldCpp without cross-mode interference.


    🚀 Update: Version 1.2 – The "Smart Fusion" Update

    This version merges my advanced Smart Wildcard logic with the high-performance Kobold LLM Prompter engine.

    What’s New?

    • 🧠 Optimized "Thinking" Mode: Specifically designed for Reasoning Models. The internal filter has been significantly improved to reliably strip <think> tags and meta-chatter, delivering a much more robust and cleaner visual prompt.

    • ✍️ Direct Wildcard Support: You can now use wildcards (e.g., __subject__) directly inside the node's text input. The engine resolves them locally before sending the final context to the LLM.

    • ♻️ Auto-Pooling & No-Repeat: Your wildcards are now handled by a smart pooling system. A file will be completely exhausted (all lines used once) before any term is repeated (see the sketch after this list).

    • 📊 Live Pool-Analytics: The console tracks your wildcard "health" in real time: the log shows exactly how many items are left in a file before it resets (e.g., Pool: 12/50).
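
    A minimal sketch of the no-repeat pooling idea (the class and method names are illustrative, not the node's actual code):

        import random

        class WildcardPool:
            # Every line of a wildcard file is drawn once before any line
            # can repeat; the counter mirrors the "Pool: 12/50" log output.

            def __init__(self, lines: list[str]):
                self.lines = lines
                self.pool: list[str] = []

            def draw(self, rng: random.Random) -> str:
                if not self.pool:  # exhausted -> refill and reshuffle
                    self.pool = self.lines[:]
                    rng.shuffle(self.pool)
                term = self.pool.pop()
                print(f"Pool: {len(self.pool)}/{len(self.lines)}")
                return term

        rng = random.Random(42)
        pool = WildcardPool(["castle", "forest", "harbor"])
        print([pool.draw(rng) for _ in range(6)])  # no repeats within a cycle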

    UPDATE: Version 1.1 - The "Thinking" Update 🚀

    Optimized for Reasoning Models (DeepSeek-R1, Qwen-Thinking, o1).

    Key Features

    • "Thinking" Mode: Enables Chain-of-Thought (CoT). The LLM plans the composition internally before generating the prompt.

    • Automatic Filtering: Removes all internal reasoning (<think>...</think>) and meta-text (e.g., "Here is your prompt") so only the clean visual prompt reaches ComfyUI (see the sketch after this list).

    • Token Buffer: Automatically adds +250 tokens in Thinking mode to prevent prompts from being cut off by lengthy reasoning.

    • Source Cleaner: Strips out dataset artifacts like ``.
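
    As a rough sketch of what this filtering does (assumed patterns, not the node's exact regexes):

        import re

        THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL | re.IGNORECASE)
        META_RE = re.compile(r"^\s*here is (your|the) prompt[:.]?\s*", re.IGNORECASE)

        def clean_reasoning_output(raw: str) -> str:
            text = THINK_RE.sub("", raw)   # drop the chain-of-thought block
            text = META_RE.sub("", text)   # drop leading meta-chatter
            return text.strip()

        raw = "<think>plan the shot...</think>Here is your prompt: misty forest at dawn"
        print(clean_reasoning_output(raw))  # -> "misty forest at dawn"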

    How to Use

    1. Select "Thinking" in the Mode dropdown.

    2. Use a reasoning-capable model in KoboldCpp.

    3. Note: start_helper is disabled in this mode to prioritize the <think> tag.

    4. Enable "debug" to view the LLM's internal logic in the console.

    Tip: If the output still truncates, increase max_tokens. Reasoning consumes a significant portion of the context window.

    Workflow Components (Included in v1.1)

    This workflow automates prompt engineering by connecting your local LLM to ComfyUI. It requires the following custom nodes:

    1. KoboldLLMPrompter: The core engine for prompt expansion.

    2. Wildcard Saver: Automatically archives every generated prompt.

    3. LazySmartWildcards: Manages dynamic inputs and wildcard processing.

    🚀 Quick Start

        Installation: Save LLM_Wildcard.py in ComfyUI/custom_nodes/.

        Backend: Ensure KoboldCpp is running a compatible model (Llama, Mistral, or Qwen).

        Connection: Set the URL to your local API (default: http://127.0.0.1:5001).
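
    To verify the backend is reachable before wiring up the node, here is a quick standalone test. The payload fields follow the standard KoboldCpp /api/v1/generate schema; treat the exact values as placeholders:

        import json
        import urllib.request

        URL = "http://127.0.0.1:5001/api/v1/generate"

        payload = {
            "prompt": "Expand into an image prompt: a knight in a neon city\n",
            "max_length": 200,   # corresponds to max_tokens in the node UI
            "temperature": 0.7,
            "rep_pen": 1.1,      # repetition penalty (see the v1.3 notes)
        }

        req = urllib.request.Request(
            URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)

        print(result["results"][0]["text"])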

    🧠 Generation Modes:

    SDXL (Tags)

    • Best For: SDXL / Pony-based models.

    • Output Style: Converts input into comma-separated Danbooru-style tags.

    Natural Sentence

    • Best For: Flux.1, SD3, or Midjourney-style prompting.

    • Output Style: Creates a cohesive, cinematic paragraph that naturally fuses subject, style, environment, and lighting.

    Z-Engineer

    • Best For: Qwen3-Z-Engineer models* or similar high-parameter models.

    • Output Style: A production-focused, ~200–250 word paragraph with a deep focus on visual staging, lighting physics, and material textures.

    🛠️ Key Functions:

        Style Selection:

    • Choose from 14 aesthetics (e.g., Cyberpunk, DSLR, Anime) or use Random to cycle styles based on the seed.

        Start Helper:

    • Force-starts the AI with specific phrases to bypass conversational "chatter" and ensure consistency (see the sketch after this list).

        Filter:

    • Internal logic that automatically strips AI artifacts like "Sure! Here is your prompt" and cleans up unfinished sentences.
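
    A toy illustration of the start-helper idea (the prompt layout here is an assumption; the node's actual template may differ):

        def with_start_helper(user_idea: str, start_phrase: str) -> str:
            # Append the desired opening words to the prompt so the LLM
            # has to continue from them instead of chatting first.
            return (
                "Describe the following idea as an image prompt.\n"
                f"Idea: {user_idea}\n"
                f"Prompt: {start_phrase}"
            )

        def finalize(start_phrase: str, completion: str) -> str:
            # Re-attach the forced opening to the model's continuation.
            return (start_phrase + completion).strip()

        prompt = with_start_helper("a lonely lighthouse", "A cinematic wide shot of")
        # send `prompt` to KoboldCpp, then:
        # final = finalize("A cinematic wide shot of", llm_completion)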

    ⚙️ General Settings:

    Temperature Advice:

    • Use 0.2 – 0.5 for literal, prompt-loyal results.

    • Use 0.8 – 1.2 for creative variety and unexpected descriptions.

    Max Tokens Advice:

    • Low (50–150): Perfect for SDXL (Tags) to keep them punchy.

    • High (150+): Necessary for Natural Sentence.

    • High+ (250+): Required for Z-Engineer.

    *I found this model and its prompting template so effective that I decided to integrate them directly.


    'VibeCoded': I'll try my best to help with issues, or you can always ask Gemini or ChatGPT.


    Other
    ZImageTurbo

    Details

    Downloads
    122
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/23/2026
    Updated
    3/29/2026
    Deleted
    -

    Files

    koboldLLMPrompter_v14.zip

    Mirrors

    CivitAI (1 mirror)