The Preset Text Node from ComfyUI-Custom-Scripts was broken by the recent ComfyUI update.
Temporary workaround: launch ComfyUI with the following argument:
--front-end-version Comfy-Org/[email protected]
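Assuming a standard install started via main.py (adjust the interpreter and path for the Windows portable build's embedded Python), the full launch command would look like this:

```shell
# Pin the ComfyUI frontend to the pre-update version so the
# Preset Text node keeps working. Path/interpreter are assumptions
# about your install; adjust as needed.
python main.py --front-end-version Comfy-Org/[email protected]
```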
* Check out my latest workflow, Simplified ComfyUI Workflow Pack!
It includes the new Xlabs Sampler workflows and offers the same functionality as before, plus support for negative prompts and LoRA loaders for Flux. It's more up-to-date and packed with powerful features!
Overview
This workflow is optimized for the Flux.1 model and uses Groq's API (free as of now) with the llama3-70b-8192 model. It supports both the Flux.1-Dev fp16 and fp8 versions (other versions are untested).
Features
Built-in Upscaler: Enhance your images with the Ultimate SD Upscaler node for high-resolution results.
LLM Support: Generate or enhance text and captions with the LLM node. Supports both the Groq API (free) and OpenAI (paid) via Tara-LLM-Integration.
Predefined Prompt Guides: Four prompt guidance options available: General, Person Photography, Anime, Digital Art. (You can add your own.)
Beginner Friendly: Instructions, model download links, and parameters are included as notes in the workflow.
⚠️ Warning ⚠️
The images you generate embed not only your workflow but your API key as well. This will expose your Groq/OpenAI API key, so make sure you strip the metadata before posting an image anywhere. You can use this exif remover for free, or any tool of your choice.
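ComfyUI stores the workflow (and therefore anything typed into its nodes, including an API key) in the PNG's text chunks rather than classic EXIF. As an illustration of what an exif/metadata remover does, here is a minimal stdlib-only sketch that strips all text chunks from a PNG; run it on a copy of your image before sharing:

```python
import struct
import zlib

# PNG text chunk types that can carry the ComfyUI workflow (and API key).
TEXT_CHUNKS = {b"tEXt", b"zTXt", b"iTXt"}
PNG_SIG = b"\x89PNG\r\n\x1a\n"

def strip_png_text(data: bytes) -> bytes:
    """Return a copy of a PNG byte string with all text chunks removed."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    out = [PNG_SIG]
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        # A chunk is: 4-byte length + 4-byte type + payload + 4-byte CRC.
        chunk = data[pos:pos + 12 + length]
        if ctype not in TEXT_CHUNKS:
            out.append(chunk)
        pos += 12 + length
    return b"".join(out)
```

Note that this also removes the embedded workflow itself, which is exactly the point before posting publicly; keep an untouched copy if you want to reload the workflow later.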
Installation
ComfyUI Manager
Install ComfyUI Manager from this link.
If you encounter red nodes, open ComfyUI Manager, click “Install Missing Custom Nodes,” and restart ComfyUI.
Models
Flux.1-dev-fp8: Download here
Place in: ComfyUI/ComfyUI/models/unet
Clip1: Download here
Place in: ComfyUI/ComfyUI/models/clip
Clip2: Download here
Place in: ComfyUI/ComfyUI/models/clip
VAE: Download here
Place in: ComfyUI/ComfyUI/models/vae
Upscaler Model (ESRGAN_4x): Install via ComfyUI Manager
Go to “Model Manager,” search for “ESRGAN_4x,” and install.
API Key
To get a free Groq API key:
Visit the Groq API Console.
Generate your API key.
Add the key to the api_key field of the 'Tara LLM Config Node 🌐'.
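For context, Groq exposes an OpenAI-compatible chat endpoint, and the Tara config node essentially just needs the base URL, the key, and a model name. This sketch builds the kind of request body that gets sent under the hood; the system prompt and user prompt are made-up examples, not the workflow's actual guidance text:

```python
import json

# Groq's OpenAI-compatible endpoint (keys start with "gsk_").
GROQ_BASE_URL = "https://api.groq.com/openai/v1"
API_KEY = "gsk_..."  # paste your real key into the Tara LLM Config Node instead

def build_chat_request(user_prompt: str, model: str = "llama3-70b-8192") -> dict:
    """Build the JSON body for a POST to {base_url}/chat/completions."""
    return {
        "model": model,
        "messages": [
            # Illustrative guidance; the workflow's preset texts play this role.
            {"role": "system", "content": "Enhance this Stable Diffusion prompt."},
            {"role": "user", "content": user_prompt},
        ],
    }

body = build_chat_request("a cat in a spacesuit")
payload = json.dumps(body)
```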
Loading the Workflow
Restart ComfyUI.
Clear the workflow.
Load the Flux_Ultimate_Workflow_With_Upscaler-cthl.json.
Manual vs Enhanced LLM Prompts
Manual Prompt: Connect the Manual Prompt node to the CLIP Text Encode (Prompt)'s 'text' input. Bypass the Tara Advanced LLM Node.
Enhanced LLM Prompts: Connect the Tara Advanced LLM Node’s positive output to the CLIP Text Encode (Prompt)'s 'text' input and enable the bypassed Tara Advanced LLM Node.
Image Dimensions
Set your image width and height in the 'Load Model' group under Image width and Image height. If using the upscaler, ensure the tile_width and tile_height match these dimensions. Otherwise, you can bypass the upscaler node by right-clicking and selecting “bypass.”
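The tile-size rule above can be expressed as a tiny sanity check. This helper is purely illustrative (it is not part of the workflow or any node):

```python
def tiles_match(width: int, height: int, tile_width: int, tile_height: int) -> bool:
    """True when the upscaler's tile size equals the generation size,
    which is what this workflow expects. Hypothetical helper for illustration."""
    return (tile_width, tile_height) == (width, height)
```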
Prompt Guide Selection (Presets)
I've realized that the presets are also stored in the browser's local storage, which might cause issues when loading the workflow on different computers or browsers.
You can find the predefined preset instructions in the zip file, labeled as Prompt_Guide_Presets.txt, or you can get it here.
Simply click the 'Manage' button in the 'Prompt Guide Selection' node to add them.
This is my first public workflow. Give it a ❤️ if you like it :)
Comments (35)
Thanks for sharing the workflow!
Predefined Prompt Guides: Four prompt guidance options available: General, Person Photography, Anime, Digital Art. (You can add your own.)
I don't see those options selectable. Shows only 'default.negative' in the selection.
Please clarify on how to fix.
Have you installed the ComfyUI-Custom-Scripts node from Manager? I used the PresetText|pysssss node, which is part of that pack, to create the presets. Once it's installed, you can click 'Manage' in the 'Prompt Guide Selection' node and edit them or add your own.
@cthl Thanks for quick reply! Yes the custom node is installed and the default value it comes with is 230173939-08459efc-785b-46da-93d1-b02f0300c6f4.png (534×211) (user-images.githubusercontent.com)
Didn't realize I need to edit them before running. Thanks!
@dhinakarcom877 You're right! I've realized that the presets are also stored in the browser's local storage, which might cause issues when loading the workflow on different computers or browsers.
You can find the predefined preset instructions in the zip file (updated recently), labeled as Prompt_Guide_Presets.txt, or you can get it from here if you wish to add my guidance, otherwise you can rock with your own :)
Please make workflow for Prompts enhanced sdxl
Hey, thanks for the tip! I'll definitely put together a workflow for SDXL, and I'll also throw in one for SD1.5 soon. Appreciate the suggestion!
Doesn't work. I press Queue and nothing happens... no process starts.
That's weird! Got any errors in Comfy? Some popups with red text maybe? Or check the console/terminal and see if there's any traceback errors there and let me know.
I'm really loving this workflow. I did add nodes to preview images before upscale and a save image with metadata and will be attempting to add a lora node.
I have an OpenAI account and key, but have no idea how to link to it. If anyone can point me in right the direction it would be greatly appreciated, since trying to find info on their websites is a bit of a nightmare.
Anyway great workflow, I'm using it all the time now.
Thanks! I also made a before/after preview for the upscaler for myself :)
The easiest way is to simply change the base_url in the Tara LLM Config Node to OpenAI's API endpoint: https://api.openai.com/v1
and enter your API key.
For the model names, check Models - OpenAI API
Otherwise you can install OpenAI and add the node according to Tara LLM's documentation:
ComfyUI-Tara-LLM-Integration/README.md at main · ronniebasak/ComfyUI-Tara-LLM-Integration (github.com)
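Since both providers expose OpenAI-compatible endpoints, switching is just a matter of swapping three fields in the Tara LLM Config Node. These dicts are illustrative only (the real values live in the node's UI, and the OpenAI model name is an example, not a recommendation):

```python
# Illustrative provider configs; keys and model names are placeholders.
groq_config = {
    "base_url": "https://api.groq.com/openai/v1",
    "api_key": "gsk_...",        # your Groq key
    "model": "llama3-70b-8192",
}

openai_config = {
    "base_url": "https://api.openai.com/v1",
    "api_key": "sk-...",         # your OpenAI key
    "model": "gpt-4o",           # any chat model from OpenAI's Models page
}
```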
@cthl Thanks it's working now ❤. Although a lot of the time the Tara node just returns the original prompt.
@Annamae Make sure you have guidance for the Tara Advanced LLM Node (Instructions) and you can also experiment with the parameters in Tara LLM Config Node 🌐. Unfortunately I don't have access to OpenAI, so I'm not sure what would be the optimal settings there. You can check OpenAI's API Reference documentation if you have time.
Make sure you remove the exif metadata info from your generated images, otherwise it can expose your API key. I have updated the description on the model page :)
@cthl Yeah I hadn't realized about the metadata, thanks. I've now changed my keys and deleted the old ones as a precaution.
I don't know if you realized but Tara works with local LLM's. I tried it not expecting it to work, but amazingly it does. I'm currently running it with koboldcpp and will try it with TabbyAPI later.
@Annamae Yeah, TARA is awesome, I’m really into that project! They’ve also got an 'API Key Saver' and 'API Key Loader' node, which solves the issue of exposing your key in your images/workflows.
I haven’t tried it with a local LLM yet because I’m always battling for free space, haha!
Really enjoyed this workflow it's become my new favourite. Any plans to make a fluxD one with Dynamic thresholding?
Thank you! Yes, I'm already working on it :) It'll be up soon.
Is there a way to select a different flux model?
You can use the original dev model as well. I'm not certain about Schnell. However, Flux v2 (NF4) won't work since it requires a different loader node. It's currently only available on the dev channel, but I'll share my workflow once it's officially released.
Any method of storing the prompt guidance into a ComfyUI folder that the node would pick up as options? I find that every time I reboot my system, I have to enter the guidance again manually. (Not the end of the world, but saving them to the dropdown across workflows would be super convenient.) Just seeing if I am missing an obvious answer.
Unfortunately, it's not possible with that node at the moment since the presets are stored in the browser's localStorage. You can list them by opening the console (F12) and typing this: localStorage["pysssss.PresetText.Presets"]
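To illustrate the console command above: the presets live under that one localStorage key as a JSON string. The exact JSON layout (an array of {name, value} objects) is my assumption about the pysssss node's storage format, so treat this as a sketch:

```javascript
// List preset names from a localStorage-like object.
// Key name comes from the console command above; the [{name, value}, ...]
// layout is an assumption about the PresetText|pysssss node.
function listPresets(storage) {
  const raw = storage["pysssss.PresetText.Presets"];
  if (!raw) return [];
  return JSON.parse(raw).map((p) => p.name);
}
```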
However, I'm working on an update for this workflow and exploring different alternatives, including writing my own custom node.
EXCELLENT! Is it possible to add lora?
Prompt Guide Selection Field is red, ComfyUI-Custom-Scripts are installed and running
Once the missing nodes are installed, try reloading the page in your browser after Comfy has been restarted.
@cthl no missing nodes, checked it twice and restarted comfyui, cleared browser cache, red box still exists
@DarkNeighborSi That's strange. If ComfyUI isn't showing an error popup when you load the workflow related to the presets node, there might be something else going on.
It could be a conflict with other nodes you have installed, but I'm not entirely sure.
If possible, try my latest workflow, it's similar in functionality, has the same options but more up-to-date. Also make sure your ComfyUI is up to date.
I have the exact same problem. Your other workflow makes no difference.
@Psykillergist @cthl After a fresh comfyui installation with updates and installing all missing modules, same problem in both workflows
@DarkNeighborSi My error message, when I try to run it is:
Cannot execute because a node is missing the class_type property.: Node ID '#108'
@Psykillergist @DarkNeighborSi I've been digging around a bit, and it turns out that the recent ComfyUI update broke that custom node a few days ago. Unfortunately, the only solution for now is to revert ComfyUI to the previous version, or wait for an update for the ComfyUI-Custom-Scripts node.
https://github.com/comfyanonymous/ComfyUI/commit/c90459eba0daf15796de2fd7b9e7d8d42f7c7e71
I’m also exploring alternatives in the meantime.
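For anyone on a git install, reverting to the version before the breaking change would look roughly like this (the hash is the breaking commit linked above; checking out its parent assumes a standard git clone, not the portable build's updater):

```shell
# Inside your ComfyUI folder: pin to the parent of the breaking commit.
git checkout c90459eba0daf15796de2fd7b9e7d8d42f7c7e71~1
# Later, to return to the latest version:
# git checkout master && git pull
```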
@cthl Thanks for doing the digging! If an update will come out, I'll check your workflow out again. In the meantime I will stick to the one I'm using now.
@cthl I'm glad that you posted the workaround with the additional starting parameters, because after the updates from today/yesterday even my other workflow stopped working. With your tip it started working again and now I tried your workflow and it's working fine as well. A big thanks again!
It's missing a LoRA loader.
Check this one, it's the same but more up to date and uses the Xlabs sampler:
generates a black / empty image