This will take your prompt, enhance it, and optimize it for the T5-XXL and CLIP-L text encoders that Flux uses.
It will also automatically extract the trigger word from a LoRA and include it in the separated prompts.
Why did I make this?
People keep asking if they should split their Flux prompts between encoders, and they ask HOW. Now you have a VERY easy way!
The lora auto-trigger is a fun bonus; I really find it helpful.
Have fun creating!
Description
New version based on feedback, as well as my own improvements!
Removed out-of-support Comfyroll nodes, added Flux-specific aspect ratios/resolution nodes.
Separated out the CLIP loader, since most models now use one, and it gives everyone finer control.
Updated Plush node selections based on changes to the nodes. Should prevent errors and limit "server could not process" messages.
Updated LLM instructions to reduce the chance of the LLMs re-using my example prompt as part of the final prompt output.
Added additional lora loader that allows for as many loras as you'd like. Have fun, go nuts... just be mindful of your poor GPU!
Added clarifying usage notes.
FAQ
Comments
Any chance of making a gguf version? Would probably remove the need for an external LLM entirely
Hi, you can repoint Plush nodes to anything you can run locally. So you could either:
A) Run LM Studio or Oobabooga etc. on your own system and point Plush to it
or
B) Replace Plush nodes with VLM Nodes and make it load any LLM you want entirely inside of comfy
There are probably other solutions as well. But to be clear: this is only a .JSON workflow; I am not providing the LLMs, so I can't provide any GGUF.
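Either way, the workflow just needs an OpenAI-compatible chat endpoint to talk to, which is why repointing works. Here's a minimal sketch of what that call looks like against a local server; the URL assumes LM Studio's default port, and the model name and system prompt are placeholders to adjust for your setup:

```python
import json
import urllib.request


def build_payload(prompt, model="local-model"):
    """Build an OpenAI-style chat completion request body.

    The system prompt here is a placeholder; the real workflow uses
    its own prompt-enhancement instructions.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "Rewrite the user's prompt as a detailed "
                           "image-generation prompt.",
            },
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }


def enhance_prompt(prompt, url="http://localhost:1234/v1/chat/completions"):
    """Send the prompt to a local OpenAI-compatible server and return the reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Anything that speaks this request format (LM Studio, Oobabooga's OpenAI extension, etc.) can stand in for a hosted LLM.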
@EnragedAntelope I meant ggufs of Flux, not the LLM
@LoraMaker Ahh. You can just drop in Unet Loader (GGUF) as the model loader to use a GGUF checkpoint.
@EnragedAntelope Agreed. I find good workflows all the time that use the regular non-quantized models. I just bypass the regular checkpoint loader node, add the Unet GGUF loader node right next to it, and hook it to the same place the regular checkpoint loader was hooked to. Nearly everything I've tried works this way, by just swapping out the checkpoint loader with a Unet GGUF loader.
@EnragedAntelope Can you confirm this still works? For me it no longer puts the LLM-enhanced prompts in the (green) text windows, resulting in empty prompts and random output. I don't see errors in the ComfyUI console output, nor when connecting the Troubleshooting output of the Advanced Prompt Enhancer. In fact, the latter reports a successful API call to Groq with token count etc.
My best guess, without being able to look into it right now, is that Plush Nodes was updated over the past few days with new fields added, which caused my selections to go out of alignment within the Advanced Prompt Enhancer node. A short-term fix would be to review what you see in the node, right-click, Recreate Node, then re-enter anything pertinent (your Groq model settings, temperature, etc.).
I will try to take a look over the next few days; TBD on that.
Just updated and added a new version, thanks for the heads-up.

