This will take your prompt, enhance it, and optimize it for the T5-XXL and CLIP-L text encoders that Flux uses.
It will also automatically extract the trigger word from a LoRA and include it in the split prompts.
Why did I make this?
People keep asking if they should split their Flux prompts between encoders, and they ask HOW. Now you have a VERY easy way!
The lora auto-trigger is a fun bonus; I really find it helpful.
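The idea behind the workflow can be sketched in a few lines of Python. This is a simplified illustration, not the actual node code: the function name, the keyword-vs-sentence heuristic, and the trigger word are all hypothetical. The real workflow uses an LLM to write the two prompts; this sketch just shows the shape of the output — a short tag-style prompt for CLIP-L, a natural-language prompt for T5-XXL, and the LoRA trigger word prepended to both.

```python
# Illustrative sketch only: split one source prompt into two
# encoder-specific prompts and prepend a LoRA trigger word.
def split_for_flux(prompt: str, trigger_word: str = "") -> dict:
    """Return a tag-style prompt for CLIP-L and a sentence for T5-XXL."""
    # CLIP-L tends to respond well to short comma-separated keywords.
    keywords = ", ".join(
        part.strip() for part in prompt.replace(".", ",").split(",") if part.strip()
    )
    # T5-XXL handles full natural-language sentences.
    sentence = prompt.strip()
    if trigger_word:
        keywords = f"{trigger_word}, {keywords}"
        sentence = f"{trigger_word}. {sentence}"
    return {"clip_l": keywords, "t5xxl": sentence}

result = split_for_flux("a red fox in snow, cinematic lighting", "myLoraStyle")
print(result["clip_l"])  # myLoraStyle, a red fox in snow, cinematic lighting
print(result["t5xxl"])   # myLoraStyle. a red fox in snow, cinematic lighting
```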
Have fun creating!
Description
Helpful tips and links inside workflow.
Comments (11)
dammm!
thanks for sharing :)
really nice and very detailed, yet very little interaction required
Hello, why this error:

Error occurred when executing concat:

ConcatenateFields.concatenate_text() got an unexpected keyword argument 'string_b'

  File "/data/sdweb/ComfyUI/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/sdweb/ComfyUI/execution.py", line 192, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/data/sdweb/ComfyUI/execution.py", line 169, in map_node_over_list
    process_inputs(input_dict, i)
  File "/data/sdweb/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
Hi, I just started getting this error with the Mikey Concatenate node as well. It will likely be fixed by the dev soon. If not it can be replaced by the Tiny Terra Text Concat node or other text concat nodes.
If the error persists I will try to update this workflow, but it only seems to have started today so let's give the dev a little time to address.
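This class of error is easy to reproduce outside ComfyUI. ComfyUI calls each node's function generically with `**inputs` (the `getattr(obj, func)(**inputs)` line in the traceback), so if a node update renames a parameter, a workflow saved with the old input name passes a stale keyword and Python raises exactly this `TypeError`. The class and parameter names below are illustrative stand-ins, not the real node's code:

```python
# Minimal sketch of why a renamed node parameter breaks a saved workflow.
class ConcatenateFields:
    # Suppose a node update renamed the second parameter string_b -> text_b:
    def concatenate_text(self, text_a, text_b):
        return text_a + text_b

node = ConcatenateFields()
# A workflow saved before the update still supplies the old key:
inputs = {"text_a": "hello ", "string_b": "world"}

try:
    # Mirrors ComfyUI's generic dispatch: getattr(obj, func)(**inputs)
    getattr(node, "concatenate_text")(**inputs)
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'string_b'
```

Swapping in a concat node whose input names match (or recreating the node so the workflow picks up the new names) avoids the mismatch.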
@EnragedAntelope @dzy1128 It might be fixed now... I was getting the error earlier this evening, and then it suddenly stopped. I thought it was because I had another copy of ComfyUI running, but maybe the node got updated to fix the error.
@BlarpWibble Hi, just FYI I added a new version that should reduce the chances of this sort of error occurring again. Thank you for letting me know!
Wow... how has this got so few thumbs up? It is amazing. I am loving it, using it through Groq, and it does an amazing job. Ingenious method of using AI to create a CLIP-L prompt and a T5-XXL prompt to enable the best image generations... amazing!
I may see if I can hack it to make a multi-lora version... if I do, it is only because you have created the foundations of such an amazing workflow.
You have two thumbs up from me, and I am off to generate some images with a LOT more thumbs so you can have more thumbs ups!! This deserves to be recognised.
Thank you so much! I am happy you're enjoying it. I may update a bit further as I've refined the instructions since posting this, and because Plush nodes have been updated so I think my defaults may throw an error compared to the layout of the new nodes.
I do have this working locally with multi-LoRAs :). Depending on the checkpoint it can blow up your VRAM though, so be careful!
@EnragedAntelope That is something I am a little limited on... although, having two GPUs (one 12 GB and one 8 GB), I have tried using the dev multi-GPU facility and it (kind of) works OK... so maybe there will be some tweaking going on. But yes, thank you for doing this, and I would be interested in the multi-LoRA version anyway, as a matter of interest.
@BlarpWibble Hi, check out the new version with an infinite lora loader added :)
@EnragedAntelope oh wow! ... does it come with infinite VRAM? :o ... I will give it a test out... and report back