TXT2IMG:
This is the workflow I'm always using. It's probably not the easiest to use, and it lacks many features, but it does the job! Use this if you want to get the same results as I do.
To add more LoRAs, simply add another LoRA loader to the existing chain.
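As a rough illustration (not a fragment of this exact workflow), here is what a chained second LoRA loader looks like in ComfyUI's API "prompt" format, written as a Python dict; the node IDs, LoRA filename, and weights are placeholders:

    # Hypothetical graph fragment: node "12" takes its model/clip from the
    # previous LoraLoader, node "11", extending the chain.
    extra_lora = {
        "12": {"class_type": "LoraLoader",
               "inputs": {"model": ["11", 0],   # MODEL output of node 11
                          "clip": ["11", 1],    # CLIP output of node 11
                          "lora_name": "myStyle.safetensors",
                          "strength_model": 0.7,
                          "strength_clip": 0.7}},
    }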
To add more ControlNets, simply add a new triplet like the one in the CNet OpenPose block, change the loaded ControlNet model, and chain the previous conditioning output into the conditioning input of your new triplet.
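The same idea for a second ControlNet, again only a sketch with placeholder node IDs and model name; in the graph editor this just means dragging the conditioning output of the previous CNet Apply into the conditioning input of the new one:

    # Hypothetical graph fragment: node "21" receives its conditioning from the
    # previous ControlNetApply (node "15" here) instead of the prompt encoder.
    extra_controlnet = {
        "20": {"class_type": "ControlNetLoader",
               "inputs": {"control_net_name": "control_v11p_sd15_canny.pth"}},
        "21": {"class_type": "ControlNetApply",
               "inputs": {"conditioning": ["15", 0],  # previous CNet Apply output
                          "control_net": ["20", 0],
                          "image": ["22", 0],         # your preprocessor's output
                          "strength": 0.8}},
    }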
Since 1.2.0, the workflow has a "Character Template" section. This section is intended to provide an easy way to store and reproduce the prompts and parameters associated with a character. Simply create a .txt file in the "input" directory of your ComfyUI installation, then write one part of the prompt per line, like this:
[HEAD];
[BODY];
[EYES];
[LORA]:[WEIGHT];
Do not forget the ';' at the end of each line. You can leave a line empty if you have no prompt for that part, but the ';' must still be present.
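For example, a complete character file could look like this (the prompt text and LoRA name are made up; the [EYES] line is left empty but keeps its ';'):

    long red hair, freckles, soft smile;
    green jacket, leather gloves;
    ;
    myCharacterLora:0.8;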
Each part of the template combines with whatever you wrote in the regular prompt section.
You can of course bypass this section to use only the regular prompt.
I added an example template to the Txt2Img models. Please download it if needed.
Since 1.0.2, the models are stored separately for easier updating. See the "Txt2Img models" version.
Don't forget to install the missing custom nodes using ComfyUI's Manager!
Changelog:
1.2.0:
Added a "Character Template" section
Added a sound player that plays when a generation is complete
Added a NipplesDetailer for nudes
1.1.0:
Connected CNet parts to relevant samplers.
1.0.3:
Fixed expression prompt not linked to the concatenated face prompt
1.0.2:
Fixed incorrect general prompt concatenation
1.0.1:
Fixed unlinked post-hf detailer prompts
----------------------------------------------------------------------------------------------
MOSAIC-OUTPAINT:
This workflow is based on an idea from u/wonderflex. You can find the source post here: https://www.reddit.com/r/StableDiffusion/comments/1aexch9/using_mosaic_tiles_to_outpaint_expand_images_3/
Basically, it outpaints based on a color palette extracted from a part of the original image. It's particularly efficient at adding a lot of background at once without losing too much context.
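As a minimal sketch of the principle (plain Python/Pillow, not the actual ComfyUI nodes; the file names, strip width, and tile size are arbitrary assumptions):

    import random
    from PIL import Image

    # Extend the canvas to the right by one image width.
    img = Image.open("source.png").convert("RGB")
    w, h = img.size

    # 1. Quantize the right-hand edge strip down to a few dominant colors.
    strip = img.crop((w - w // 8, 0, w, h))
    quantized = strip.quantize(colors=8, method=Image.Quantize.MEDIANCUT)
    flat = quantized.getpalette()[:8 * 3]
    palette = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

    # 2. Pre-fill the new region with random tiles from that palette, so the
    #    sampler denoises plausible colors into background instead of flat grey.
    tile = 32
    canvas = Image.new("RGB", (2 * w, h))
    canvas.paste(img, (0, 0))
    for x in range(w, 2 * w, tile):
        for y in range(0, h, tile):
            canvas.paste(Image.new("RGB", (tile, tile), random.choice(palette)), (x, y))
    canvas.save("mosaic_outpaint_input.png")  # feed this into an img2img/inpaint pass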
As usual, do not forget to install relevant nodes using the "Install missing nodes" function.
Comments (4)
The way you're writing the prompt is really amazing, I like it... great job 🫡
Thank you for making this workflow public! I now understand why you said "the prompt will not suffice"... sadly my machine is too old to use this workflow efficiently (how long does it take you to get a result?)
The way you mix the prompt is really interesting, and the service of adding the needed models is great!
Could you perhaps reveal where to put the ControlNet node, as a last thing? (never explored ControlNet in ComfyUI ;))
Thumbs up!
I added a new version with CNet already linked.
Generation takes around 15 s for the first picture and around a minute for the upscale. I usually disconnect the upscaler or even the detailers when I'm only prototyping.
Oooh, this does look schwifty, I had an idea of breaking up the prompt like this. Just been too busy screwing around with different forms of noise to do it lol.
Thanks for sharing!
