Hey everyone! Newbie ComfyUI user here. I struggled to find a good inpainting workflow for automatically masking and changing clothes, so after a lot of trial and error, here’s what I came up with. It's not perfect, but it works surprisingly well for me, and hopefully, it’ll be useful to you too.
This workflow focuses on making image editing a bit more streamlined. It uses automatic segmentation to identify and mask elements like clothing and fashion accessories. Then it uses ControlNet to maintain image structure and a custom inpainting technique (based on Fooocus inpaint) to seamlessly replace or modify parts of the image (in the SDXL version).
Here’s a breakdown of the process:
Automatic Masking: Uses semantic segmentation to automatically create masks for clothes and fashion elements (a short code sketch of this step follows the list below).
Image Preparation: Crops and prepares the image for editing.
Structure Preservation: Employs ControlNet to maintain image structure (SDXL version only; Flux didn't need it in my testing).
Fooocus-based Inpainting: Applies inpainting techniques adapted from Fooocus (SDXL).
Final Assembly: Stitches the edited image back together.
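To give an idea of what the Automatic Masking step is doing, here's a minimal Python sketch of running the Segformer clothes model directly with the transformers library, outside ComfyUI. The input file name and the class index for upper clothes are assumptions for illustration (check the model's config.json for the full label map); in the workflow, the custom nodes handle all of this for you.

import torch
import numpy as np
from PIL import Image
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation

# Same model the masking nodes use (repo linked in section 2 below)
processor = SegformerImageProcessor.from_pretrained("mattmdjaga/segformer_b2_clothes")
model = AutoModelForSemanticSegmentation.from_pretrained("mattmdjaga/segformer_b2_clothes")

image = Image.open("person.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # per-class scores at reduced resolution

# Upscale the scores back to the original resolution and pick the best class per pixel
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
labels = upsampled.argmax(dim=1)[0]

# Class 4 is "Upper-clothes" in this model's label map (assumption - verify in config.json)
mask = (labels == 4).numpy().astype(np.uint8) * 255
Image.fromarray(mask).save("clothes_mask.png")  # roughly what gets fed to the inpaint step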
I hope this helps anyone facing similar challenges. Feel free to modify and improve it!
Workflows:
This page contains three workflow variations:
SDXL: The primary workflow. Uses ControlNet for structure and Fooocus-based inpainting (in my opinion, it offers the best balance of speed and quality).
Flux Fill: A workflow that uses the new Flux Fill model. Does not require ControlNet in my testing.
Flux Fill GGUF: Similar to Flux Fill but utilizes the GGUF model format for potential performance benefits.
Getting Started:
You'll need to install the following custom nodes and models:
1. Custom Nodes:
The necessary nodes can be found through the ComfyUI Manager. However, some users have reported installation issues regarding the fashion masking nodes. Here's a guide:
Nodes Repository: https://github.com/StartHua/Comfyui_segformer_b2_clothes
Installation:
Install the nodes via ComfyUI Manager.
Navigate to your ComfyUI custom nodes directory: \ComfyUI\custom_nodes\Comfyui_segformer_b2_clothes
Open a command prompt in that directory (you can type cmd in the folder's address bar and press Enter).
Run the following command: pip install -r requirements.txt
2. Segmentation Models:
You'll need the model files from Hugging Face (links below). These links only contain the files needed to run the nodes, not the nodes themselves. Download the model.safetensors, preprocessor_config.json, and config.json files and place them in the following directories (if you'd rather script the downloads, there's a small sketch at the end of this section):
Segformer B2 Clothes:
Hugging Face Link: https://huggingface.co/mattmdjaga/segformer_b2_clothes
Place files in: \ComfyUI\models\segformer_b2_clothes
Segformer B3 Fashion:
Hugging Face Link: https://huggingface.co/sayeed99/segformer-b3-fashion
Place files in: \ComfyUI\models\segformer_b3_fashion
(The workflow includes a switch to select between these two segmentation models. They have different strengths and weaknesses, so try both if one doesn't work well. Remember to adjust the mask expansion as needed.)
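If you'd rather grab the segmentation model files with a script instead of the browser, here's a minimal sketch using the huggingface_hub package. It assumes you run it from the folder that contains your ComfyUI directory and that your huggingface_hub version supports the local_dir argument; adjust the paths to your install.

from huggingface_hub import hf_hub_download

# Repo -> target folder inside your ComfyUI install (adjust paths as needed)
repos = {
    "mattmdjaga/segformer_b2_clothes": "ComfyUI/models/segformer_b2_clothes",
    "sayeed99/segformer-b3-fashion": "ComfyUI/models/segformer_b3_fashion",
}
files = ["model.safetensors", "preprocessor_config.json", "config.json"]

for repo_id, target_dir in repos.items():
    for filename in files:
        # Downloads each file straight into the matching models folder
        hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target_dir)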
3. Fooocus Inpaint Models:
Hugging Face Link: https://huggingface.co/lllyasviel/fooocus_inpaint (You need inpaint_v26.fooocus.patch, fooocus_lama.safetensors, and fooocus_inpaint_head.pth from this repository; place them in ComfyUI\models\inpaint)
Feel free to ask if you have any questions. Happy inpainting!
Description
Adapted the SDXL workflow to the new Flux Fill model. Thanks to the model's good image understanding, this workflow doesn't need ControlNet, and obviously I removed the Fooocus inpaint node.
Feel free to suggest any improvements, and I'll try to apply them!
Comments (14)
A question: how do you install the fashion nodes?
I'm getting faults when installing the fashion nodes (they don't load), and where do I place the models?
This is the nodes repo: https://github.com/StartHua/Comfyui_segformer_b2_clothes
- After installing them, go to:
\ComfyUI\custom_nodes\Comfyui_segformer_b2_clothes
- Type cmd in the folder's address bar, then press Enter
- Type this in the command prompt:
pip install -r requirements.txt
- From Hugging Face (links in the post description), download the model.safetensors, preprocessor_config.json, and config.json files, and place them respectively in:
\ComfyUI\models\segformer_b2_clothes
and
\ComfyUI\models\segformer_b3_fashion
@Nopha_ Thanks, will try it when I get home.
@Nopha_ I installed it now, but: What preview_mask node are you using? It is missing when I loaded the workflow and it is not showing up in the Manager. I used a different mask preview node and got this error: mpt: {'type': 'invalid_prompt', 'message': 'Cannot execute because a node is missing the class_type property.', 'details': "Node ID '#125'", 'extra_info': {}}
@Learning2023 https://github.com/antrobot1234/antrobots-comfyUI-nodepack this is the preview mask node
@Nopha_ Thanks a lot. It worked!
@Learning2023 Glad to help!
@Nopha_ Looks good, though I can't seem to find the model you used in your workflow: agflux fill fp8 inpainting q4_k_s.gguf. Is it different from the normal fill model? Is it this one: AGFlux_Fill_NSFW_fp8 - AGFlux_Fill_NSFW_v1.7_fp8 | Flux Checkpoint | Civitai
@Learning2023 It's just a GGUF version I quantized of https://civitai.com/models/978482/agfluxfillnsfwfp8 . I might upload it, but I'm not sure I have the permissions to do that. It's a Q4, so the results are not the best. I'm pretty sure there are better versions of this quantization here or on Hugging Face, though.
@Nopha_ I see. I'll use the original fp8 from AgFlux. Thanks again!
@Nopha_ Hi, finally got around to trying it.
I had a hardware failure and had to wait for parts.
Now I'm getting an error when trying to install it:
warning: variable does not need to be mutable
--> tokenizers-lib\src\models\unigram\model.rs:265:21
|
265 | let mut target_node = &mut best_path_ends_at[key_pos];
| ----^^^^^^^^^^^
| |
| help: remove this mut
|
= note: #[warn(unused_mut)] on by default
warning: variable does not need to be mutable
--> tokenizers-lib\src\models\unigram\model.rs:282:21
|
282 | let mut target_node = &mut best_path_ends_at[starts_at + mblen];
| ----^^^^^^^^^^^
| |
| help: remove this mut
warning: variable does not need to be mutable
--> tokenizers-lib\src\pre_tokenizers\byte_level.rs:200:59
|
200 | encoding.process_tokens_with_offsets_mut(|(i, (token, mut offsets))| {
| ----^^^^^^^
| |
| help: remove this mut
error: casting &T to &mut T is undefined behavior, even if the reference is unused, consider instead using an UnsafeCell
--> tokenizers-lib\src\models\bpe\trainer.rs:526:47
|
522 | let w = &words[*i] as *const _ as *mut _;
| -------------------------------- casting happend here
...
526 | let word: &mut Word = &mut (*w);
| ^^^^^^^^^
|
= note: for more information, visit <https://doc.rust-lang.org/book/ch15-05-interior-mutability.html>
= note: #[deny(invalid_reference_casting)] on by default
warning: tokenizers (lib) generated 3 warnings
error: could not compile tokenizers (lib) due to 1 previous error; 3 warnings emitted
bunch of file paths
error: cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib -- failed with code 101
[end of output]
Any idea how to get past this?
I already updated Rust and Visual Studio, but it did not help.
@xeonke844 Is your Comfy working at all? I'm no expert, but I don't think this issue is related to the workflow or the nodes used in it. It looks like the tokenizers library is having some issues. If you replaced important components in your PC, you should probably reinstall from scratch.
@Nopha_ Got it to work.
I had to manually create the model folders and place the models in them.
I also had to manually install the preview node.
Yeah, my Comfy is working, but I suspect it has to do with the fact that either I don't run it from the C drive, or they have hardcoded references to the Windows temp folders on the C drive instead of using the system variables for those folders.
My user temp and system temp are on their own dedicated drive via the Windows settings, not a symlink.
For instance, FaceFusion is also hardcoded for the C drive but can create huge temp folders, as it saves every frame of a video as a PNG after each pass.
@xeonke844 Glad you got it working!


