CivArchive
    [FLUX | SDXL] Auto clothes inpainting - FLUX FILL

    Hey everyone! Newbie ComfyUI user here. I struggled to find a good inpainting workflow for automatically masking and changing clothes, so after a lot of trial and error, here’s what I came up with. It's not perfect, but it works surprisingly well for me, and hopefully, it’ll be useful to you too.

    This workflow focuses on making image editing a bit more streamlined. It uses automatic segmentation to identify and mask elements like clothing and fashion accessories. Then it uses ControlNet to maintain image structure and a custom inpainting technique (based on Fooocus inpaint) to seamlessly replace or modify parts of the image (in the SDXL version).

    Here’s a breakdown of the process:

    • Automatic Masking: Uses semantic segmentation to automatically create masks for clothes and fashion elements.

    • Image Preparation: Crops and prepares the image for editing.

    • Structure Preservation: Employs ControlNet to maintain image structure (in the SDXL version, Flux didn't need that in my testing).

    • Fooocus-based Inpainting: Applies inpainting techniques adapted from Fooocus (SDXL).

    • Final Assembly: Stitches the edited image back together.
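    For readers curious what the masking and cropping steps boil down to, here is a minimal NumPy sketch, not the workflow's actual nodes: the clothing class IDs below are hypothetical (the real IDs depend on the segmentation model's config.json), and the dilation/margin values are just illustrative defaults.

    ```python
    import numpy as np

    # Hypothetical class IDs for a clothes-segmentation label map
    # (the real IDs come from the segformer_b2_clothes model's config.json).
    CLOTHES_CLASSES = {4, 5, 6, 7}  # e.g. upper-clothes, skirt, pants, dress

    def clothes_mask(labels: np.ndarray, grow: int = 8) -> np.ndarray:
        """Binary mask of clothing pixels, grown by `grow` px so the
        inpainting edge blends past the garment boundary."""
        mask = np.isin(labels, list(CLOTHES_CLASSES))
        # Naive 4-neighbour dilation, repeated `grow` times.
        for _ in range(grow):
            padded = np.pad(mask, 1)
            mask = (padded[1:-1, 1:-1] | padded[:-2, 1:-1] | padded[2:, 1:-1]
                    | padded[1:-1, :-2] | padded[1:-1, 2:])
        return mask

    def crop_to_mask(image: np.ndarray, mask: np.ndarray, margin: int = 32):
        """Crop image and mask to the mask's bounding box plus a margin,
        mirroring the 'Image Preparation' step; returns the crop offset
        so the result can be stitched back in the 'Final Assembly' step."""
        ys, xs = np.nonzero(mask)
        y0 = max(ys.min() - margin, 0)
        y1 = min(ys.max() + margin + 1, mask.shape[0])
        x0 = max(xs.min() - margin, 0)
        x1 = min(xs.max() + margin + 1, mask.shape[1])
        return image[y0:y1, x0:x1], mask[y0:y1, x0:x1], (y0, x0)
    ```

    The returned offset is what lets the final step paste the inpainted crop back at the right position in the original image.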

    I hope this helps anyone facing similar challenges. Feel free to modify and improve it!

    Workflows:

    This page contains three workflow variations:

    1. SDXL: The primary workflow. Uses ControlNet for structure and Fooocus-based inpainting (in my opinion, it offers the best balance of speed and quality).

    2. Flux Fill: A workflow that uses the new Flux Fill model. It did not require ControlNet in my testing.

    3. Flux Fill GGUF: Similar to Flux Fill but utilizes the GGUF model format for potential performance benefits.

    Getting Started:

    You'll need to install the following custom nodes and models:

    1. Custom Nodes:

    The necessary nodes can be found through the ComfyUI Manager. However, some users have reported installation issues with the fashion masking nodes. Here's a guide:

    • Nodes Repository: https://github.com/StartHua/Comfyui_segformer_b2_clothes

    • Installation:

      1. Install the nodes via ComfyUI Manager.

      2. Navigate to your ComfyUI custom nodes directory: \ComfyUI\custom_nodes\Comfyui_segformer_b2_clothes

      3. Open a command prompt in that directory (you can type cmd in the folder path and press enter).

      4. Run the following command: pip install -r requirements.txt

    2. Segmentation Models:

    You'll need the model files from Hugging Face (links below). These links contain only the files needed to run the nodes, not the nodes themselves. Download the model.safetensors, preprocessor_config.json, and config.json files and place them in \ComfyUI\models\segformer_b2_clothes and \ComfyUI\models\segformer_b3_fashion, respectively:
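    As a sketch, the expected folder layout (directory names taken from the author's reply in the comments below) can be prepared from the ComfyUI install's parent folder like this; the paths are shown Unix-style, so adjust separators on Windows:

    ```shell
    # Create the model folders the segmentation nodes expect.
    mkdir -p ComfyUI/models/segformer_b2_clothes
    mkdir -p ComfyUI/models/segformer_b3_fashion

    # Each folder should end up containing the three files downloaded from
    # Hugging Face: model.safetensors, config.json, preprocessor_config.json.
    ls ComfyUI/models
    ```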

    3. Fooocus Inpaint Models:

    Feel free to ask if you have any questions. Happy inpainting!

    Description

    Adapted the SDXL workflow to the new Flux Fill model. Thanks to the model's good image understanding, this workflow doesn't need ControlNet, and I removed the Fooocus inpaint node.
    Feel free to suggest any improvements; I'll try to apply them!

    FAQ

    Comments (14)

    xeonke844Dec 2, 2024
    CivitAI

    A question: how do you install the fashion nodes?
    I am getting faults when installing the fashion nodes; they don't load. And where do I place the models?

    Nopha_
    Author
    Dec 2, 2024· 2 reactions

    This is the nodes repo: https://github.com/StartHua/Comfyui_segformer_b2_clothes


    - After installing them, go to:

    \ComfyUI\custom_nodes\Comfyui_segformer_b2_clothes

    - Type cmd in the folder path, then press enter

    - Type in the command prompt:

    pip install -r requirements.txt

    - From Hugging Face (link in the post description), download the model.safetensors, preprocessor_config.json, and config.json files, and place them respectively in:

    \ComfyUI\models\segformer_b2_clothes

    and

    \ComfyUI\models\segformer_b3_fashion

    xeonke844Dec 5, 2024

    @Nopha_ Thanks, will try it when I get home

    Learning2025Dec 7, 2024

    @Nopha_ I installed it now, but: What preview_mask node are you using? It is missing when I loaded the workflow and it is not showing up in the Manager. I used a different mask preview node and got this error: mpt: {'type': 'invalid_prompt', 'message': 'Cannot execute because a node is missing the class_type property.', 'details': "Node ID '#125'", 'extra_info': {}}

    Nopha_
    Author
    Dec 7, 2024

    @Learning2025 This is the preview mask node: https://github.com/antrobot1234/antrobots-comfyUI-nodepack

    Learning2025Dec 7, 2024

    @Nopha_ Thanks a lot. It worked!

    Nopha_
    Author
    Dec 7, 2024

    @Learning2025 Glad to help!

    Learning2025Dec 7, 2024

    @Nopha_ Looks good, though I can't seem to find the model you used in your workflow: agflux fill fp8 inpainting q4_k_s.gguf. Is it different from the normal fill model? Is it this one: AGFlux_Fill_NSFW_fp8 - AGFlux_Fill_NSFW_v1.7_fp8 | Flux Checkpoint | Civitai

    Nopha_
    Author
    Dec 7, 2024

    @Learning2025 It's just a GGUF version I quantized of https://civitai.com/models/978482/agfluxfillnsfwfp8 . I might upload it, but I'm not sure I have the permission to do that. It's a Q4, so the results are not the best. I'm pretty sure there are better quantizations than mine here or on Hugging Face, though.

    Learning2025Dec 7, 2024· 1 reaction

    @Nopha_ I see. I'll use the original fp8 from AgFlux. Thanks again!

    xeonke844Jan 7, 2025

    @Nopha_ Hi, finally got around to trying it.
    Had a hardware failure and had to wait for parts.

    Now I am getting an error when trying to install it:
    warning: variable does not need to be mutable

    --> tokenizers-lib\src\models\unigram\model.rs:265:21

    |

    265 | let mut target_node = &mut best_path_ends_at[key_pos];

    | ----^^^^^^^^^^^

    | |

    | help: remove this mut

    |

    = note: #[warn(unused_mut)] on by default

    warning: variable does not need to be mutable

    --> tokenizers-lib\src\models\unigram\model.rs:282:21

    |

    282 | let mut target_node = &mut best_path_ends_at[starts_at + mblen];

    | ----^^^^^^^^^^^

    | |

    | help: remove this mut

    warning: variable does not need to be mutable

    --> tokenizers-lib\src\pre_tokenizers\byte_level.rs:200:59

    |

    200 | encoding.process_tokens_with_offsets_mut(|(i, (token, mut offsets))| {

    | ----^^^^^^^

    | |

    | help: remove this mut

    error: casting &T to &mut T is undefined behavior, even if the reference is unused, consider instead using an UnsafeCell

    --> tokenizers-lib\src\models\bpe\trainer.rs:526:47

    |

    522 | let w = &words[*i] as *const _ as *mut _;

    | -------------------------------- casting happend here

    ...

    526 | let word: &mut Word = &mut (*w);

    | ^^^^^^^^^

    |

    = note: for more information, visit <https://doc.rust-lang.org/book/ch15-05-interior-mutability.html>

    = note: #[deny(invalid_reference_casting)] on by default

    warning: tokenizers (lib) generated 3 warnings

    error: could not compile tokenizers (lib) due to 1 previous error; 3 warnings emitted

    bunch of file paths
    error: cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib -- failed with code 101

    [end of output]

    Any idea how to get past this?
    I already updated Rust and Visual Studio, but it did not help.

    Nopha_
    Author
    Jan 7, 2025

    @xeonke844 Is your comfy working at all? I'm no expert, but I don't think this issue is related to the workflow or the nodes used in it. It looks like the tokenizers library is having some issues. If you replaced important components in your PC, you should probably reinstall from scratch.

    xeonke844Jan 7, 2025

    @Nopha_ Got it to work.
    Had to manually create the model folders and place the models in them.
    Also manually installed the preview node.

    Yeah, my comfy is working, but I suspect it has to do with the fact that either I don't run it from the C drive, or they have hardcoded references to the Windows temp folders on the C drive instead of using the system variables for those folders.
    My user temp and system temp are on their own dedicated drive via the Windows settings, not a symlink.
    For instance, FaceFusion is also hardcoded for the C drive, but it can create huge temp folders, as it saves every frame of a video as a PNG after each pass.

    Nopha_
    Author
    Jan 7, 2025

    @xeonke844 Glad you got it working!

    Workflows
    Flux.1 D

    Details

    Downloads
    317
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/1/2024
    Updated
    5/15/2026
    Deleted
    -

    Files

    FLUXSDXLAutoClothes_fluxFILL.zip

    Mirrors