    Flux Inpaint and Outpaint Workflow - Flux Inpaint Outpaint

    Notice

    • This workflow is no longer state of the art; please refer to the Flux.1 Fill model and the official ComfyUI workflows for your inpainting and outpainting needs. Details below:

    • Black Forest Labs has since released the official FLUX.1 Tools: https://blackforestlabs.ai/flux-1-tools/

    • The tools include the Flux.1 Fill [dev] model, which is better than the alimama ControlNet used in this workflow.

    • There is native ComfyUI support for the FLUX.1 Tools: https://blog.comfy.org/day-1-support-for-flux-tools-in-comfyui/ which you should use for your inpainting and outpainting needs.

    • You can grab the official ComfyUI inpaint and outpaint workflows from https://comfyanonymous.github.io/ComfyUI_examples/flux/ by downloading the images (the workflow is embedded within each image) and opening them in ComfyUI.

    • If you have low VRAM, you can download the FP8 version of the Flux.1 Fill [dev] model from here: https://civarchive.com/models/969431/flux-fill-fp8

    • This workflow will not be updated anymore because I have nothing to add to the official ComfyUI inpaint and outpaint workflows.

    Introduction

    The latest version of this workflow uses

    • alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta

    • alimama-creative/FLUX.1-Turbo-Alpha

    to achieve 8-step inpainting and outpainting within the same workflow.

    Models

    1. FLUX.1-Turbo-Alpha.safetensors (models/loras/flux): https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha

    2. FLUX.1-dev-Controlnet-Inpainting-Beta-fp8.safetensors (models/controlnet): https://huggingface.co/alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta

      Download diffusion_pytorch_model.safetensors, rename the file, and use Kijai's script (https://huggingface.co/Kijai/flux-fp8/discussions/7#66ae0455a20def3de3c6d476) to convert it to FP8 so it fits into 16 GB of VRAM (a minimal conversion sketch is shown after this list).

    3. flux1-dev-fp8-e4m3fn.safetensors (models/diffusion_models/flux): https://huggingface.co/Kijai/flux-fp8/tree/main

    4. t5xxl_fp8_e4m3fn_scaled.safetensors (models/clip): https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main

    5. ViT-L-14-BEST-smooth-GmP-TE-only-HF-format.safetensors (models/clip): https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/tree/main
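
    For reference, here is a minimal sketch of what such an FP8 conversion can look like. This is an illustration, not Kijai's exact script: it assumes a PyTorch build with float8 support and the safetensors package, and the file names are simply the ones used in the list above.

        # fp8_convert.py - illustrative sketch of a safetensors FP8 (e4m3fn) conversion
        import torch
        from safetensors.torch import load_file, save_file

        src = "diffusion_pytorch_model.safetensors"                      # downloaded ControlNet weights
        dst = "FLUX.1-dev-Controlnet-Inpainting-Beta-fp8.safetensors"    # name expected by this workflow

        state_dict = load_file(src)
        converted = {}
        for name, tensor in state_dict.items():
            # Cast floating-point weights to FP8; real conversion scripts may keep
            # some layers (e.g. norms) in higher precision for quality.
            if tensor.dtype in (torch.float32, torch.float16, torch.bfloat16):
                converted[name] = tensor.to(torch.float8_e4m3fn)
            else:
                converted[name] = tensor

        save_file(converted, dst)
        print(f"saved FP8 weights to {dst}")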

    Custom Nodes

    1. segment anything (if you have > 16GB VRAM and want to use automatic segmentation)

    2. Various ComfyUI Nodes by Type

    3. KJNodes for ComfyUI

    Description

    This version uses alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta and alimama-creative/FLUX.1-Turbo-Alpha for 8-step Flux inpainting and outpainting within the same workflow.

    FAQ

    Comments (25)

    superuser111 · Nov 4, 2024 · 1 reaction

    Just what I was looking for! Thank you!

    7sanal7san · Nov 9, 2024 · 1 reaction

    till now this is my Fav outpaint EvEr!!!!!

    alexmihaic522 · Nov 12, 2024 · 1 reaction

    It's very good, thanks!

    diogod · Nov 21, 2024 · 14 reactions

    You are not compositing the image at the end, which for an inpainting workflow is very wrong. VEA encoding and decoding degrade the original, not-inpainted image. Look at my workflow to see how to properly implement it.

    PixelMuseAI (Author) · Nov 21, 2024 · 5 reactions

    Not sure if you have noticed that this is a controlnet workflow and does not VAE encode the original image.

    Looked at your workflow and if I were doing pure inpainting, I would use the crop and stitch nodes instead:
    https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch

    However, this workflow implements outpainting as well, so there would be no savings in terms of a smaller image for sampling.

    diogod · Nov 21, 2024 · 1 reaction

    @PixelMuseAI I didn't know that the alimama ControlNet inpainting would be sufficient to carry the image to the sampler. Anyway, there is no way to do that without VEA encoding; that is why it receives the VEA. I did not study this specific node enough, so I don't know if it has an internal "crop and stitch" somehow, but I doubt it, since the output of the sampler is a latent. You would still be VEA encoding and decoding. Without a proper composite at the end, you will degrade the whole image.
    The outpainting is another thing. It makes compositing harder, but not impossible.
    I do use crop and stitch in my workflow.

    diogod · Nov 21, 2024

    @PixelMuseAI I just excluded the outpainting part of your workflow; this is how my image (the not-inpainted area) looks after 5 consecutive inpaintings: https://civitai.com/images/41321523

    You need a composite after the inpainting decode or this will happen.

    PixelMuseAI (Author) · Nov 21, 2024

    @diogod 
    1) It's a Variational Auto-Encoder, a VAE, not a VEA.
    2) Please take a look at my workflow; I am using an empty latent image.

    3) I did not say that I use the crop and stitch nodes; I said that I would have used them, instead of the way you did it in your workflow, if my workflow were a pure inpainting workflow (which it is not).

    4) My workflow is not a pure inpainting workflow, please take a look at the title of the workflow. This is what makes the workflow unique. It can do both inpainting and outpainting within a single KSampler pass. I definitely do not want to make my workflow only do inpainting because that defeats the point of my workflow.

    5) I did not say that outpainting makes the compositing any harder. One of the main benefits of using crop and stitch is that sampling is faster because you are sampling fewer pixels, but this benefit disappears when you are outpainting.

    diogod · Nov 21, 2024 · 1 reaction

    @PixelMuseAI 1) Sure, yes, it's VAE, my bad. I mix up letters easily.
    2) I have checked your workflow, or else I would not be commenting on it...
    3) I DID use "crop and stitch", it's literally an option in my workflow. But my workflow doesn't matter, it was just an example. "Crop and stitch" is the same thing as "compositing the image at the end", which is something you NEED to do, either manually (with separate nodes) or with crop and stitch. It doesn't matter if you are doing outpainting or inpainting: if you don't do the composite (stitch the new pixels to the old pixels), the image will have degradation, as I showed you in my example in your gallery.
    4) You can keep your outpainting. But you need to composite at the end, it's as simple as that, or else you will lose quality in the original NOT-inpainted area, as I showed you in the example.
    5) You did not say that, I said that. Outpainting makes it a little harder to composite the outpainted result area and the inpainted area at the end, and NO, you don't need to use the "crop and stitch" node. But you can. I don't know why you keep bringing this up.

    Did you see how bad the image gets after inpainting a bunch of times? That is what your workflow is doing. That was just a proof of concept that, yes, you are using VAE encoding and decoding.
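
    For context, the composite being discussed here boils down to a masked blend: keep the original pixels wherever the mask is black and take the freshly decoded pixels only where the mask is white. A minimal PyTorch sketch with illustrative tensor names follows; in ComfyUI the same thing is done by nodes such as ImageCompositeMasked or the stitch half of crop-and-stitch.

        import torch

        def composite(original: torch.Tensor, decoded: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
            """Blend the decoded (inpainted) image back over the original.

            original, decoded: float tensors of shape [B, H, W, C] in [0, 1]
            mask:              float tensor of shape [B, H, W], 1.0 where new pixels go
            """
            mask = mask.unsqueeze(-1)  # [B, H, W, 1] so it broadcasts over the channels
            return original * (1.0 - mask) + decoded * mask

        # Only the masked region is replaced; the untouched pixels keep their original
        # values instead of going through a lossy VAE decode.
        # result = composite(source_image, vae_decoded_image, inpaint_mask)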

    PixelMuseAI (Author) · Nov 21, 2024

    @diogod Please take a look at the https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch nodes. You did not use these nodes. I said that I would use this set of nodes if I were doing an inpainting-only workflow, rather than the way you did it. This set of nodes allows you to (as I quote from the author's GitHub page):

    "✂️ Resize Image Before Inpainting" is a node that resizes an image before inpainting, for example to upscale it to keep more detail than in the original image.

    And no, outpainting does not make it harder to composite; you are still using the mask to transfer the new pixels onto the old image, you just need to composite onto the padded image (a sketch follows this comment).

    And no, if you look at my workflow,

    1) I did not VAE encode the original image

    2) I am using an empty latent image

    3) I am using a ControlNet for inpainting; this is different from the technique of VAE encoding the original image for inpainting.

    If you are going to make comments like this, it would be nice if your wording were more accurate, to benefit the users who come to the page and read your comments.
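
    A sketch of the outpainting case mentioned above, i.e. compositing onto the padded image: pad the source canvas to the outpainted size, then do the same masked blend. Names and the channels-last layout are illustrative, not taken from either workflow.

        import torch
        import torch.nn.functional as F

        def outpaint_composite(original, decoded, mask, pad_left, pad_right, pad_top, pad_bottom):
            """original: [B, H, W, C] source image; decoded and mask are already at the padded size,
            with mask = 1.0 over the new border (and any inpainted area)."""
            # Pad the original canvas to the outpainted size (F.pad works on [B, C, H, W]).
            padded = F.pad(original.permute(0, 3, 1, 2),
                           (pad_left, pad_right, pad_top, pad_bottom)).permute(0, 2, 3, 1)
            mask = mask.unsqueeze(-1)
            # Same masked blend as plain inpainting, just against the padded canvas.
            return padded * (1.0 - mask) + decoded * mask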

    diogod · Nov 22, 2024 · 1 reaction

    @PixelMuseAI 

    Again, I have used that node since v1.0, you just didn't look. It's in the group called "area localized inpainting". It's optional because, if I'm inpainting the full image, what this node does is very simple to replicate with "Image Composite From Mask". If I want to do an area with context, then this node is great and very helpful. Again, I don't know why you keep bringing it up, since it's pretty obvious that I know what it does and so do you. What you apparently keep not understanding is that you need to composite the image at the end. THAT node does exactly that in the "stitch" part.

    I'm trying to help you make a simple improvement and not degrade your image; I showed you proof that your workflow IS DEGRADING the image. How else do you explain the image I showed you?

    1) Yes, you do VAE encoding, right there in the alimama ControlNet. That is why it requires the VAE.


    2) It doesn't matter, since the alimama ControlNet apply node carries the image to the sampler; otherwise how would the original parts of your image be present at the end? In any case, this is not important. What is important is that you did not use a composite at the end, and therefore the parts of your image that were not inpainted ARE degraded. This is a simple fact. The model works with latents and there is no way it keeps your original image except in latent form. Whenever you simply decode the latent there is a loss. It's a lossy process. Even if by magic you did not encode the pixel image, the decode is lossy.

    Unless you are doing a whole new img2img image, in which case this would not be inpainting or outpainting, you always need to composite at the end.

    3) It is not. The ControlNet only helps with whatever the sampler already does. It makes no sense to think you don't need to composite at the end. Again, how do you explain the degraded image I showed you, made using your workflow?


    I'm trying to be as accurate as possible. I even showed you proof. I'm not trying to fight you, this is nonsensical. It's a simple flaw that pretty much everyone makes. Even ComfyUI developers keep doing it, as I pointed out here: https://www.reddit.com/r/StableDiffusion/comments/1gwibxr/comment/ly9p37z/
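
    The lossy round trip referred to here is easy to check outside of ComfyUI: encoding an image with a VAE and decoding it straight back does not return identical pixels. A small sketch using the diffusers library; the checkpoint name is only a public example, not one of the models in this workflow.

        import torch
        from diffusers import AutoencoderKL

        # Any SD-family VAE demonstrates the point; this checkpoint is just an example.
        vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

        # A random "image" in [-1, 1], shaped [batch, channels, height, width].
        image = torch.rand(1, 3, 512, 512) * 2.0 - 1.0

        with torch.no_grad():
            latent = vae.encode(image).latent_dist.mean  # to latent space
            recon = vae.decode(latent).sample            # straight back to pixels

        # Non-zero even though nothing was edited: the encode/decode round trip alone is
        # lossy, which is why untouched regions drift when re-decoded instead of composited.
        print("round-trip MSE:", torch.mean((image - recon) ** 2).item())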

    diogod · Nov 22, 2024

    Or here https://www.reddit.com/r/StableDiffusion/comments/1gwilop/comment/ly9x5k5/, where mcmonkey4eva, you know, the main developer of SwarmUI, agrees with me that this IS NEEDED.

    PixelMuseAI (Author) · Nov 22, 2024 · 2 reactions

    @diogod Please look at the official ComfyUI inpaint and outpaint workflows for the new Flux tools. Do you see a composite? No.

    If you use regional prompting, you will never need to do multiple inpaints.

    diogod · Nov 23, 2024 · 2 reactions

    @PixelMuseAI Yes, I've seen them and they are wrong.
    Again, please answer me, how do you explain this degradation here: https://civitai.com/images/41321523 (his face was never inpainted, only the shirt)

    PixelMuseAI (Author) · Nov 23, 2024 · 4 reactions

    @diogod if you think the way you're doing things is more correct than the official ComfyUI example, then you can go ahead with your thinking.

    If you think that you need to do inpainting multiple times, which is less efficient than doing everything in one pass, then you can go ahead and composite.

    For the rest of us, there are other better and more efficient ways to do things. Single pass, regional prompting.

    diogod · Nov 23, 2024 · 7 reactions

    @PixelMuseAI Gosh, you must be the most stubborn person I've ever met. You must think you can't possibly ever make a mistake, right?
    Keep doing your workflow wrong then. I don't care. Whatever. Jesus. At this point, I don't think I'm talking to a rational person anymore. Good luck and goodbye.

    PixelMuseAI (Author) · Nov 23, 2024 · 2 reactions

    @diogod There are many types of people on this planet, all with different ways of doing the same thing.

    You might not agree with the official ComfyUI implementation of inpainting/outpainting, but that doesn't make it wrong.

    You have your way of doing multiple inpainting passes. But you cannot overlook other, more efficient ways (regional prompting) of achieving the same outcome.

    Methods that don't agree with the way you do things can be rational as well.

    rzyua · Nov 24, 2024 · 1 reaction

    @PixelMuseAI They are right, though. Your workflow (as well as the official one) is destructive to the original image, which makes multiple inpaints impossible or at the very least impractical. Multiple inpaints are a normal workflow for many use cases and regional prompting is not a replacement for that.

    PixelMuseAI (Author) · Nov 24, 2024 · 1 reaction

    @rzyua If you see my very first comment, I explained that I would use the crop and stitch nodes if I were doing pure inpainting. If you need to do multiple inpainting passes, you can use a similar method.

    I have put up a notice in the description linking to the official ComfyUI Flux.1 Fill workflow and stating that I will not be updating this workflow anymore, since Flux.1 Fill is state of the art (as shown in the numbers by Black Forest Labs) and I have nothing to add to the official ComfyUI workflows. If you or any users feel that the official workflow is wrong, I would recommend that you contact ComfyUI to make changes to their workflow.

    You are also welcome to take a look at the Black Forest Labs implementation and maybe contact them to change it if there is anything you find incorrect. https://github.com/black-forest-labs/flux/blob/main/src/flux/cli_fill.py

    SomebodySerious · Jan 1, 2025

    @PixelMuseAI wow @diogod i admire your effort @rzyua they are right, but hey..

    AgeOfAlgorithms · Nov 23, 2024

    It just crashes every time on my RTX 3090.

    "Requested to load FluxClipModel_

    Loading 1 new model

    loaded completely 0.0 4903.231597900391 True

    model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16

    model_type FLUX

    Killed"

    PixelMuseAI (Author) · Nov 24, 2024 · 2 reactions

    I am on a 40-series card, but if I am not wrong, the 30-series cards do not support FP8. You can try changing the weight dtype to fp16, since you are on a 3090, which has more VRAM. I am not sure if that would solve the problem.

    However, I would suggest you check out the latest FLUX.1 Tools instead of using this workflow.

    Black Forest Labs has since released the official FLUX.1 Tools: https://blackforestlabs.ai/flux-1-tools/

    The tools include the Flux.1 Fill [dev] model, which is better than the alimama ControlNet used in this workflow.

    There is native ComfyUI support for the FLUX.1 Tools: https://blog.comfy.org/day-1-support-for-flux-tools-in-comfyui/ which you should use for your inpainting and outpainting needs.

    You can grab the official ComfyUI inpaint and outpaint workflows from https://comfyanonymous.github.io/ComfyUI_examples/flux/ by downloading the images (the workflow is embedded within each image) and opening them in ComfyUI.

    If you have low VRAM, you can download the FP8 version of the Flux.1 Fill [dev] model from here: https://civitai.com/models/969431/flux-fill-fp8

    AgeOfAlgorithms · Dec 31, 2024

    @PixelMuseAI thanks so much! I'll check out flux1fill

    auroch22934 · Mar 24, 2025

    Hello, is there a way to change the size of the final image? Currently it's adding unwanted info on the sides. Thanks

    ps1copato · May 23, 2025

    Thanks for the SDXL workflow. I was looking for something similar for days and found it on Reddit. It's pretty solid and it's the best outpainting that I've seen so far!

    Workflows
    Flux.1 D

    Details

    Downloads: 6,766
    Platform: CivitAI
    Platform Status: Available
    Created: 11/4/2024
    Updated: 5/13/2026
    Deleted: -

    Files

    fluxInpaintAnd_fluxInpaintOutpaint.zip

    Mirrors