Converted Flux Fill to FP8 for all my low-VRAM peeps.
Happy inpainting
UNETLoader
Error(s) in loading state_dict for Flux: size mismatch for img_in.weight: copying a param with shape torch.Size([3072, 384]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
EDIT: Update your comfyui people...
I'm using Forge UI and getting a similar error (VAE state dict gives weight dimensions as 3072x384, but the number of in channels when the model is initialized is 64).
Updated and same Error.
I have encountered the same problem.
After the update, it can now run normally. Thank you for sharing
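For what it's worth, the shape mismatch above is also a quick way to tell the two checkpoints apart: base Flux dev's `img_in` projects 64 input features, while Fill expects 384 (the extra channels carry the mask conditioning). A toy sketch, with plain shape tuples standing in for real tensors and a made-up helper name:

```python
def detect_flux_variant(shapes: dict) -> str:
    """Guess the Flux variant from the img_in.weight shape.

    `shapes` maps parameter names to (out_features, in_features) tuples,
    e.g. as read from a safetensors file header.
    """
    in_features = shapes.get("img_in.weight", (0, 0))[1]
    return {64: "base/dev", 384: "fill"}.get(in_features, "unknown")

# The two shapes from the error message above:
print(detect_flux_variant({"img_in.weight": (3072, 384)}))  # fill
print(detect_flux_variant({"img_in.weight": (3072, 64)}))   # base/dev
```

So the error means the loader built the base architecture but was handed Fill weights; updating ComfyUI teaches it the 384-channel layout.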
Can you create a GGUF Q8?
For anyone looking for this: YarvixPA/FLUX.1-Fill-dev-gguf · Hugging Face 👍😁👍
Is GGUF better than just the FP8?
@runebloodstone Yes, generations with GGUF 8 are almost identical to FP16, whereas FP8 is less accurate.
Thank you for doing this. But in my case the model sometimes doesn't follow the prompt at all somehow. Is it because it's FP8?
tested this against SporkySporkness/FLUX.1-Fill-dev-GGUF quants and the differences are very subtle with the same seed. I would guess any flaws are in the original model (or maybe the frontend you're using).
There is less prompt adherence because the model loses precision through quantisation.
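To see where the precision goes: FP8 e4m3 keeps only 3 mantissa bits, so every weight gets snapped to a coarse grid of 8 steps per power of two. A simplified pure-Python rounding sketch (normals only, ignoring subnormals and saturation, so not an exact emulation of the hardware format):

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest FP8 e4m3 value (simplified: normals only)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    e = math.floor(math.log2(mag))
    e = max(min(e, 8), -6)       # e4m3 normal exponent range (bias 7)
    step = 2.0 ** (e - 3)        # 3 mantissa bits -> 8 steps per octave
    return sign * round(mag / step) * step

print(quantize_e4m3(1.0))   # 1.0    (exactly representable)
print(quantize_e4m3(1.1))   # 1.125  (~2% rounding error)
print(quantize_e4m3(0.3))   # 0.3125 (~4% rounding error)
```

GGUF Q8 keeps roughly 8 bits of mantissa per value via per-block scales, which is why its outputs track FP16 more closely than FP8 does.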
a non-NSFW model
Stack an NSFW LoRA.
can you please post the workflow for using this?
I tried using the flux fill template but I am seeing grainy pixels.
using the following models
clip_name1: clip_l.safetensors
clip_name2: t5xxl_fp8_e4m3fn.safetensors
unet_name: fluxFillFP8_v10.safetensors
weight_dtype: fp8_e4m3fn
Use t5xxl_fp16 and update your ComfyUI. The default Flux Fill workflow should work then.
Same here. I already updated to the newest Comfy, but I still get a pixelated image; I don't know why.
I have the same thing. The image gets very muddy, consisting of dirty pixel mush. I used the standard fill workflow.
Can confirm that the problem is also present with the GGUF models (two of the GGUF conversions have this).
@Dainzh That's right, I was just using gguf. I took Q4 and Q5 from here: https://huggingface.co/SporkySporkness/FLUX.1-Fill-dev-GGUF and they both have this problem. Is there a normal gguf without this problem?
Seems like it's not a GGUF problem, or an FP8 problem, or anything model-related, because I've just tested the regular Fill dev model, which is 22 GB, and it has the same problem. Something tells me this is a problem in ComfyUI itself, and I don't really know what might be causing it.
@superuser111 Are you using any custom nodes by chance?
@Dainzh Very sad, hope the problem is resolved soon.
@Dainzh I used the standard workflow taken from here https://civitai.com/models/970162?modelVersionId=1088649, only replaced flux model node with gguf
Has anyone solved the noise problem?
@superdumbkitten What GPU do you have?
Thanks OP it worked! For those who are struggling try using the image provided by OP for testing.
Also found this guide below which gives details on expected parameter values.
https://comfyui-wiki.com/en/tutorial/advanced/flux-fill-workflow-step-by-step-guide
@Dainzh 8gb 2080s
For those who are still struggling with the noisy image: adjust the CFG to 7 or 8; the CFG is 1 in the default workflow.
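Why raising CFG changes anything: at each step the sampler blends the unconditional and prompt-conditioned predictions, and the scale multiplies the prompt's pull. A toy sketch of the standard classifier-free guidance mix, with plain lists standing in for tensors:

```python
def cfg_mix(uncond, cond, scale):
    """Classifier-free guidance: push the prediction `scale` times along
    the direction from the unconditional to the conditional estimate."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

print(cfg_mix([0.0, 0.5], [1.0, 0.5], 1.0))  # [1.0, 0.5] -- scale 1 = pure conditional
print(cfg_mix([0.0, 0.5], [1.0, 0.5], 7.0))  # [7.0, 0.5] -- agreement unchanged, disagreement amplified
```

Note that Flux dev is guidance-distilled (hence the default CFG of 1, which skips the unconditional pass entirely), so setting CFG above 1 roughly doubles the compute per step.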
SDXL outpainting, fast and quick, with fewer nodes: https://youtu.be/YNn-MUCRs60?si=yVtQR4eE-nRPC9xq
Does anyone have a workflow for using this model's outpainting feature?
Just use SDXL https://youtu.be/YNn-MUCRs60?si=yVtQR4eE-nRPC9xq
@1girls No thanks, it's like using a push-button phone instead of a smartphone. Yes, it works too, but it's outdated.
https://www.youtube.com/watch?v=wvAFmYF7qgA
Workflow in the description. Includes outpainting and inpainting.
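For anyone building their own: outpainting with an inpaint model is just inpainting on a padded canvas — grow the image and mark the new border as the region to generate. A toy sketch of the mask, with nested lists standing in for an image (1 = generate, 0 = keep the original pixels):

```python
def make_outpaint_mask(w, h, pad):
    """Mask for outpainting a w*h image by `pad` pixels on every side."""
    W, H = w + 2 * pad, h + 2 * pad
    return [[0 if pad <= x < pad + w and pad <= y < pad + h else 1
             for x in range(W)]
            for y in range(H)]

# A 2x2 image padded by 1: only the border is flagged for generation.
for row in make_outpaint_mask(2, 2, 1):
    print(row)
# [1, 1, 1, 1]
# [1, 0, 0, 1]
# [1, 0, 0, 1]
# [1, 1, 1, 1]
```

In ComfyUI the same thing is done by the built-in "Pad Image for Outpainting" node, which emits the padded image and this border mask together.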
@13_february Wrong concept... Flux is overhyped. Technically you don't need it if you're good with the tools at hand.
@junkyard001 Loads of workflow nodes from third-party repos is trash... more nodes doesn't mean you're good at it. I don't like YouTubers' workflows; I make my own.
@1girls Even so, I didn't ask for an outpaint workflow based on a 1.5 model 🤷 It cannot work out small details efficiently. Yes, it can expand an image with a simple background, but highly detailed images are not its strong suit. Flux didn't get its overhype for nothing. Its main feature is high consistency in details. With it, you don't get a hodgepodge of details like in the 1.5 and SDXL models when you work on high-resolution art illustrations.
You must put it in the unet folder.
Hi, what do you mean by the unet folder? Isn't this supposed to go in /models/diffusion_models/?
Trying this out today after it downloads. Thanks!
Why is it so slow... 30 steps, 7 min, 14.24 s/it.
Flux dev FP8 runs 3x faster on my PC.
Comfyui crashes immediately when it tries to load the model :/
Why does the grass give a very unrealistic result?
you are a gift to the community for this!
Doesn't work in Forge for me.... :-(
working great, but i always get a little bit of de-contrast haziness on my outputs, with a bit of banding. any thoughts ? thanks !
Noob question: Which comfy ui node I should use to load this model?
Load Diffusion Model
You can try the custom workflow from Sebastian Kamph: ACE++ Character Consistency without training - Only 1 input image | Patreon
Great! Until Flux Fill, I had done inpainting exclusively with Fooocus.
Great
Does not work on Forge? "RuntimeError: mat1 and mat2 shapes cannot be multiplied"
Getting the same issue. Can't use ComfyUI as it shows the below error. I wonder if any tried this with WebForge NEO.
CLIPTextEncode CUDA error: no kernel image is available for execution on the device Search for `cudaErrorNoKernelImageForDevice'
