Please check out the Quickstart Guide to Flux for all the info you need to get started!
FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.
Key Features
Cutting-edge output quality, second only to our state-of-the-art model FLUX.1 [pro].
Competitive prompt following, matching the performance of closed-source alternatives.
Trained using guidance distillation, making FLUX.1 [dev] more efficient.
Open weights to drive new scientific research and empower artists to develop innovative workflows.
Generated outputs can be used for personal, scientific, and commercial purposes as described in the flux-1-dev-non-commercial-license.
Usage
We provide a reference implementation of FLUX.1 [dev], as well as sampling code, in a dedicated github repository. Developers and creatives looking to build on top of FLUX.1 [dev] are encouraged to use this as a starting point.
Learn More Here:
https://huggingface.co/black-forest-labs/FLUX.1-dev
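For anyone running it outside a webui, the typical diffusers route looks roughly like the following. This is a minimal sketch, assuming the `diffusers` library's `FluxPipeline` and enough memory to hold the weights; the prompt and settings are purely illustrative, not official recommendations:

```python
# Sketch of generating with FLUX.1 [dev] via diffusers (assumes a diffusers
# version with FluxPipeline support and the Hugging Face weights available).
SETTINGS = {
    "guidance_scale": 3.5,       # the "distilled CFG" for the dev model
    "num_inference_steps": 20,
    "height": 1024,
    "width": 1024,
}

def main(prompt: str = "a photograph of a cat holding a sign that says hello"):
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # offload layers to system RAM to save VRAM
    image = pipe(prompt, **SETTINGS).images[0]
    image.save("flux-dev-out.png")
```

Call `main()` to generate; the first run downloads the full weights, so expect a long initial wait.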
Description
Pro 1.1 Ultra is not available for download; it is accessible via API generation only.
FAQ
Comments (282)
Yeeea
great
Can this be used with Flux 1D LoRAs? I can't find a Flux 1D base checkpoint for SD.
These are the Flux checkpoints; you have Schnell or Dev to pick from. The rest are run via API and don't use LoRAs.
@TheP3NGU1N Oh, thanks! I'm stupid sometimes ^^'
Which model should I choose? RTX 4070 (8 GB), 16 GB RAM.
FP8 probably.
@Seunae What about on a 4080 with 16gb vram and 64GB ram?
@JohnnyKnoxville you can try the higher option, but idk if you will get that much value, depends on you.
Can someone explain how I can run FLUX locally on A1111? I've seen videos and Reddit threads, and they say it only runs on SD Forge UI.
Forge is an updated version (a fork) of A1111. That's why it works with Forge.
So you need Forge to start with. Then you will need the clips (clip_l & t5), the Flux model of your choice and, optionally, the Flux VAE. Put them in their correct folders, boot up Forge, select the Flux option, and use it more or less like you would have on A1111.
As far as I know, A1111 isn't supported. Switch to Forge! You won't regret it. It looks the same, all the extensions work and it is much faster! :)
imo, I like ReForge UI over Forge. Reforge has the capability of using SVD, one of the reasons I switched.
A1111... who still uses that seriously ? Oh! you. Wow, Ok. Switch to Forge mate ;)
And just a piece of advice: you should consider Pinokio. I read many times on Reddit that it was great, but I prefer manual installs, so it took me a while to finally try it. And wow! Those 1-click installs are just perfect: Trellis, Forge, ComfyUI, and tons of other AI tools I didn't know about. Very nice little app.
Anyway, good luck.
@hansolocambo Huh, I just started using A1111, and now you're saying to reinstall?
Can it just be updated, or do I have to reinstall all my models too?
@VoronaDragon keep the models, you don't need to re-download them. Just plop em in the new Checkpoint folder for Forge
@hansolocambo I was still using it as I had no idea about forge. Making the switch thanks to your guys comments. Much appreciated.
Stability Matrix is a good way to get Flux running. Easy to install, and I'm generating Flux dev locally now using Forge. There are some good guides online for it.
Hello
how do I get it not to generate anime?
add terms like photography, realism, realistic, photorealistic and so on for realism.
try a prompt like "minimal/no backdrop"
When I want like a realistic photograph I like to start my prompts with "A realistic and detailed photograph of .... "
Except for the prompt, lowering the Distilled CFG helps as well. Usually 2.5 to 3 is great for photography.
definitely worth the effort, great results
Guessing you mean the fp8 version? As far as I'm aware there isn't an 'official' version of that made by Black Forest, so you won't find it listed here (though I bet it's hosted somewhere on Civitai unofficially). If you want the original, check out: https://huggingface.co/Kijai/flux-fp8 -- it has the checkpoint and the other goodies you'll need.
Protip: if you have a lot of normal RAM, load the fp16 clips and VAE into it to increase quality without using up your VRAM and getting OOM errors.
Hi guys, the Flux model will not run on my Stable Diffusion; it just shows a rough image after I click generate and then goes blank. Any help will be appreciated. I downloaded the full version.
Which webui are you using? People often say Stable Diffusion when they actually mean Automatic1111. If so, look into using Forge; it's a fork of Automatic1111 with new features and options, including Flux support.
https://github.com/lllyasviel/stable-diffusion-webui-forge
It's an easy install, and you can either symlink, point to, or simply move over all your checkpoints/loras and other downloaded resources to the new webui pretty easily.
@TheP3NGU1N Are you on the static branch or latest? Might wanna specify which fork of Forge.
There's something wrong with the Flux files currently published here. I can't generate images with Forge, and even if I can, it takes an incredibly long time to generate one. I can't delete the Flux files either.
Length of generation will depend on how much VRAM you have. You may want to try a quantized version or fp8, see if that helps.
@TheP3NGU1N
PC1: VRAM - 16GB
PC2: VRAM - 48GB
I have two machines, but neither of them works.
@TheP3NGU1N
There is only the fp32 version here.
@MW4Uf81Dpvv5tIn Did you change your sampler?
@Zl88
I've tried different samplers, but every time I change them it spits out weird images.
@MW4Uf81Dpvv5tIn what webui are you using?
@TheP3NGU1N
I'm using Forge.
@MW4Uf81Dpvv5tIn This is what I tell people to do when they want to start with Flux, so see if it works for you.
First: double-check your webui is fully updated. You'd be surprised how many people don't do that, so I make sure to say it every time.
Select Flux in the upper right (UI) in Forge to auto-set the options.
On the right of Forge where it says Swap Location, switch to CPU if it isn't already. Next to that, at GPU Weights, you can tell it how much VRAM to use; setting it to about 10% less than max to avoid OOM seems to work. You may need to go a little lower depending on how much VRAM you do or don't have. Flux will kind of yell at you about it in the console if it thinks you are using too much or too little.
For VAE/text encoder, make sure you are using one of the t5xxl_fp16 or t5xxl_fp8_e4m3fn clips with clip_l (https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main) & the official Flux VAE (https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main).
For just testing, keep things on default, which is Euler, Simple, 20 steps, no hi-res, no refiner, 896x1152, distilled CFG 3.5 and CFG 1. Never change the normal CFG; it isn't used with Flux. Only change the distilled.
Uncheck everything else if it's selected (like FreeU); you don't need the extra bells and whistles.
Make yourself a prompt, hit generate AND WAIT. The first load of Flux will be the slowest as it loads everything into memory.
Flux is a big boy; short of having something like an H100, it's going to take a couple of minutes to generate its image. Things like having an SSD for your models, and a crap-ton of normal RAM so you can offload things like the clips and VAE there instead of VRAM, help speed it up.
If it's too slow for your liking, look into using the quantized models. Tons of them are available here on Civitai. The trade-off is a little less prompt understanding and, depending on which quantized model, a small to large reduction in quality... but at the joy of creating the image much, much faster.
@TheP3NGU1N Amazing guide, thanks.
@TheP3NGU1N Can you explain to me why a picture with a face looks great until the last moment and then BAM! Ugly. Even the beautiful lighting effect on the face is no longer there at the end. It was already there, where did it go?
@TheP3NGU1N
I appreciate the advice, but after considerable trial and error I can't do it.
Even on a top-of-the-line PC it seems impossible to generate with AI.
@MW4Uf81Dpvv5tIn I suspect it isn't the tech and equipment fault at this point lol.
hi
can I run this Flux with my Nvidia RTX 2050 (4 GB VRAM, 30 GB RAM), or do I need another one? If yes, which one please?
you will want a quantized version and patience.
I'm on a 4070 with 8 GB VRAM and crying at the slowness (1 day for 1 generation 😒)
amazing!
hi, even downloading the fp32 version, everything crashes. I don't know what to do :(
download the fp16 version and try it again:
Precision Levels
FP32: 32-bit floating point numbers
Highest precision. Uses more memory and computational resources. Standard format for training models.
FP16: 16-bit floating point numbers
Half the precision of FP32. Reduces memory usage by 50%. Faster computation speed. Small potential loss in accuracy.
FP8: 8-bit floating point numbers
Quarter the precision of FP16. Significantly reduced memory footprint. Much faster inference. Greater potential for accuracy loss.
FP4: 4-bit floating point numbers
Extremely compressed representation. Smallest memory footprint. Fastest computation. Highest risk of accuracy degradation.
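To put those precisions in VRAM terms for a 12-billion-parameter model like FLUX.1 [dev], a quick back-of-the-envelope calculation (weights only, ignoring activations, text encoders and the VAE):

```python
def weight_footprint_gb(n_params: float, bits: int) -> float:
    """Approximate on-disk / in-memory size of the weights alone."""
    return n_params * bits / 8 / 1024**3

FLUX_PARAMS = 12e9  # FLUX.1 [dev] is a 12-billion-parameter transformer
for bits in (32, 16, 8, 4):
    print(f"FP{bits}: ~{weight_footprint_gb(FLUX_PARAMS, bits):.0f} GB of weights")
```

This gives roughly 45 GB at FP32, 22 GB at FP16/BF16, 11 GB at FP8 and 6 GB at FP4, which roughly lines up with the ~22 GB and ~15 GB files discussed in this thread and explains why smaller cards need offloading or quantization.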
A major update addressing a security issue in Forge UI dropped, and many files were deleted or modified. Thus, if you were using Flux samplers like Flux Realistic and its colleagues, they may be gone, as well as the Google Blockly file.
If you want to avoid that, don't update (at your own risk: remove the "git pull" line from your user.bat files... not recommended at all, but you'll still have the old version of Forge UI and the integrated Forge UI samplers).
Also: if you used the "git pull" line in your arguments to load and auto-update Forge, and you get a Google Blockly loading error, add this command to the arguments as well: --skip-google-blockly
Still awaiting further news regarding the Forge issue as of the 14/01/2025 date of this post.
Ohh, thanks for the heads up! So that's why these samplers vanished into thin air...
thanks for letting us know,
More info can be found here https://github.com/lllyasviel/stable-diffusion-webui-forge/pull/2151
Everyone would be wise to actually read what is going on, especially when talking about a person who has single-handedly made it possible for AI to be available to the masses, more so than most of the AI community combined.
This whole thing is more about clout than about security.
Why does the end result look so shitty even though you have a really good face in the editing process right up to the last second?
I use ae.safetensor as VAE
and the following Settings:
Steps: 40
Sampler: Euler or DPM++ 2M
Schedule type: Simple or Beta
CFG scale: 1
Distilled CFG Scale: 2.5,
Face restoration: CodeFormer
Size: 896x1152
Model hash: 06f96f89f6
Model: flux_dev
Lora hashes: "AlphaN: 7ebbc2fa0f22"
Discard penultimate sigma: True
Version: f2.0.1v1.10.1-previous-635-gf5330788
Module 1: ae
Module 2: clip_l
Module 3: t5xxl_fp8_e4m3fn
Using Forge.
Any Ideas?
It's probably the face restoration. You don't really need it with Flux 95% of the time. Either skip it or reduce its strength significantly. A distilled CFG around 3.5 can't hurt either if your goal is realism.
@TheP3NGU1N That was extremely helpful! Thank you very much! I had the problem for about 2 years. At that time still with SD1.5 models. And nobody could help me! Nice! Thank U!!!
steps too high
@Celinextopila No, not really a thing with Flux; it's just more waiting for usually little to no change in the image after a certain point. 36-42 is a good range for really crisp realism; cartoons can go lower.
But there are people who do 100+, which, imo, is a waste of time.
cant get this to work as a load checkpoint
You have to put the files in your Unet folder and it requires a special unet/flux checkpoint loader.
model looks great... but, not for download? when download? ever?
You can only download Dev and Schnell.
Why isn't Pro 1.1 Ultra available, and why does clicking download seem to download a ZIP of training data?
Because Pro, Pro 1.1 and Ultra are API access only. There is no actual model to be downloaded; those can only be used via the onsite generator.
You can only download Dev and Schnell.
Hey there, can you please help me understand how to use this? It's not like the other checkpoints, where I download them and just add them to the specific folder; when I unzip the folder it only gives me an image. How should I proceed?
You can only download Dev and Schnell. The rest are API only, eg, can only be used via the online generator.
This should help get you started I think.
(With thanks to @tingtingin for making the video guide)
Flux - How To Install And Use Flux Forge And ComfyUI (Nf4 Fp8 Schnell F16 Dev Explained):
https://www.youtube.com/watch?v=UyNJ-UFY-5k
When starting the generation, it returns a TypeError error: 'NoneType' object is not iterable. Please help me! What to do?
seems like it does not work at all im just getting errors all the time.
more info needed.
Don't forget to download the VAE and CLIP
@Hyphona And where do we get the VAE and Clip from???
@KrimsonRaider Here https://education.civitai.com/quickstart-guide-to-flux-1/#:~:text=the%20following%20components%3A-,Original/Official%20Flux%20Models,-The%20following%20models (ae.safetensors ; Clip_l.safetensors ; and then one of t5xxl_fp16.safetensors or t5xxl_fp8_e4m3fn.safetensors depending on your VRAM)
Did the fp16 version disappear from the site?
Hello. So Flux dev won't run on my Stability Matrix
If you are using Stability Matrix's built-in wannabe-ComfyUI webui, I believe it doesn't support Flux. You will have to install the real ComfyUI on Stability Matrix and get it set up to work on that. Or use ReForge if ComfyUI isn't your thing.
@TheP3NGU1N Oh so that's the case. Thank you.
Seeing the downloads disabled on the Pro versions of this model makes me wish for the option of having downloads disabled on any model type that can be used in the on-site image generator (Checkpoint, Lora, VAE, etc.).
The option of archiving a model exists, but that disables both the downloads and usage in the generator.
When can we use it on Automatic1111?
Probably never. Use Forge, it's basically A1111 with updated features.
so why can i no longer select the model mode?
Awesome
can someone help me out? I got this very Flux, and in VAE/text encoder I have clip_l, t5xxl_fp16 and ae (that's the whole name of the VAE), but my results are... black squares
Which webui are you using?
Is it fully updated?
Does your console say any errors?
where do i install this?
What's the difference between the 15 GB and 22 GB Flux? The 22 GB crashes my PC to a blue screen while the 15 GB works fine for me.
tldr: image quality and prompt adherence, but the smaller ones are still very high quality and have great prompt adherence, so using them is nearly just as good. I've never quite heard of it giving blue screens of death, but it's probably because you are running out of VRAM.
You are probably running out of virtual memory. Windows is trash anyways.
@liquidbeef windows has nothing to do with vram
@TheP3NGU1N The option "Let windows handle the amount of virtual memory" has nothing to do with virtual memory? VRAM is not virtual memory by the way. Thanks for trying to correct me.
@liquidbeef ahahaha. okay. :thumbs up:
Where and how is this installed? I keep getting an error saying "You do not have CLIP state dict!" and I don't understand how to fix that. Some information on how to use this in the description would be very beneficial and appreciated.
Sounds like you are missing your clip files, or they are not in your clip folder (under models).
Find the clips here if you need to download them: https://education.civitai.com/quickstart-guide-to-flux-1/#:~:text=the%20following%20components%3A-,Original/Official%20Flux%20Models,-The%20following%20models
As for an install guide, head to YouTube; there are tons of tutorials.
you need ae.safetensors, clip_l.safetensors, and t5xxl_fp16.safetensors all selected in the VAE dropdown.
Great work, thanks for sharing.
Can I use this in fooocus base model? or where would it go?
Fooocus doesn't have Flux support and probably never will. I suggest giving Forge, ComfyUI or SwarmUI a try; they do support it.
Replace Fooocus with SDXL2, where the interface is similar to Fooocus.
Great
Using comfyui i put this into the checkpoint folder like i have with all others and i get an error, is there a step by step guide to get this to work?
Move it from the checkpoints folder to the unet folder.
needs to be in your unet folder. don't forget to put your clips in your clip folder also.
@TheP3NGU1N So let's say i have a fresh download of comfy ui, i can't just download this and make it work like i did with ponyxl?
@Khronics it has certain required files: the model, the clips, the VAE, and they need to be stored in their correct folders. On top of that you need to be using the correct nodes, e.g. a unet loader & a dual clip loader.
So in a nutshell, no.
Forge would be more "download and use" than ComfyUI, but you will still need to make sure the files are in the correct folders and the correct settings are selected.
@TheP3NGU1N I see thanks, comfyui was the first program i found and it seemed relatively easy with just downloading checkpoints and loras lol.
@Khronics Comfy is what the "pros" use. It has a decent learning curve, but if you want the newest, most advanced, best "toys", Comfy is where it's at. So it's well worth figuring out.
If you stick with it, I suggest downloading other people's workflows that have what you need, picking them apart, and learning what does what. Then work on making your own workflows.
@TheP3NGU1N Thanks, will do that, didn't even think about downloading premade workflows.
I think it needs to be the load stable diffusion model node instead of load checkpoint node
@TheP3NGU1N thanks man. your words made me continue to this madness. i'll try to figure it out
Great work!
Not Working Fooocus ai 😭😭😭😭😭😡😡
Learn ComfyUI; it's the best and much more flexible, with workflows and simple tutorials on YT. Trust me, it's the best. I've used Fooocus and A1111, and neither is better than ComfyUI.
If it helps, check my profile for my Hunyuan Comfyui generation and give me a buzz.
Did you find an alternative model?
Try SimpleSDXL2, which is an alternative to Fooocus, both SDXL and Flux work on it.
@vdemon What is SDXL2 and where can I get it?
@OREALS A1111 is obsolete. I use Forge WebUI. It has a similar interface to A1111, but it works much faster and better, and also supports Flux.
@Akalabeth https://www.youtube.com/watch?v=QhBj3d0Yfb0 (video for install and link)
@vdemon Thanks.
Is this still the best model out there? I have been away for several months....
Hunyuan might be of interest. It's a video generator, but you can set it to a single video frame generation (essentially an image generator). It does realism very well, & it's completely uncensored.
If you like realism, then most would probably say yes. If you're into illustration/cartoons/anime --- probably not what most would call Best. Illustrious or Pony/SDXL would probably hold that slot.
If you're into XXX content however, beware, Flux fights you on it. You have to use loras to get around it.
One question please. Is it pruned version Flux same as Flux-fp8? Thanks
No.
FP8 is the least precise version. The pruned FP32 is a slimmed-down version of the full one.
Is this going to run on Easy Diffusion?
The creator of ED discontinued updates a very long time ago, so probably not. Unless there is a branch of it that people are updating, but I haven't heard anything about that, so probably not.
Hi, it doesn't work for me in Draw Things on iOS ☹️ it generates nothing. Does anyone have the same problem?
Draw Things on iOS? uhh... unless it's one of a small handful of webuis, it's probably not supported.
Ah, so both of these are unet files and not checkpoint files? Why then is there a difference in size when both seem to be fp32? I always believed the 'pruned' versions are supposed to be loaded as a checkpoint.
More to the point, why are they listed as checkpoints when they are in fact not checkpoints? I've wasted hours downloading Flux files claiming to be checkpoints. So irritating.
oh ty!! I had it in the checkpoint and couldn't figure out what was wrong thanks for that info
how to use FLUX.dev in krita
Flux is brilliant, managed to get some absolutely stunning images using this 👌
What's the difference between pruned and full?
A "full" model typically includes all the data generated during the training process, including information that might be used for further training or fine-tuning.
A "pruned" model has had unnecessary data removed, leaving only the essential information needed for generating images.
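As an illustration of the idea (the key names below are made up, not Flux's actual layout): "pruning" in this sense usually just means dropping training-only entries, such as EMA shadow copies or optimizer state, from the saved state dict, keeping only the weights needed for inference:

```python
# Hypothetical key prefixes for training-only data; real models name these differently.
TRAINING_ONLY_PREFIXES = ("model_ema.", "optimizer.")

def prune_state_dict(state_dict: dict) -> dict:
    """Keep only the entries needed for inference."""
    return {k: v for k, v in state_dict.items()
            if not k.startswith(TRAINING_ONLY_PREFIXES)}

full = {
    "model.diffusion.weight": [0.1, 0.2],       # needed to generate images
    "model_ema.diffusion.weight": [0.1, 0.2],   # EMA shadow copy, training only
    "optimizer.step": 1000,                     # optimizer state, training only
}
pruned = prune_state_dict(full)  # only "model.diffusion.weight" remains
```

Since the inference weights are identical, generated images from full and pruned files should match; only the file size differs.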
@selfmoort for me only the pruned version works; the full version doesn't load in ComfyUI.
is this uncensored or censored version? do i need only this file to run or is there more files need to download?
Semi-censored. You can get it to do breasts and butts, but no lower frontal nudity and no sexual content.
However... all of that is easily fixed by using loras.
@TheP3NGU1N thanks
I have a 4090 and somehow my PC stats go to max, freezing everything. I am using 1200px x 800px, Euler Beta, 25 steps, 3.5 distilled CFG, ae, clip_l, t5xxl fp16, Diffusion in Low Bits on automatic (I tried others), swap method: queue, swap location: CPU, 23000 GPU weights... what else?
Even on a 4090 you don't have enough VRAM to run the model w/o some modifications. I have a 4090 too, but I quantize the model and T5ENCODER down to int8 using quanto. You can d/l and use GGUF or other smaller models w/o losing too much quality. With these changes you won't need to offload the model to cpu.
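What weight quantization buys (and costs) can be sketched without quanto itself. This is a toy symmetric int8 example in pure Python, not quanto's actual API, showing why the per-weight error stays small:

```python
def quantize_int8(weights):
    """Map floats onto 255 integer levels in [-127, 127] with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the stored integers."""
    return [x * scale for x in q]

weights = [0.8, -0.31, 0.056, -1.2, 0.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Worst-case per-weight error is half a quantization step (scale / 2),
# which is why int8 models lose only a little quality.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Real libraries refine this with per-channel scales and calibration, but the storage saving (8 bits instead of 16 or 32 per weight) is the same idea.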
I run this model on my 4070 Ti Super (16 GB VRAM) using the swap method. It works on Forge just fine. I don't even have to mention the VAE etc. Disable any hires stuff and see if basic image generation works at 1000x2000.
Try the fp8 text encoder; also, it doesn't make sense to me. I used to do the exact same as you on my 4080 (16 GB VRAM), and it would generate in about 1 minute; now it's buttery smooth on my 4090. Check your GPU performance in your PC's Task Manager; there could be some apps drawing GPU power.
Also watch out for your desktop resolution... stay safe with full hd or lower ;) I have a rtx 3070 but with 96gb ram so with Forge, all goes to ram (ddr4)... but it's super slow ^^ (and when I run at native resolution 3440x1440 it's much slower than full hd)
Which text encoder does this version need?
the base model uses this: openai/clip-vit-large-patch14
but many people opt for this: zer0int/CLIP-GmP-ViT-L-14
the choice is yours.
Best AI generator by far
Flux dev and schnell models error when downloaded and run locally. Had to download from huggingface instead.
Are you on SD? I also have problems with Flux; it's generating grey images. Maybe your trick will solve my problem too. TY
Same issue here.
Is there some reason why FLUX cannot imagine a Wyvern? No matter what I do, I get a 4 legged Dragon.
The reason is a lack of Wyverns in the training data. A LoRA or IPAdapter (which can function like a one-image LoRA) would probably be the best solution.
Why can't the full model be loaded in ComfyUI? The pruned model loads and works well.
It works just fine.
same problem here!
same here. maybe full reinstall is needed.
where can i download the fp16 version?
Really perfect
Perfect is an absolute; it cannot be added to.
Hi, guys. I'm a little bit stupid, but can I get Pro 1.1 locally? Or is it web only? Sorry, and thanks.
Web only, paid only too.
Need fp8 for my 3060... where can I get it?
How do I add to swarm? Just says it needs an api key.
are the released weights for download in fp8 or fp16? it's definitely not fp32 because if you compare the size against https://huggingface.co/Comfy-Org/flux1-dev/tree/main
you'll see they're almost the same
It's fp32 and fp8
I cannot get this to load into stability matrix, i have downloaded it/imported it and it never shows up. Can anyone help?
I'm completely confused.
The 22 GB one reports: "model weight dtype: torch.bfloat16, manual cast: torch.float16, model_type FLUX".
The 15 GB one reports: "model weight dtype: torch.float8_e4m3fn, manual cast: torch.float16, model_type FLUX".
(with the loader weight set to default)
So it's not even FP16, but BF16.
Can someone tell me where to find a pure FP16?
Hmmm, unfortunately doesn't work for me :(
I've figured it out, finally... still not what I'm looking for...
Can I use the generated images for commercial purposes, selling or renting?
No, only Flux Schnell can be used commercially, according to BFL.
@andux then it turns out the model is virtually useless; too bad.
I noticed that my images always come out blurry whenever I use CFG above 1. Can someone help? Others seem to have no issues with higher CFGs. I am using Forge and have installed the required VAE/clip/text encoder files.
CFG must remain at 1. For Flux you have to use the "distilled CFG". I'd suggest 3.5!
Have you tried more steps? 35 and up.
@nexrenders Sorry for the late reply as I figured the issue out myself eventually. But you were spot on, so thank you either way!
Still top notch..
My Fav
CLIPTextEncode
ERROR: clip input is invalid: None If the clip is from a checkpoint loader node your checkpoint does not contain a valid clip or text encoder model
please help
You need to download, install and connect: ae.safetensors, clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors
best model
nice
SOS, why doesn't it work? Please help :( It displays: AssertionError: You do not have CLIP state dict!
You need to download safetensors CLIP, AE, T5XXL and place them in the corresponding folders.
https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050
Can someone explain to me what the licence means
FLUX.1 [dev] Non-Commercial Licence?
Does it mean that the created images can't be posted anywhere at all, or just that they can't be sold? Or can you sell them, but you can't change the model itself? I don't get the gist of what this licence says at all...
The non-commercial use refers to the model itself. You get royalty-free rights over any image produced with it, and you can change and edit the model for personal use. You cannot, however, change/modify the model and then try to sell that variation. TLDR: anything it generates is free game, but you can't resell any version of this model under this license.
Make ultra raw model available for download, please :C
good
This model has no clip or text encoder?... So you need to use another checkpoint that has a text encoder with this checkpoint?..
clip_l.safetensor
t5-v1_1-xxl-encoder-Q8_0.gguf
ae.safetensor
3 minutes of research :)
Thank you for your comment/guidance; I'm new to this. I found and tried to load this checkpoint via a UNet-only loader and merge it with a Clip/VAE checkpoint, but could not get that working. I think what you're saying is to download the three files and load them into the Clip and VAE folders of the installation? I will try that.
The model already has clip_l and t5xxl inside it. I GGUF-ified the (pruned) safetensors with sd.cpp and analyzed the produced GGUF. The GGUF has clip_l and t5xxl layers inside it, therefore the original (pruned) safetensors should also have these!
EDIT: The full safetensors, however, seem to lack the clip_l and t5xxl when I GGUF it. Either it fell off when I GGUFing it or it didn't exist in the safetensors to begin with.
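For anyone who wants to check this themselves without converting anything: a .safetensors file starts with an 8-byte length followed by a plain JSON header listing every tensor name, so you can inspect what a file bundles with a few lines of stdlib Python. The encoder prefixes below are a guess at common naming conventions, not a guarantee:

```python
import json
import struct

def list_tensor_names(path):
    """Read only the safetensors JSON header (8-byte little-endian length,
    then JSON) and return the tensor names, without loading any weights."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]  # skip the optional metadata entry

def has_bundled_encoders(path):
    """Heuristic check for CLIP-L / T5 weights shipped inside the file.
    (These prefixes are assumptions about naming, not part of the format.)"""
    names = list_tensor_names(path)
    return any(n.startswith(("text_encoders.", "clip_l.", "t5xxl.")) for n in names)
```

Running `list_tensor_names` on the two files and diffing the output would settle whether the full safetensors really lacks the clip_l and t5xxl layers.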
not working ;(
NEW PC Build for Stable Diffusion and Flux Model Use – Seeking Advice
https://www.reddit.com/r/ryzen/comments/1kdu9jh/new_pc_build_for_stable_diffusion_and_flux_model/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
The best
Hey guys. When I try to run the model, the K-Sampler stops at 40% and then nothing.
dpmpp_2m and sgm_uniform are used, 20 steps.
I use a 4090. Thanks for any tips!
Use Euler for the sample, simple for the scheduler.
Hi, when I try to copy images from Civitai they don't generate the same in my Forge. Why?
There are many variables at work, and if it was generated in a different manner, such as Comfy, the best you can get is an approximation. Your best bet is to use img2img with 0.5 or lower denoise so the output image is very similar to the input image.
Different GPUs generate different noises even with the same seed number. This frustrated me for a long time until I understood why
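A toy stdlib illustration of that point: a seed only pins the noise for one specific RNG algorithm, so a different implementation (different GPU, different backend) starts from different noise even with an identical seed. The LCG here is just a stand-in for "a different algorithm", not what any GPU actually uses:

```python
import random

def mersenne_stream(seed, n):
    """CPython's Mersenne Twister, seeded deterministically."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def lcg_stream(seed, n):
    """A completely different algorithm (classic linear congruential generator)."""
    out, state = [], seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        out.append(state / 2**31)
    return out

same = mersenne_stream(42, 5) == mersenne_stream(42, 5)   # same seed, same algorithm
cross = mersenne_stream(42, 5) == lcg_stream(42, 5)       # same seed, different algorithm
```

`same` is True while `cross` is False, which is exactly why a seed copied from someone else's card reproduces their image only on matching hardware and sampler implementations.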
Amazing, I like the high quality results
I think the same! you can create so many awesome things with this model
The best model so far
So true
Hi everyone :) I'm a newbie in this topic, help please. Which VAE etc. are needed for this checkpoint, and is a unet loader needed for fp32?
It's not working in Forge. Why is that?
me too 16 gb vram
working perfectly in forge with 3090ti
using 16 gb model with 4080 super on forge, no problem
Likely potato related
does not work in comfy ui? img2img
zzz
Not working; unless it needs 24 GB or more VRAM? Me = 16 GB VRAM, 32 GB RAM, Forge.
Probably a forge thing. I'm using a RTX 3080 with 10gb vram and model works fine in ComfyUI, though much slower than SDXL
I can run the Pruned Model fp32 (15.91 GB) on a 12GB RTX3060 card in ForgeUI. It offloads VRAM to RAM when needed.
What is the exact error you are getting? Did you manually adjust "GPU Weights (MB)" in the UI? If you maxed it out, try to reduce it.
Loading the said checkpoint together with ae.safetensors, clip_l.safetensors and t5xxl_fp16.safetensors works fine in my case.
@Il_ya Wow, amazing. I have 16 GB VRAM and 32 GB RAM. Is it okay if I ask for a workflow? Or where can I get one specific to this? Thank you!
It works in Forge, and it works great. Even on my 8Gb RTX 3070.
@Il_ya im downloading to try again / this time with exceptions/ flux is best for modeling and model agencies, but dont remember why this one model didn't work, i can't find it in Huggingface! Unfortunately, i can't post my work here because they all have minor looks to them so / i get idea from ai than try to take pictures or decor from that one pic we like, mostly lots of skin show small pieces of outfits like male boxer , speedo,or sleeveless shirts or female swimwear.{ its funny to work with real model, but can't post ai }
👌
Might just be me, but I don't think the web version CivitAI "Raw mode" version is working anymore.
the best
Very good, sometimes makes some glitchy work but also in a beautiful way
Very cool generator!
nice!
Just tried FLUX and I'm honestly impressed. It's like having creativity and control finally working together — intuitive, powerful, and actually fun to use. If you're not using FLUX yet, you're missing out on serious workflow magic.
Nice.
nice
This model Flux is useless for anyone doing real work.
You publish it publicly but forbid any commercial use? That’s not open-source, that’s bait-and-switch.
Either release it freely or don’t release it at all.
Releasing powerful models with non-commercial licenses just creates confusion, wastes people's time, and blocks independent creators from doing anything meaningful with it.
Stop hijacking the open-source ecosystem with locked-down marketing stunts.
Models like this don’t belong on platforms that claim to support open research. They belong behind a paywall, where they obviously want to be.
Yeah, I agree. Idk how they got the images to train on, but they'd effectively rely on material traditionally deemed "copyrighted". Not that property over ideas exists, as this clearly shows two people can use the same idea at the same time. Unlike an artist's paintbrush, which a different artist can't use at the same time as the other.
I can understand agreeing to an intellectual-restriction contract, but it needs to be upfront rather than off to the side where you don't know what you agreed to. Even then, the contract can't be built on what is equivalent to a promise. It would be unreasonable to enforce a contract that is just "promise me you won't use this model for profit".
While I agree with you, they didn't have to release the weights at all, so I'll give them credit for that, at least. All the other big players making state-of-the-art models on par with Flux don't even do that.
I downloaded the pruned fp32 version from this site and the fp8 version from Hugging Face to compare both models, but when I use the same parameters in Forge, the generated images are exactly the same. Does anyone have an explanation? Also, does this mean that training a LoRA on the pruned fp32 model will not be better than training on the fp8 model?
I can confirm the same results in ComfyUI as well.
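One plausible explanation (a guess, not confirmed): if the "fp32" file was produced by upcasting lower-precision weights, then casting back down at load time recovers exactly the same values, so both files behave identically at inference. The effect is easy to see with Python's half-precision struct code:

```python
import struct

def halve(x: float) -> float:
    """Round-trip a float through IEEE 754 half precision (fp16)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# A value already representable in fp16 survives the round trip unchanged,
# so storing it "as fp32" adds file size but no information...
assert halve(1.5) == 1.5
# ...while a genuinely higher-precision value does not survive the downcast.
assert halve(0.1) != 0.1
```

The same logic applies to bf16 or fp8 storage: upcast-then-downcast is lossless, so identical outputs would simply mean the extra bits in the bigger file carried no extra information.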
Decent model