Please check out the Quickstart Guide to Flux for all the info you need to get started!
FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.
Key Features
- Cutting-edge output quality, second only to our state-of-the-art model FLUX.1 [pro].
- Competitive prompt following, matching the performance of closed-source alternatives.
- Trained using guidance distillation, making FLUX.1 [dev] more efficient.
- Open weights to drive new scientific research and empower artists to develop innovative workflows.
Generated outputs can be used for personal, scientific, and commercial purposes as described in the flux-1-dev-non-commercial-license.
Usage
We provide a reference implementation of FLUX.1 [dev], as well as sampling code, in a dedicated github repository. Developers and creatives looking to build on top of FLUX.1 [dev] are encouraged to use this as a starting point.
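Beyond the reference repository, the model is also usable through Hugging Face diffusers. This is a minimal sketch, assuming the `diffusers` FluxPipeline integration; the step count and guidance value here are illustrative choices, not official recommendations:

```python
# Minimal sketch of running FLUX.1 [dev] via Hugging Face diffusers.
# Requires `pip install diffusers torch` plus a GPU with enough VRAM
# (see the VRAM reports in the comments below for real-world numbers).

def generate(prompt: str, out_path: str = "flux.png"):
    # Imports kept inside so the sketch can be read without the libraries installed.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trades speed for VRAM on smaller cards

    image = pipe(
        prompt,
        guidance_scale=3.5,        # dev is guidance-distilled; low values work
        num_inference_steps=28,    # illustrative; see the step discussions below
        generator=torch.Generator("cpu").manual_seed(0),
    ).images[0]
    image.save(out_path)

# generate("a cat holding a sign that says hello world")
```

The `enable_model_cpu_offload()` call is what makes the 8 GB VRAM reports in the comments plausible at all: layers are streamed in from system RAM as needed.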
Learn More Here:
https://huggingface.co/black-forest-labs/FLUX.1-dev
Comments (126)
Can it be used with Automatic 1111?
Not yet. From what people who know more than I do have explained, though, it can very much be implemented with some tweaks to A1111/ReForge/Forge.
Incredible!
Looks awesome, hopefully we can use it on Civit someday.
Wow. The accuracy to the prompt is amazing. 22 GB base too; how much VRAM is required to run this?
it's possible on 8GB but you'll need to be patient and you may need lots of regular RAM. I have a laptop 8GB VRAM and 64GB RAM and one image (after the model is loaded) takes from 1.5 - 3 minutes depending on sampler and steps.
I don't know. I'm still rendering.
@eldritchadam I have a 2070 Super with 8GB VRAM and 32GB RAM; one image is generated in 45 minutes (20 steps, fp16).
@troggy I've read that one of the "features" of Flux is that it can generate images in 4 steps or less. Is there a reason you chose 20 steps? Have you tried less?
@troggy use schnell version - 4 steps
@troggy something seems amiss with that scenario. It should do better than that! Your system should have comparable results to my own.
Is it true that 8GB VRAM won't work?
it works on 8GB, but 1 image takes 45min
@troggy no, 1 image takes 10 min even in 6gb
@troggy 45min isn't bad as long as I can get some sick images out of it lol Hopefully we can use it on Forge soon
@SuperSmuser how much RAM do you have? I have 32GB
@troggy 16GB
@Lewd_N_Geeky what are the specifications of your computer?
@troggy NVIDIA Geforce GTX 1080, 16GB RAM, Intel i7-7700K 4.20GHz
With 64GB RAM it takes about a minute to generate on 8GB VRAM.
@tingtingin Yeah I've been waiting to upgrade my ram but I don't think my mobo supports 64gb
It isn't true. It works.
My system specs are brutally low and it works.
CPU : Ryzen 2600X
GPU RTX 2070 8 GB VRAM
RAM : 16 GB @ 3000mhz
The model load is a pain in the ass, and the first image needs roughly 250-280 sec at 24-30 s/it.
After the first pic I'm at around 120 sec per picture at 23-25 s/it.
So it works with 8 GB VRAM.
- I use the "schnell" model, btw
RTX 3060 TI 8gb, Core i7 12700F, 64gb RAM it's about one minute, +/- 10 secs, to generate 1024x1024 with schnell. Updating the portable ComfyUI screwed something up, but works out of the box without updating.
@DowhigaWocoDigitalArtwork That's honestly how long it takes for me anyway on Forge, especially with Model Load. I notice it takes me about 5-10 minutes of wait time for any SDXL models and less time for Pony. I know it's my ram bogging out on me. Load time and how long the image generates doesn't bother me as long as I can actually use it lol
I use an RTX 3050 Laptop GPU with 4GB VRAM and 32GB system RAM. I used fp8 instead of fp16. The first model load is quite long; after that, it doesn't take long to generate at 1024x1024 px.
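As a rough cross-check on the VRAM numbers traded back and forth in these comments: weight size is just parameter count times bytes per parameter. A back-of-the-envelope sketch (my own arithmetic, not official requirements; activations, text encoders, and the VAE all add more on top):

```python
PARAMS = 12e9  # FLUX.1 [dev] transformer parameter count

def weights_gib(bytes_per_param: float) -> float:
    """Size of the raw transformer weights alone, in GiB."""
    return PARAMS * bytes_per_param / 1024**3

# fp16/bf16: 2 bytes per parameter -> ~22 GiB (matches the "22gb base" comment)
# fp8:       1 byte  per parameter -> ~11 GiB (why fp8 fits 12 GB cards, barely)
print(f"fp16: {weights_gib(2):.1f} GiB, fp8: {weights_gib(1):.1f} GiB")
```

This is also why every low-VRAM success story above involves lots of system RAM: the weights alone exceed the card, so layers get offloaded and streamed back in.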
All the big boys with big computers here, while I sit and cry with 4GB of VRAM (T_T)
u can use onsite generation on huggingface :)
https://huggingface.co/black-forest-labs/FLUX.1-schnell
@AIDigitalMediaAgency I didn't even know huggingface had an onsite generator :o You learn something new everyday! Thanks for the tip :D
I feel your pain. I went from a 4GB card to a 4090 last year. The difference was stunning. Save up and do it; WORTH IT!
@clevnumb I most certainly will. But it's gonna be a while. Maybe 50xx series will be out by then
those with 12Gb VRAM - go here 👇
1- https://huggingface.co/Kijai/flux-fp8/tree/main
2- https://huggingface.co/black-forest-labs/FLUX.1-dev
3- https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main
those with 4Gb VRAM - go here 👇
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
I have 8GB of VRAM and I want to use it with ComfyUI. Where can I get a tutorial for that? The 4GB link is for Hugging Face, but I want to run it locally.
@krigeta locally - upgrade VRAM to 16, as you can see - 12 is already not enough ...
Can confirm this works very well, been using it for about 32 hours. A 768x1344 image takes about 1-2 mins with the dev model. I do still suggest upscaling, though, at around 0.18 denoise using a very high-quality SDXL model (like something from Zavy) with Ultimate Upscale. The upscale is especially suggested if using the schnell model.
Gpu: 3060 12gb
@TheP3NGU1N dev model, 24 steps, Euler (or even better: dpmpp_2m) / SgmUniform - printer for perfect images!
Probably a stupid question, but the text encoders are a separate model, right? As in, if I wanted a text heavy image, I would use t5xxl_fp16.safetensors rather than flux1-dev-fp8.safetensors.
@OliviaRossi okay
@Jimmy360 text encoders are separate
I'd just like to report back that generating a single 1024x1400 base image with 12 steps and Uni PC sampler takes approximately 55 seconds (50s base + 5s VAE decoding) on RTX4080S. Peak VRAM usage I've seen is 17.09 GiB. Peak RAM usage is around 42 GiB. 20 steps with Euler sampler take ~1.5 minutes with same resolution but the details are a bit better.
It already destroys SD 1.5 and SDXL and SD3. Just needs trained variations, Loras, to fix the naughty bits problems it tends to have....prompt accuracy is magical in comparison to SD. I can't wait!
It doesn't yet, maybe in version 2.
Actually, you can't say it destroys SD1.5 when it needs a moon rocket to work, while 1.5 is happy with my tractor and can even drink a beer with me.
Finally we see something exciting.
Just published 2 basic workflow with civit friendly metadata :
- Schnell : https://civitai.com/models/619982/flux-schnell-4-steps-img2img-friendly-metadata-image-saver
- Dev : https://civitai.com/models/620149/flux1-dev-l10n-flow-img2img-friendly-metadata-image-saver
Note: I can't find the right name for Civit to recognize the model on a "post image" from the menu; you still need to post it on the model's page.
If you find the right model name please share !
Enjoy ;)
This is truly a great model. I'm running Automatic1111 on CPU with 32GB RAM and this model is performing excellently. It's able to fulfill all my prompts without extra LoRAs or embeddings so far. I'm loving it.
how did you get it to run on auto1111?
Automatic1111... I'll need to see proof of that, lol.
@headupdef judging by the quality of the image they uploaded as Flux, they didn't. Probably uploaded a sdxl image... but I'll hope I'm wrong. If they prove it so because that would be great news.
@headupdef Well, good news. It caused me to have to completely reinstall Automatic1111. Not sure what happened, but it happened. I DID find on HuggingFace a "schnell" version that is only 11GB and it hasn't broken my installation yet! I just put the file in the usual spot and it's working. Not sure why the other one broke me.
@TheP3NGU1N Well, good news. It completely broke A1111. Had to reinstall it from scratch to get it to work again. Lots of file corruption. I did find a "lighter" version that's 11GB and it's not breaking me yet, but it's fussy as hell. For a short while I was happy *cries*
mhm..
Can someone explain how to install this model on forgeui/automatic1111? plss
not yet - only comfyui and swarmui for now
Give it a few days/weeks. They'll get there. From what has been said and read, there isn't any reason why it won't happen, but as with all good things AI, ComfyUI gets it first ;)
@OliviaRossi Is there anything special that one has to do inside of COMFYUI to use this model, or can you load it like a regular SDXL model?
@pqnisher https://www.youtube.com/watch?v=tXO6SJ-6Eb8 all is there (2nd half of the video)
@OliviaRossi Found it thanks! Very Useful Video with all the links. I just finished my first test image, running a 64GB RAM, Nvidia 4070 RTX GPU - 8 GB VRAM - 4:30s runtime at 13.4s/it - Took a min, but I'm excited to play around with it. I appreciate the response. Cheers.
@pqnisher good luck
@pqnisher There is a special workflow to use, but I tried it on a 3060 12g, and it didn't work for me.
I'm completely blown away by the quality of images from this model. If LoRa training was possible on this I could easily see this overtaking SDXL.
It has a seriously bad case of "Dreamshaper Girl Face" IMO, SD3 Medium is quite frankly a lot better at stuff like "hard realistic" portraits of people and such.
What a pity. Because of the 12 billion parameters, no LoRA will be made for this model.
Is it just me, or is FLUX censored below the waist?
Yep, very much so. Breasts are possible, though you will have to fight it a little sometimes. Butt shots are possible too.. but crotch, thus far, I haven't seen it.
Why do people think that any for-profit company is ever going to release a foundational model that they intentionally went out of their way to train on well-captioned NSFW? It's never going to happen, it's not censorship, it's just something they (for obvious reasons) didn't intentionally put much effort into.
From what I read on Reddit, they are not planning on changing it either. I hate censorship so damn much.
@Lewd_N_Geeky From a business standpoint it is logical to block it. Get your name in the news or such because someone decided to use your model to make CP or whatever, bam, your company is in hot water. Now if someone else comes along, makes a finetune or whatever, then the blame is on them. Not the company.
@TheP3NGU1N Oh I know. Well someone will eventually come along and make it so NSFW art can be made. It always happens lol
@diffusionfanatic1173 Eh some of it is mild censorship, because it is missing obvious tags that are not always lewd (like saliva) and seems to struggle a bit. Though I agree with the basic point that by default it is not going to be great at nsfw unless they do the unprecedented of well-tagging porn at the start.
>Why do people think that any for-profit company is ever going to release a foundational model that they intentionally went out of their way to train on well-captioned NSFW?
people seem to forget just how wild and out of pocket OpenAI's DALLE-3 model was when it was picked up by Microsoft late last year. you could straight up prompt for fellatio and intercourse with full penetration detail, as long as you were creative with your prompt structure and how you "dressed" the scene with distractions to fool the image filter. even right now, Chinese users can go to Cici AI and get even more obscene content, because while there's a baseline censorship filter, that DALL3 implementation cares more about copyright infringement than nudity or intercourse. the foundational model is there, and it is stuffed to the gills with pornographic content; it's just locked behind ClosedAI.
@diffusionfanatic1173 Well, you can generate NSFW images through the API. But then you have to pay for each image as well. So I believe you might be a bit naive if you think they've trained it only on SFW images and that's why we can't create boobs.
It's a simple matter of making some extra cash from those who really want to create NSFW images.
https://fal.ai/models/fal-ai/flux/api
access to the license linked is blocked with a 403 error: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/licence.md
This one seems to work: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
Apache 2.0
What are the system requirements to use this model?
A lot: at least 12GB VRAM and 32GB system RAM. It's 4 billion parameters larger than even the API-only SD3 Ultra.
People have been able to generate images with a minimum of 6GB VRAM (at least that's what I gather from the comments here). Only works with ComfyUI for now.
Let's just say, I'm using my usual Nvidia GTX 1650 with 4GB. I have 16GB RAM, and I normally use 15GB of virtual memory on my HD.
At 768x768 I can finally load this model using my 2 solid-state drives, with about 56GB of hard drive space in total as virtual memory.
It just works.
Who is the user who uploaded this model? There is no link to the user.
Furthermore, the link below also distributes a model with the same hash, but which one is the real one?
https://civitai.com/models/617609/flux1-dev?modelVersionId=690425
get it here: https://huggingface.co/black-forest-labs
Download this one. Maxfield, the owner of the website, uploaded it. So you know it's the correct one.
Otherwise, all related links to Hugging Face, where it was originally uploaded by the creators, are in the description, as you will also need other components, such as 2 clip models and a vae.
This was posted by CivitAI.
plsssssss a flux Anime cartoon checkpoint , hope ppl make Lora flux. flux pro + pony diffusion v7 or more = opening pandora box on steroids--https://www.youtube.com/watch?v=77zAWTmDmiU
12 billion parameters... that is a respectable sized neural network. For a painting AI... unheard of at least for a publicly available model. LLMs get really smart at 13b. Interesting.
Yeah, just imagine what they will be able to do at 100B. Maybe we won't need LLMs at that point, since the text will be able to be generated in-image?
When will there be an on-site generation option available?
Most likely never, as it doesn't have a commercial license.
I honestly don't think so, since the requirements to work correctly are so high...
@denrakeiw The Pro version has options for enterprise-level inquiries for its usage. What that gets you is a shrug, but I'd imagine, if not now, it will eventually be for use. They need to make money back on it somehow.
@TheP3NGU1N I've already made a few images with the Pro version, and I don't find them significantly better than the free version. But at least the images from the Pro version can be used commercially.
Will this be trainable by the community? Using something like Kohya_ss GUI, or OneTrainer?
It is! SimpleTuner supports loras now and I have already seen a finetune
Current training minimum requirements are 40-80GB of VRAM. This will likely improve, but as of day 4, that's where we are.
While I have not yet tested it, apparently training across multiple GPUs is not only supported, but may be preferred for FLUX training.
@JaneB the 'experts' say that wasn't a true finetune, but a modified clip. The model itself was unchanged.
The hands are good and the text is excellent, but, for the average user, until that size comes down, it's not really viable. I hope they get enough commercial interest to fuel development. I consider myself lucky to have a dedicated 3060 with 12GB VRAM in a system with 32GB RAM, at 20sec/it it's not doing anything the others can't do in the same time frame, even the fp8 version only shaved a second off it. It's got potential but until it can be more widely adopted, it'll probably sit on the community shelf like Cascade.
Great model indeed but not finished yet. Especially reality needs some work as do textures, like skin textures. See my blog
Is it just me, or do a majority of the women's faces have a cleft chin and plump lips which are slightly parted? Don't get me wrong, FLUX is great, but it's kind of difficult to get away from that particular face.
The chins are something many people have pointed out, though if you give a description of the face, it can easily be fixed most of the time.
Thanks, now I can't unsee the chins lol.
Prompt it away. Not too tricky.
Cleft chins are my absolute nemesis... and there are so many models with that issue... 😪
the experiments I have undertaken to get rid of them... I really hope it can be prompted away
@redpinkretro it isn't too difficult. Just describe the alternative face you want in natural language.
https://civitai.com/images/22787004
There's an example of one I made. There are many more I've uploaded that do not have the cleft.
At first I was like, crap, another model. Kolors came out; I liked it but wasn't very impressed, as current SDXL models are just incredible. Took the plunge and fixed the errors in generations for Flux, and after half a day playing with it: folks, for me, it's hard to go back to 1.5 and SDXL. Once you've had Flux, you can't go back..lol.. Jokes aside, this is from the ex-Stability AI crew; it is SD3 reborn and better. THIS FLUX IS ADDICTIVE AF!!! and loving it!
Haven't used this model yet (dunno if it's worth trying with my 6GB 1660) but I like what I'm seeing.
HOWEVER
It does seem that the model suffers from a bad case of same-face syndrome.
As far as I know, you need at least 12g to make it work. I have a 3060 12g . and couldn't make it run with Comfy and it was the low detail model version of Flux. So, no it won't work on a 6g.
You can get it to work on as low as a 4GB system (running in lowvram mode) using the fp8 models (https://huggingface.co/Kijai/flux-fp8/tree/main), but it will be slow, very slow. The schnell model would be best, as it works well at just 4 steps (saving you time); you can then just upscale it, but either will work.
@TheP3NGU1N may i ask, what you recommend, if any, to run on 8gb 4060 with 32gb system ram? Is it possible? Thank you!
@ddamir247931 with the fp8 models (see link in my other comment) & comfy in lowvram mode, you should be fine I think. You are probably on the minimal side of normal ram however, since the clips run on cpu/ram during the process. That may be the only issue.
Just don't expect things to be fast.
@TheP3NGU1N Thanks, I will try. I don't expect speed, but I want to try something new. I kinda got bored with SD(XL) and Pony: too many generic images, and it's hard to create something that isn't just a random iteration of images I already created...
You can run the fp8 model with 11GB in lowvram mode, but it's hella slow. Like 30-70 minutes slow. 6GB would be a nightmare..
Dev fp8 model on 4060 8gb - 20 minutes, 512x512, 20 steps, 63sec/it.
I was expecting slow generation, poor performance, but, in range 4-5 minutes per image, 20 minutes of waiting on mid range gpu - there is no image worth of that waiting.
Unusable, i'll try schnell model, but, not optimistic, this is only for high end xx80 and xx90 cards.
EDIT: Strange, gpu usage is almost constantly below 20%, with just a few peaks to 100%, and usage of system ram is only 15gb. Something is off?
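The timing reports in this thread are at least internally consistent: total time is roughly steps × seconds-per-iteration, plus load and VAE overhead. A trivial sanity check (my own arithmetic, using the numbers reported above):

```python
def gen_minutes(steps: int, sec_per_it: float) -> float:
    """Approximate sampling time in minutes, ignoring model-load and VAE overhead."""
    return steps * sec_per_it / 60

# The report above: 20 steps at 63 s/it on a 4060 8GB
print(gen_minutes(20, 63))  # -> 21.0, matching the "20 minutes" observation
```

The sub-20% GPU usage reported in the EDIT is also consistent with heavy weight offloading: most of the wall-clock time goes to shuttling layers between system RAM and the card rather than actual compute.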
Did they add SD WebUI support?
No, it's comfy only!
It will come eventually; there are no technical reasons why it can't work, the webuis just need tweaking. Give it a few days to a week or two and I bet A1111/Forge/ReForge will catch up.
It's Comfy and SwarmUI only currently.
If you like A1111, you'd love SwarmUI, it's comfyui with beginner friendly UI.
I have an error. Yesterday it worked like gold. Today I fire it up and the images all generate grey. No matter what settings I try, it doesn't work. I am not using anything NSFW, and the prompt is short. Has anyone had the same? I guess I'll have to download again....
try refreshing models, brother, it works for me
this model is the model you want to figure out soon. i just fluxed all over the place.
I installed DualCLIPLoader and set the parameters t5xxl... and clip_l, but I can't set "Type" to flux; there are only sdxl and sd1. What am I doing wrong? Also, my ComfyUI doesn't see ae.sft, even though I placed it in the VAE folder.
Use the ComfyUI Manager custom node and hit the Update All button (if you don't have it, you can update manually as the ComfyUI readme guides you). This should update DualCLIPLoader, and as soon as you restart the UI you should see the flux choice. Then change the filename ending to safetensors where needed (e.g. ae.safetensors). Make sure you put the models in the unet folder, not in checkpoints. That should resolve your issue.
Just update comfyui, i had the exact same problem before updating
That was happening to me at first when I installed. I restarted my computer a few times and that worked for me.
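To summarize the folder advice scattered through this thread, here is a small sanity-check sketch. The layout follows the replies above (diffusion model in unet, not checkpoints; VAE in vae; both text encoders in clip); the exact filenames are the commonly used ones and may differ in your install:

```python
def missing_files(present):
    """Return the expected Flux files (relative to the ComfyUI root) not in `present`."""
    expected = [
        "models/unet/flux1-dev.safetensors",   # the diffusion model (NOT checkpoints/)
        "models/vae/ae.safetensors",           # the VAE, renamed to .safetensors
        "models/clip/t5xxl_fp16.safetensors",  # T5-XXL text encoder
        "models/clip/clip_l.safetensors",      # CLIP-L text encoder
    ]
    have = set(present)
    return [p for p in expected if p not in have]

# A model dropped into checkpoints/ is a common mistake; it shows up as
# all four files "missing" from their expected locations:
print(missing_files(["models/checkpoints/flux1-dev.safetensors"]))
```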
Looking at your work, the ladies' faces are all based on the same scheme and shape, which is quite boring. Is this a shortcoming of the model, or do you need to type in a bunch of specific text to give it a unique face?
Just describe a face that isn't the default and you'll be fine.
Yes, I also noticed the same face, unless you are creative about what the person looks like.
I bumped into a weird situation when using img2img: higher resolution with high denoise gives a blurry image. Anyone else?
Details
Files
flux_dev.safetensors