CivArchive

    Please check out the Quickstart Guide to Flux for all the info you need to get started!

    FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.
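For intuition, a rectified flow model learns a velocity field that transports noise toward data along near-straight paths, and sampling just integrates that ODE with a few Euler steps. A toy NumPy sketch of the sampling loop (illustrative only — the linear `velocity` stand-in below replaces the learned 12B-parameter transformer, and all names are made up for this example):

```python
import numpy as np

def sample_rectified_flow(velocity, x_noise, num_steps=28):
    """Integrate dx/dt = v(x, t) from t=0 (pure noise) to t=1 (data) with Euler steps."""
    x = x_noise.copy()
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = i * dt
        x = x + dt * velocity(x, t)  # one Euler step along the learned flow
    return x

# Toy velocity field: pulls every sample toward a fixed "data" point.
target = np.full((4, 4), 2.0)
v = lambda x, t: target - x  # stand-in for the learned model

rng = np.random.default_rng(0)
out = sample_rectified_flow(v, rng.standard_normal((4, 4)))
```

Each Euler step moves the sample a fraction `dt` along the predicted velocity, so more steps trace the flow more faithfully — which is why step count trades quality against speed.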

    Key Features

    1. Cutting-edge output quality, second only to our state-of-the-art model FLUX.1 [pro].

    2. Competitive prompt following, matching the performance of closed-source alternatives.

    3. Trained using guidance distillation, making FLUX.1 [dev] more efficient.

    4. Open weights to drive new scientific research, and empower artists to develop innovative workflows.

    5. Generated outputs can be used for personal, scientific, and commercial purposes as described in the flux-1-dev-non-commercial-license.

    Usage

    We provide a reference implementation of FLUX.1 [dev], as well as sampling code, in a dedicated github repository. Developers and creatives looking to build on top of FLUX.1 [dev] are encouraged to use this as a starting point.

    Learn More Here:
    https://huggingface.co/black-forest-labs/FLUX.1-dev
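Besides the reference repository, the model can also be loaded through Hugging Face diffusers. A minimal sketch, assuming a recent diffusers install and an NVIDIA GPU (the ~22GB of weights download on first run; the prompt and output filename are placeholders):

```python
import torch
from diffusers import FluxPipeline

# Load FLUX.1 [dev] from the Hugging Face Hub (bf16 keeps VRAM use down).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload idle submodules to system RAM

image = pipe(
    "a cat holding a sign that says hello world",  # placeholder prompt
    height=1024,
    width=1024,
    guidance_scale=3.5,        # distilled guidance, not classifier-free guidance
    num_inference_steps=28,
).images[0]
image.save("flux-dev.png")
```

`enable_model_cpu_offload()` is what makes the comments below about "lots of regular RAM" possible: submodules are swapped to system memory when idle, trading speed for a smaller VRAM footprint.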

    Description

    FAQ

    Comments (126)

    53rt5355izAug 2, 2024· 5 reactions
    CivitAI

    Can it be used with Automatic 1111?

    TheP3NGU1NAug 3, 2024· 5 reactions

Not yet. From what the big boys who know more than I do have explained, though, it can very much be implemented with some tweaks to A1111/ReForge/Forge.

    headupdefAug 3, 2024· 1 reaction

    Incredible!

    2018cfhAug 3, 2024· 7 reactions

Looks awesome, hopefully we can use it on Civit someday

    Madafada1991Aug 3, 2024· 4 reactions

Wow. The accuracy to the prompt is amazing. 22GB base too; how much VRAM is required to run this?

    eldritchadamAug 3, 2024

    it's possible on 8GB but you'll need to be patient and you may need lots of regular RAM. I have a laptop 8GB VRAM and 64GB RAM and one image (after the model is loaded) takes from 1.5 - 3 minutes depending on sampler and steps.

    vongregorAug 3, 2024· 5 reactions

    I don't know. I'm still rendering.

    troggyAug 3, 2024· 3 reactions

@eldritchadam I have a 2070 Super with 8GB VRAM and 32GB RAM; one image is generated in 45 minutes (20 steps, fp16)

    seanb2Aug 3, 2024

    @troggy I've read that one of the "features" of Flux is that it can generate images in 4 steps or less. Is there a reason you chose 20 steps? Have you tried less?

    jrwtf88Aug 3, 2024

    @troggy use schnell version - 4 steps

    eldritchadamAug 3, 2024· 1 reaction

    @troggy something seems amiss with that scenario. It should do better than that! Your system should have comparable results to my own.

    Lewd_N_GeekyAug 3, 2024

    Is it true that 8GB VRAM won't work?

    troggyAug 3, 2024· 2 reactions

    it works on 8GB, but 1 image takes 45min

    SuperSmuserAug 3, 2024· 1 reaction

@troggy no, 1 image takes 10 min even on 6GB

    Lewd_N_GeekyAug 3, 2024

    @troggy 45min isn't bad as long as I can get some sick images out of it lol Hopefully we can use it on Forge soon

    troggyAug 3, 2024

    @SuperSmuser how much RAM do you have? I have 32GB

    Lewd_N_GeekyAug 3, 2024

    @troggy 16GB

    troggyAug 3, 2024

    @Lewd_N_Geeky what are the specifications of your computer?

    Lewd_N_GeekyAug 3, 2024

    @troggy NVIDIA Geforce GTX 1080, 16GB RAM, Intel i7-7700K 4.20GHz

    tingtinginAug 3, 2024· 1 reaction

with 64GB RAM it takes about a minute to generate with 8GB VRAM

    Lewd_N_GeekyAug 3, 2024

    @tingtingin Yeah I've been waiting to upgrade my ram but I don't think my mobo supports 64gb

    DowhigaWocoAug 3, 2024· 1 reaction

It isn't true. It works.
My system specs are brutally low and it works.

CPU: Ryzen 2600X
GPU: RTX 2070, 8GB VRAM
RAM: 16GB @ 3000MHz

The model load is a pain in the ass, and the first image needs around 250-280 sec at 24-30 s/it.

After the first pic I'm around 120 sec per picture at 23-25 s/it.

So it works with 8GB VRAM

- I use the "schnell" model, btw

    ureken125Aug 3, 2024· 1 reaction

RTX 3060 Ti 8GB, Core i7 12700F, 64GB RAM: it's about one minute, +/- 10 secs, to generate 1024x1024 with schnell. Updating the portable ComfyUI screwed something up, but it works out of the box without updating.

    Lewd_N_GeekyAug 3, 2024

    @DowhigaWocoDigitalArtwork That's honestly how long it takes for me anyway on Forge, especially with Model Load. I notice it takes me about 5-10 minutes of wait time for any SDXL models and less time for Pony. I know it's my ram bogging out on me. Load time and how long the image generates doesn't bother me as long as I can actually use it lol

    iamaeaAug 4, 2024

I use an RTX 3050 laptop GPU with 4GB VRAM and 32GB system RAM. I used fp8 instead of fp16. The first model load is quite long; after that, it doesn't take long to generate at 1024x1024 px.
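
A rough rule of thumb behind the VRAM numbers in this thread: weight memory ≈ parameter count × bytes per parameter, which is why fp8 roughly halves the footprint versus fp16/bf16 (text encoders, activations, and the VAE add more on top). A quick sanity check in Python (illustrative arithmetic only):

```python
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight-only memory in GiB for a model."""
    return params_billion * 1e9 * bytes_per_param / (1024 ** 3)

for name, bpp in [("fp16/bf16", 2), ("fp8", 1)]:
    print(f"12B params @ {name}: ~{weight_gb(12, bpp):.1f} GiB")
# 12B @ fp16 is ~22.4 GiB, matching the ~22GB checkpoint;
# fp8 is ~11.2 GiB, matching the ~11GB files people mention.
```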

    PosfalAug 3, 2024· 9 reactions

    All the big boys with big computers here, while I sit and cry with 4GB of VRAM (T_T)

    AIDigitalMediaAgencyAug 3, 2024· 1 reaction

You can use onsite generation on Hugging Face :)

    https://huggingface.co/black-forest-labs/FLUX.1-schnell

    PosfalAug 3, 2024· 3 reactions

    @AIDigitalMediaAgency I didn't even know huggingface had an onsite generator :o You learn something new everyday! Thanks for the tip :D

    clevnumbAug 3, 2024· 1 reaction

    I feel your pain. I went from a 4GB card to a 4090 last year. The difference was stunning. Save up and do it; WORTH IT!

    PosfalAug 4, 2024· 1 reaction

    @clevnumb I most certainly will. But it's gonna be a while. Maybe 50xx series will be out by then

    krigetaAug 3, 2024

I have 8GB of VRAM and I want to use it with ComfyUI; where can I get a tutorial for that? The 4GB link is for Hugging Face, but I want to use it locally

    0l1v1aR0551Aug 3, 2024

    @krigeta locally - upgrade VRAM to 16, as you can see - 12 is already not enough ...

    TheP3NGU1NAug 3, 2024

Can confirm this works very well; I've been using it for about 32 hours. A 768x1344 image takes about 1-2 mins with the dev model. I do still suggest upscaling, though, at around 0.18 denoise using a very high-quality SDXL model (like something from Zavy) with Ultimate Upscale. The upscale is especially suggested if using the schnell model.

    Gpu: 3060 12gb

    0l1v1aR0551Aug 3, 2024

    @TheP3NGU1N dev model, 24 steps, Euler (or even better: dpmpp_2m) / SgmUniform - printer for perfect images!

    Jimmy360Aug 3, 2024

    Probably a stupid question, but the text encoders are a separate model, right? As in, if I wanted a text heavy image, I would use t5xxl_fp16.safetensors rather than flux1-dev-fp8.safetensors.

    krigetaAug 4, 2024

    @OliviaRossi okay

    0l1v1aR0551Aug 4, 2024· 1 reaction

    @Jimmy360 text encoders are separate

    oaf40Aug 3, 2024· 2 reactions

    I'd just like to report back that generating a single 1024x1400 base image with 12 steps and Uni PC sampler takes approximately 55 seconds (50s base + 5s VAE decoding) on RTX4080S. Peak VRAM usage I've seen is 17.09 GiB. Peak RAM usage is around 42 GiB. 20 steps with Euler sampler take ~1.5 minutes with same resolution but the details are a bit better.

    clevnumbAug 3, 2024· 9 reactions

    It already destroys SD 1.5 and SDXL and SD3. Just needs trained variations, Loras, to fix the naughty bits problems it tends to have....prompt accuracy is magical in comparison to SD. I can't wait!

    roelfrenkemaAug 4, 2024· 1 reaction

    It doesn't yet, maybe in version 2.

    ChocolemurAug 6, 2024

Actually, you can't say it destroys SD1.5 when it needs a moon rocket to work, while 1.5 is happy with my tractor and can even drink a beer with me.

    KINGOALAug 3, 2024· 11 reactions

    Finally we see something exciting.

    L10n_H34r7Aug 3, 2024· 5 reactions

Just published two basic workflows with Civit-friendly metadata:

    - Schnell : https://civitai.com/models/619982/flux-schnell-4-steps-img2img-friendly-metadata-image-saver

    - Dev : https://civitai.com/models/620149/flux1-dev-l10n-flow-img2img-friendly-metadata-image-saver

Note: I can't find the right name for Civit to recognize the model on a "post image" from the menu; you still need to post it on the menu page.

    If you find the right model name please share !

    Enjoy ;)

    macrossramenAug 3, 2024· 6 reactions

This is truly a great model. I'm running Automatic1111 on CPU with 32GB RAM and this model is performing excellently. It's able to fulfill all my prompts without extra LoRAs or embeddings so far. I'm loving it.

    headupdefAug 3, 2024

    how did you get it to run on auto1111?

    TheP3NGU1NAug 3, 2024

    Automatic1111... I'll need to see proof of that, lol.

    TheP3NGU1NAug 3, 2024

@headupdef judging by the quality of the image they uploaded as Flux, they didn't. Probably uploaded an SDXL image... but I hope I'm wrong. I hope they prove it so, because that would be great news.

    macrossramenAug 4, 2024

    @headupdef Well, good news. It caused me to have to completely reinstall Automatic1111. Not sure what happened, but it happened. I DID find on HuggingFace a "schnell" version that is only 11GB and it hasn't broken my installation yet! I just put the file in the usual spot and it's working. Not sure why the other one broke me.

    macrossramenAug 4, 2024· 1 reaction

@TheP3NGU1N Well, good news. It completely broke A1111. Had to reinstall it from scratch to get it to work again. Lots of file corruption. I did find a "lighter" version that's 11GB and it's not breaking me yet, but it's fussy as hell. For a short while I was happy *cries*

    TheP3NGU1NAug 4, 2024

    mhm..

    EmperorMagnusAug 3, 2024

    Can someone explain how to install this model on forgeui/automatic1111? plss

    0l1v1aR0551Aug 3, 2024· 1 reaction

    not yet - only comfyui and swarmui for now

    TheP3NGU1NAug 3, 2024· 1 reaction

Give it a few days/weeks. They'll get there. From what has been said and read, there isn't any reason why it won't happen, but as with all good things AI, ComfyUI gets it first ;)

    pqnisherAug 3, 2024

    @OliviaRossi Is there anything special that one has to do inside of COMFYUI to use this model, or can you load it like a regular SDXL model?

    0l1v1aR0551Aug 3, 2024· 1 reaction

    @pqnisher https://www.youtube.com/watch?v=tXO6SJ-6Eb8 all is there (2nd half of the video)

    pqnisherAug 3, 2024

    @OliviaRossi Found it thanks! Very Useful Video with all the links. I just finished my first test image, running a 64GB RAM, Nvidia 4070 RTX GPU - 8 GB VRAM - 4:30s runtime at 13.4s/it - Took a min, but I'm excited to play around with it. I appreciate the response. Cheers.

    0l1v1aR0551Aug 4, 2024

    @pqnisher good luck

    GRM80Aug 5, 2024

    @pqnisher There is a special workflow to use, but I tried it on a 3060 12g, and it didn't work for me.

    1492835Aug 3, 2024· 1 reaction

    I'm completely blown away by the quality of images from this model. If LoRa training was possible on this I could easily see this overtaking SDXL.

    ZootAllures9111Aug 3, 2024· 1 reaction

    It has a seriously bad case of "Dreamshaper Girl Face" IMO, SD3 Medium is quite frankly a lot better at stuff like "hard realistic" portraits of people and such.

    sunshineggglAug 7, 2024

What a pity. Because of the 12 billion parameters, no LoRAs will be made for this model.

    SPoletaevAug 3, 2024

    Is it just me, or is FLUX censored below the waist?

    TheP3NGU1NAug 3, 2024

Yep, very much so. Breasts are possible, though you will have to fight it a little sometimes. Butt shots are possible too... but crotch, thus far, I haven't seen it.

    ZootAllures9111Aug 3, 2024· 4 reactions

    Why do people think that any for-profit company is ever going to release a foundational model that they intentionally went out of their way to train on well-captioned NSFW? It's never going to happen, it's not censorship, it's just something they (for obvious reasons) didn't intentionally put much effort into.

    Lewd_N_GeekyAug 3, 2024

    From what I read on Reddit, they are not planning on changing it either. I hate censorship so damn much.

    TheP3NGU1NAug 3, 2024

    @Lewd_N_Geeky From a business standpoint it is logical to block it. Get your name in the news or such because someone decided to use your model to make CP or whatever, bam, your company is in hot water. Now if someone else comes along, makes a finetune or whatever, then the blame is on them. Not the company.

    Lewd_N_GeekyAug 3, 2024

    @TheP3NGU1N Oh I know. Well someone will eventually come along and make it so NSFW art can be made. It always happens lol

    theominAug 4, 2024

    @diffusionfanatic1173 Eh some of it is mild censorship, because it is missing obvious tags that are not always lewd (like saliva) and seems to struggle a bit. Though I agree with the basic point that by default it is not going to be great at nsfw unless they do the unprecedented of well-tagging porn at the start.

    halcyonpurgeAug 4, 2024

    >Why do people think that any for-profit company is ever going to release a foundational model that they intentionally went out of their way to train on well-captioned NSFW?

    people seem to forget just how wild and out of pocket OpenAI's DALLE-3 model was when it was picked up by Microsoft late last year. you could straight up prompt for fellatio and intercourse with full penetration detail, as long as you were creative with your prompt structure and how you "dressed" the scene with distractions to fool the image filter. even right now, Chinese users can go to Cici AI and get even more obscene content, because while there's a baseline censorship filter, that DALL3 implementation cares more about copyright infringement than nudity or intercourse. the foundational model is there, and it is stuffed to the gills with pornographic content; it's just locked behind ClosedAI.

    CreepybitAug 4, 2024

@diffusionfanatic1173 Well, you can generate NSFW images through the API. But then you have to pay for each image as well. So I believe you might be a bit naive if you think they've trained it only on SFW images and that's why we can't create boobs.

    It's a simple matter of making some extra cash from those who really want to create NSFW images.

    https://fal.ai/models/fal-ai/flux/api

    roelfrenkemaAug 4, 2024

    Apache 2.0

    Dex96Aug 3, 2024

What are the system requirements to use this model?

    ZootAllures9111Aug 3, 2024· 1 reaction

    a lot, like 12GB VRAM and 32GB system ram at least. It's 4 billion parameters larger than even the API-only SD3 Ultra.

    PosfalAug 4, 2024· 1 reaction

People have been able to generate images with a minimum of 6GB VRAM (at least that's what I gather from the comments here). Only works with ComfyUI for now

    almarkAug 4, 2024

    Let's just say, I'm using my usual Nvidia GTX 1650 with 4GB. I have 16 GB Ram, and I normally use a 15 GB virtual memory on my HD.
    At 768 x 768 I can finally load this model on my 2 Solid State drives, running about in total 56 GB of hard drive space for a virtual memory.
    It just works.

    fantaseedAug 3, 2024· 5 reactions

    Who is the user who uploaded this model? There is no link to the user.

    Furthermore, the link below also distributes a model with the same hash, but which one is the real one?

    https://civitai.com/models/617609/flux1-dev?modelVersionId=690425

    TheP3NGU1NAug 4, 2024· 4 reactions

Download this one. Maxfield, the owner of the website, uploaded it, so you know it's the correct one.

Otherwise, all related links to Hugging Face, where it was originally uploaded by the creators, are in the description, as you will also need other components, such as two CLIP models and a VAE.

    noname3332Aug 4, 2024· 4 reactions

    This was posted by CivitAI.

    TIK7778899Aug 3, 2024· 2 reactions

Plsssssss, a Flux anime/cartoon checkpoint. Hope people make Flux LoRAs. Flux Pro + Pony Diffusion v7 or more = opening Pandora's box on steroids -- https://www.youtube.com/watch?v=77zAWTmDmiU

    androsynth7610Aug 4, 2024· 9 reactions

12 billion parameters... that is a respectably sized neural network. For a painting AI... unheard of, at least for a publicly available model. LLMs get really smart at 13B. Interesting.

    cathylevermanAug 7, 2024

Yeah, just imagine what they will be able to do at 100B. Maybe we won't need LLMs at that point, since the text will be able to be generated in-image?

    crystalkalemAug 4, 2024

    When will there be an on-site generation option available?

    denrakeiwAug 4, 2024· 3 reactions

    Most likely never, as it doesn't have a commercial license.

    TheDarkLurkerAug 4, 2024

I honestly don't think so, since the requirements to work correctly are so high...

    TheP3NGU1NAug 4, 2024

@denrakeiw The Pro version has options for enterprise-level inquiries for its usage. What that gets you is a shrug, but I'd imagine, if not now, it will eventually be available for use. They need to make their money back on it somehow.

    denrakeiwAug 4, 2024· 2 reactions

    @TheP3NGU1N I've already made a few images with the Pro version, and I don't find them significantly better than the free version. But at least the images from the Pro version can be used commercially.

    Weatherby43Aug 4, 2024

    Will this be trainable by the community? Using something like Kohya_ss GUI, or OneTrainer?

    JaneBAug 4, 2024· 2 reactions

    It is! SimpleTuner supports loras now and I have already seen a finetune

    WithoutOrdinaryAug 4, 2024

    Current training minimum requirements are 40-80GB. This will likely improve, but as of day 4, that's where we are.

    While I have not yet tested it, apparently training across multiple GPUs is not only supported, but may be preferred for FLUX training.

    TheP3NGU1NAug 4, 2024

    @JaneB the 'experts' say that wasn't a true finetune, but a modified clip. The model itself was unchanged.

    TheSauceOfAllAug 4, 2024· 3 reactions

    The hands are good and the text is excellent, but, for the average user, until that size comes down, it's not really viable. I hope they get enough commercial interest to fuel development. I consider myself lucky to have a dedicated 3060 with 12GB VRAM in a system with 32GB RAM, at 20sec/it it's not doing anything the others can't do in the same time frame, even the fp8 version only shaved a second off it. It's got potential but until it can be more widely adopted, it'll probably sit on the community shelf like Cascade.

    roelfrenkemaAug 4, 2024· 1 reaction

    Great model indeed but not finished yet. Especially reality needs some work as do textures, like skin textures. See my blog

    pope_phredAug 4, 2024· 13 reactions

Is it just me, or do a majority of the women's faces have a cleft chin and plump lips which are slightly parted? Don't get me wrong, FLUX is great, but it's kind of difficult to get away from that particular face.

    TheP3NGU1NAug 4, 2024

The chins are something many people have pointed out, though if you give a description of the face, it can easily be fixed most of the time.

    1492835Aug 4, 2024· 2 reactions

    Thanks, now I can't unsee the chins lol.

    3041519Aug 5, 2024

    Prompt it away. Not too tricky.

    RedPinkRetroAug 5, 2024

    Cleft chins are my absolute nemesis... and there are so many models with that issue... 😪
    the experiments I have undertaken to get rid of them... I really hope it can be prompted away

    3041519Aug 5, 2024· 1 reaction

    @redpinkretro it isn't too difficult. Just describe the alternative face you want in natural language.

    https://civitai.com/images/22787004

    There's an example of one I made. There are many more I've uploaded that do not have the cleft.

    alexgevo781Aug 5, 2024· 8 reactions

At first I was like, crap -- another model. Kolors came out; I liked it but wasn't very impressed, as current SDXL models are just incredible. Took the plunge and fixed the errors in generations for Flux -- after half a day playing with it -- folks, for me, it's hard to go back to 1.5 and SDXL. Once you've had Flux, you can't go back... lol. Jokes aside, this is from the ex-Stability AI crew; it is SD3 reborn and better. THIS FLUX IS ADDICTIVE AF!!! and loving it!

    Legolas_son_of_LegoladAug 5, 2024

    Haven't used this model yet (dunno if it's worth trying with my 6GB 1660) but I like what I'm seeing.

    HOWEVER

    It does seem that the model suffers from a bad case of same-face syndrome.

    GRM80Aug 5, 2024· 1 reaction

As far as I know, you need at least 12GB to make it work. I have a 3060 12GB and couldn't make it run with Comfy, and that was the low-detail model version of Flux. So no, it won't work on 6GB.

    TheP3NGU1NAug 5, 2024

You can get it to work on as low as a 4GB system (running in lowvram mode) using the fp8 models (https://huggingface.co/Kijai/flux-fp8/tree/main), but it will be slow, very slow. The schnell model would be best, as it works well at just 4 steps (saving you time); you can then just upscale, but either will work.

    DD_Ai_artAug 5, 2024

@TheP3NGU1N may I ask what you recommend, if anything, to run on an 8GB 4060 with 32GB system RAM? Is it possible? Thank you!

    TheP3NGU1NAug 5, 2024· 1 reaction

@ddamir247931 with the fp8 models (see link in my other comment) & Comfy in lowvram mode, you should be fine, I think. You are probably on the minimal side of normal RAM, however, since the CLIPs run on CPU/RAM during the process. That may be the only issue.

    Just don't expect things to be fast.

    DD_Ai_artAug 5, 2024

@TheP3NGU1N thanks, I will try. I don't expect speed, but want to try something new; I kinda got bored with SD(XL) and Pony -- too many generic images, and it's hard to create something that is not just a random iteration of image(s) I've already created...

    celestialFelineAug 6, 2024

You can run the fp8 model with 11GB in lowvram mode, but it's hella slow. Like 30-70 minutes slow. 6GB would be a nightmare...

    DD_Ai_artAug 6, 2024

Dev fp8 model on a 4060 8GB - 20 minutes, 512x512, 20 steps, 63 s/it.

I was expecting slow generation and poor performance, but at 4-5 minutes per image; 20 minutes of waiting on a mid-range GPU -- there is no image worth that wait.

Unusable. I'll try the schnell model, but I'm not optimistic; this is only for high-end xx80 and xx90 cards.

EDIT: Strange, GPU usage is almost constantly below 20%, with just a few peaks to 100%, and usage of system RAM is only 15GB. Something is off?
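
The timings quoted in this sub-thread follow directly from total generation time ≈ steps × seconds-per-iteration, which is why step count matters so much on low-VRAM setups (and why the 4-step schnell model is attractive there). A quick check:

```python
def gen_minutes(steps: int, sec_per_it: float) -> float:
    """Approximate wall-clock minutes for one image, ignoring model load and VAE decode."""
    return steps * sec_per_it / 60

print(gen_minutes(20, 63))  # 20 steps at 63 s/it -> 21.0 minutes, matching the ~20 min report
print(gen_minutes(4, 63))   # schnell's 4 steps at the same rate -> 4.2 minutes
```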

    KawakiuchihaAug 5, 2024

Did they add SD support?

    GRM80Aug 5, 2024

    No, it's comfy only!

    TheP3NGU1NAug 5, 2024· 2 reactions

It will come eventually; there are no technical reasons why it can't work, the webuis just need tweaking. Give it a few days to a week or two and I bet A1111/Forge/ReForge will catch up.

    fronyaxAug 6, 2024

    It's Comfy and SwarmUI only currently.

If you like A1111, you'd love SwarmUI; it's ComfyUI with a beginner-friendly UI.

    MadMadoxAug 5, 2024

I have an error. Yesterday it worked like gold. Today I fired it up and the images all generate grey. No matter what settings I use, it doesn't work. I am not using anything NSFW, and the prompt is short. Has anyone had the same? I guess I'll have to download it again...

    jonathan_zakkAug 6, 2024

    try refreshing models, brother, it works for me

    reallucifer13Aug 6, 2024

    this model is the model you want to figure out soon. i just fluxed all over the place.

    2385987Aug 5, 2024

I installed DualCLIPLoader and set the parameters t5xxl... and clip_l, but I can't set "Type" to flux; there are only sdxl or sd1. What am I doing wrong? Also, my ComfyUI doesn't see ae.st, even though I placed it in VAE

    Holographic_Chimeras_AIAug 5, 2024· 2 reactions

Use the custom node ComfyUI Manager and hit the Update All button (if you do not have it, you can do it manually as the ComfyUI readme file guides you). This will probably update DualCLIPLoader, and as soon as you restart the UI you should see the flux choice. Then, change the filename ending to safetensors where needed (e.g. ae.safetensors). Make sure you put the models in the UNET folder and not in checkpoints. This should resolve your issue
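
For reference, the usual ComfyUI layout for these files looks like the following (paths are the common defaults and the exact filenames may vary with your download):

```
ComfyUI/models/unet/flux1-dev.safetensors      # the diffusion model (not in checkpoints/)
ComfyUI/models/clip/t5xxl_fp16.safetensors     # T5 text encoder (fp8 variant also works)
ComfyUI/models/clip/clip_l.safetensors         # CLIP-L text encoder
ComfyUI/models/vae/ae.safetensors              # the Flux VAE
```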

    4598756Aug 5, 2024

Just update ComfyUI; I had the exact same problem before updating

    reallucifer13Aug 6, 2024

That was happening to me when I first installed. I restarted my computer a few times and that worked for me.

    4345309Aug 5, 2024

Looking at your work, the ladies' faces are based on the same scheme and shape and are quite boring. Is this a shortcoming of the model, or do you need to type in a bunch of specific text to give it a unique face?

    3041519Aug 5, 2024· 2 reactions

    Just describe a face that isn't the default and you'll be fine.

    DD_Ai_artAug 6, 2024

Yes, I also noticed the same face, unless you are creative about what that person looks like.

    Mumu1188Aug 5, 2024· 1 reaction

Bumped into a weird situation when using img2img: higher resolution with high denoise gets a blurry image. Anyone else?

    Checkpoint
    Flux.1 D

    Details

    Downloads
    254,960
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/2/2024
    Updated
    5/14/2026
    Deleted
    -

    Files

    flux_dev.safetensors

    Mirrors

    HuggingFace (130 mirrors)
CivitasBay (1 mirror)
    TensorHub (1 mirror)

    flux_dev.safetensors

    Size:
    22.17 GB
    SHA256:

    Mirrors

    Available On (4 platforms)

    Same model published on other platforms. May have additional downloads or version variants.