CivArchive

    Please check out the Quickstart Guide to Flux for all the info you need to get started!

    FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. For more information, please read our blog post.

    Key Features

    1. Cutting-edge output quality, second only to our state-of-the-art model FLUX.1 [pro].

    2. Competitive prompt following, matching the performance of closed-source alternatives.

    3. Trained using guidance distillation, making FLUX.1 [dev] more efficient.

    4. Open weights to drive new scientific research, and empower artists to develop innovative workflows.

    5. Generated outputs can be used for personal, scientific, and commercial purposes as described in the flux-1-dev-non-commercial-license.

    Usage

    We provide a reference implementation of FLUX.1 [dev], as well as sampling code, in a dedicated GitHub repository. Developers and creatives looking to build on top of FLUX.1 [dev] are encouraged to use this as a starting point.
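    Assuming the `diffusers` library and the Hugging Face repo linked below, usage can be sketched roughly as follows. `flux_dev_kwargs` is a hypothetical helper of mine, not part of any official API; the 50-step / 3.5-guidance settings follow the model card's suggested defaults.

```python
# Sketch of generating with FLUX.1 [dev] via diffusers' FluxPipeline.
# flux_dev_kwargs is a hypothetical helper; guidance 3.5 / 50 steps
# follow the model card's suggested defaults.

def flux_dev_kwargs(prompt, width=1024, height=1024,
                    steps=50, guidance=3.5):
    """Bundle sampling settings for a FluxPipeline call."""
    assert width % 16 == 0 and height % 16 == 0  # latent-grid constraint
    return dict(prompt=prompt, width=width, height=height,
                num_inference_steps=steps, guidance_scale=guidance)

RUN_GENERATION = False  # flip to True on a machine with the ~24 GB of weights
if RUN_GENERATION:
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
    pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use
    image = pipe(**flux_dev_kwargs("a cat holding a paper sign")).images[0]
    image.save("flux-dev.png")
```

    The `enable_model_cpu_offload()` call is what makes a 12B-parameter model feasible on consumer cards; without it the full bf16 weights need roughly 24 GB of VRAM.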

    Learn More Here:
    https://huggingface.co/black-forest-labs/FLUX.1-dev

    Description

    Pro 1.1 is not available for download, and we currently don't have a way to list a model for on-site generation only.

    Comments (172)

    SilmasOct 4, 2024· 2 reactions

    Pro 1.1 blows me away!!!! Incredible!

    QH96Oct 4, 2024

    what is pro 1.1?

    SilmasOct 4, 2024· 2 reactions

    @QH96 Flux.Pro 1.1 released today. :)

    mirek190Oct 5, 2024· 1 reaction

    no open source, so ...

    VestuOct 6, 2024

    @Silmas How to use it?

    LemonSparkleOct 9, 2024· 2 reactions

    @QH96 2% better officially ^_-

    SilmasOct 9, 2024· 1 reaction

    @LemonSparkle It might be, but I use it 200% more now :)

    LemonSparkleOct 9, 2024· 1 reaction

    @Silmas I use Schnell lots, because it's not too bad (and I can use it external, and cause I'm poor :x)

    SilmasOct 9, 2024· 1 reaction

    @LemonSparkle You are right, Pro is expensive. I use it directly via my API client; each image costs 4 cents.
    Quality-wise it is very strange; sometimes the images from Flux.Dev are even better than the ones from Pro.
    Schnell is a clear downgrade from the mentioned models, but is OK, as you said.
    An alternative is ZavyChromaXL, which is quite good and has more flexibility.
    I think you could use Dev too, if you try it with Forge.

    LemonSparkleOct 9, 2024· 1 reaction

    @Silmas I've used Dev a couple times on the huggingface demo page, but if/when I can get GPU allocation it basically uses up all my GPU time for the next hour or so, so that's it, maybe one pic per hour if it even gives me allocation which is hard to get with it being so popular.

    SilmasOct 9, 2024· 1 reaction

    @LemonSparkle If you don't spend money on it, hardware, BFL API or rent a machine, you have to watch Ads on CivitAI, 1 Pro image each day. Flux is pay to get Buzz. :D

    0l1v1aR0551Oct 5, 2024· 5 reactions

    Pro 1.1 very often generates bad hands with fused or 6-7 fingers (tested here on CivitAI on-site image generation)

    LazmanOct 31, 2024

    Gotta suck to pay for those results. I'll stick to local generation.

    infrezz721Oct 5, 2024· 5 reactions

    I wonder what the minimum system requirements are.

    VigoVonHomburgOct 6, 2024· 2 reactions

    Any tips to speed up the generation process on an RTX 3060 12GB? Using 20 steps, Euler, CFG 1, it takes 3-4 minutes per image.

    NullByte45Oct 6, 2024

    What size of image are you making?

    VigoVonHomburgOct 7, 2024

    @KeywordBattle 896x1152

    anduxOct 7, 2024

    make them 1152x768, a perfectly balanced resolution. and i'd do 25 steps in dev with 3.5 cfg, but this won't speed things up

    NullByte45Oct 7, 2024

    @VigoVonHomburg Like @andux said I would change the image size. This is what I use
    Image size:

    1024 x 1024 (square)

    832 x 1216 (landscape/ portrait)

    1344 x 768 (16:9)

    I do 20 steps, Euler - Beta, CFG 1, and it takes about 30 seconds with an RTX 4070 Ti Super (16GB)
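    The sizes listed above all cluster around one megapixel with sides divisible by 64, which is the usual rule of thumb for SDXL/Flux-era models. A throwaway sanity check (`check_resolution` is my own helper, not part of any tool mentioned here):

```python
# Hypothetical helper: check that a candidate resolution sits near
# 1 megapixel and that both sides are multiples of 64, which is what
# the suggested sizes above have in common.

def check_resolution(width, height, target_mp=1.0, tolerance=0.15):
    """True if (width, height) is within tolerance of target megapixels
    and both sides are multiples of 64."""
    mp = width * height / 1_000_000
    divisible = width % 64 == 0 and height % 64 == 0
    return divisible and abs(mp - target_mp) <= tolerance * target_mp

for w, h in [(1024, 1024), (832, 1216), (1344, 768), (1152, 768)]:
    print(w, h, check_resolution(w, h))  # all four pass
```

    Odd sizes like 1920x1080 fail the divisibility test, which is one reason they tend to generate slower or with artifacts.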

    Rekano33Oct 7, 2024

    @KeywordBattle Can you tell me, is generation taking 30 seconds from the moment you click queue, or from the moment the first sampling step starts? Because on my 4080 Super it takes 40 seconds from the moment I click queue...

    NullByte45Oct 7, 2024

    @Rekano33 From the moment I click queue, it takes on average 30 seconds.

    Rekano33Oct 7, 2024

    @KeywordBattle Are you using ComfyUI or another interface?

    NullByte45Oct 7, 2024

    @Rekano33 ComfyUI

    VigoVonHomburgOct 7, 2024

    @KeywordBattle Can you please share your workflow?

    NullByte45Oct 7, 2024

    @VigoVonHomburg Simple workflow nothing special. I got it from here
    https://stable-diffusion-art.com/wp-content/uploads/2024/08/flux1-dev-fp8.json

    VigoVonHomburgOct 7, 2024

    @KeywordBattle So, are there any more necessary steps that I need to do after installing ComfyUI and applying this config? I am totally new to this, I am using WebUI Forge.

    NullByte45Oct 8, 2024

    @VigoVonHomburg Replied in a DM. If anyone needs help reach out in DM.

    firecat6666Oct 8, 2024

    @VigoVonHomburg This probably won't help you diagnose the problem but for reference, I have the same GPU, same config as yours, Forge WebUI and one 1024x1024 image takes between 1.5 and 2 minutes for me (can't remember the exact number off the top of my head).

    Also, on the top settings in Forge WebUI (the stuff at the very top of the web page), I have Diffusion in Low Bits: Automatic, Swap Method: Queue, Swap Location: CPU and GPU Weights (MB): max value. Now that I think about it, I remember having to change one of these values from the default so the program doesn't eat up all my 32GB of RAM but I can't remember which one I had to change.

    sevenof9247Oct 9, 2024· 1 reaction

    it depends strongly on the model and how fast your SSD is (NVMe SSD)

    this is the fastest I've tried so far in s/it

    https://civitai.com/models/638187?modelVersionId=721627

    only 8GB VRAM usage and no additional VAE, CLIP, or T5XXL

    30 steps (1024x1024)
    10 sec until the first iteration, 80 sec all in all on my RTX 4060 (16GB)

    LemonSparkleOct 9, 2024

    @sevenof9247 only 8gb? Wow, not bad, now if only I could get it running on CPU, I got a spare halfway decent PC running attached to my TV (for like Netflix and stuff), I would totally set that to output a batch of images overnight, even if it took half an hour per image it wouldn't be too bad if I woke up to a dozen image attempts or more.

    tedbivOct 7, 2024· 3 reactions

    is the 22GB flux-dev model really fp32 or is it fp16?

    condzero1950Oct 11, 2024

    I suppose it's comparable to fp32; it's listed as bfloat16.

    Le_FourbeOct 20, 2024

    It's 23.8 GB on Hugging Face, and it's fp16.
    I assume fp32 would be around 48 GB, but only high-end server GPUs would be able to run that.

    tedbivOct 22, 2024

    yeah, i tried creating an fp32 version and it is ~46GB. i wonder if flux1-pro is fp32? it sure makes noticeably better images, at least on civitai...
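    The figures in this thread line up with simple arithmetic: at roughly 12 billion parameters, two bytes per weight (fp16/bf16) gives the ~22-24 GB file being discussed, and four bytes (fp32) roughly doubles it. A quick sketch, ignoring the VAE/text encoders and the GB-vs-GiB difference:

```python
# Back-of-the-envelope weight sizes for a ~12B-parameter model,
# matching the fp16 (~22-24 GB) and fp32 (~46-48 GB) figures above.

PARAMS = 12e9  # approximate parameter count of FLUX.1 [dev]

def weight_size_gib(params, bytes_per_param):
    """Raw weight size in GiB (1 GiB = 2**30 bytes)."""
    return params * bytes_per_param / 2**30

fp16 = weight_size_gib(PARAMS, 2)  # fp16/bf16: 2 bytes per parameter
fp32 = weight_size_gib(PARAMS, 4)  # fp32: 4 bytes per parameter
print(f"fp16/bf16: ~{fp16:.1f} GiB, fp32: ~{fp32:.1f} GiB")
```

    The "22GB" and "23.8Gb" numbers are the same file measured in GiB versus decimal GB, which explains the apparent discrepancy.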

    condzero1950Oct 31, 2024

    @tedbiv Pro seems to be better based on what I'm seeing.

    civitai_acOct 9, 2024· 2 reactions

    Does anybody know how to make an image of a man with X-shaped (crossed) arms?
    My task was to depict a man who crossed his arms in a sign of prohibition, as if he were saying "forbidden", "no". But alas, no model has managed it. The only option left is ControlNet with OpenPose.

    TheP3NGU1NOct 9, 2024

    A solid descriptive prompt would be a good start, but it would be pure luck if it worked every time.
    ControlNets imo will be hit or miss.


    For complex poses like that you might get away with img2img of an existing photo. Denoise somewhere around 0.6, probably.


    Best way: a LoRA.
    Use something like DALL-E to get the reference photos. Create the LoRA with the on-site trainer. Then probably use that LoRA to make more reference photos with Flux, and make a version 2 for final tuned results.

    This is the generalized way people make new concept/complex LoRAs.

    condzero1950Oct 11, 2024

    Took me a few tries, but I think I managed to create something like what you are looking for. Used a custom scheduling_flow_match_dpmsolver_multistep designed for FLUX.

    Prompt: a man who crossed his arms in a sign of prohibition, as if he were saying "forbidden", "no".

    CFG: 6

    Steps: 40

    DPM++ 2M SDE

    seed = 8026638075122489000

    I published the image under my name, so you can look for it and see for yourself.

    Tozi_WhiteOct 10, 2024· 8 reactions

    What is currently the best version of Flux possible to use locally for a 16 gb vram GPU?

    LemonSparkleOct 10, 2024

    And has anyone gotten it to run just on CPU? You know like one of us poors :D?

    mattratcathat967Oct 11, 2024

    I run Dev on a 4080 which is 16 GB. About 30 seconds for a 896 x 1152 with 20 steps.

    ShimakazeEXOct 10, 2024· 2 reactions

    Cool!

    marguerreOct 11, 2024· 2 reactions

    Fantastic tool. Thank you.

    Desmond350Oct 11, 2024· 8 reactions

    I cannot get this model to run and I'm using ComfyUI. I put it in the checkpoints folder. Why won't it run?

    Elijah93Oct 11, 2024· 1 reaction

    I have the same problem

    TheP3NGU1NOct 11, 2024· 1 reaction

    you need to put it in your unet folder, not checkpoint.

    Desmond350Oct 12, 2024

    @TheP3NGU1N okay I'll try that

    Desmond350Oct 12, 2024

    @TheP3NGU1N Is there a video tutorial on how to use unet models? I still don't know which node I'm supposed to connect to which node.

    TheP3NGU1NOct 12, 2024· 1 reaction

    @Desmond350 tons of tutorials around, youtube or even here on civitai, tho i would suggest just finding a workflow that has what you need, applying it in comfy, getting any missing nodes (using "ComfyUI Manager"; if you don't have it, get it right now) and enjoying. After that point it's just a matter of having your checkpoints and loras (if you have any) in the correct folders.

    Desmond350Oct 12, 2024

    I followed instructions for installing the manager based on a video, but I don't know why the manager button is not showing up after I installed it. I followed all the instructions... (Wait, I noticed I have to install git... I will come back and edit this again after I'm done resolving the issue.) (I installed it successfully now, but idk how to use it; the file is still in the unet folder and I still cannot run Flux.)

    LemonSparkleOct 12, 2024· 6 reactions

    I hope Civitai implements Flux Schnell LoRA training. I use Schnell a lot, but it's almost all Dev LoRAs on site. It's possible to make Schnell LoRAs, there are even a few here, but they need to set it up so we can make them! Schnell can be really good; it's also useful to let it do more than 4 steps sometimes (something likewise not available here on-site).

    LazmanOct 31, 2024

    I don't get why people give corporations so much money to rent services, when they could put that money away, upgrade their PC, and run the same services locally without limitations.

    LemonSparkleNov 26, 2024

    @Lazman I'd love to do even more locally if I could afford to, but I'm going to need about three-fifty. 🐲
    lol xD

    LazmanDec 8, 2024· 1 reaction

    @LemonSparkle I managed to scratch together enough for a decent PC for AI, but right now I've gotta watch what I spend cuz I'm only living on disability, and maintaining enough money to eat about 2 thirds of what I should be eating to remain healthy, while not depriving myself of the odd luxury, is not easy.

    I'm considering buying a minifridge for my room where I spend all my time, but If I do, that means I got about 100 bux (CAD) to last the last 2 weeks of the month for food.

    I had a look at your profile there in the link, says you're an artist and getting into AI art. Are the images on that page from your art or AI art? If from your art, then that's a decent level of skill. I'd stick to the art, and maybe just use AI to polish it a bit if need be. Or if you've got a good amount of it at that quality, use it to train a lora to produce your art style. In general when it comes to art, unless you can get jobs from indie game devs or website designers looking for graphic design elements for their site, there probably isn't much money in it outside of the porn industry (at least, that I can think of).

    AI art might sell some, but since these stupid sites exist that anyone can log in, and with zero effort, and effectively no setup hassle, utilize millionaire hardware to produce top tier images. I honestly don't see any money in AI art at all besides the porn industry, and even then, it's so diluted that you may only make anything in the niche fetishes.

    Real art still has some appeal, due to making characters with real hands that have 4 fingers and a thumb, rather than mangled clumps of flesh, lol.. Also, there's a lot of general inconsistencies in much of AI art, that many people find offputting. Like, how it doesn't understand the basics of physical continuity. Like, if your character has a tail, and the tail goes from the tip, up to the arm, but then doesn't actually continue to the rear, almost making it look like the tail is coming outta the arm.

    Much of the issues come from the fact that the original makers of the models (the millionaires/billionaires with the hardware good enough to train the big ones), decided to use square images for some reason.. despite the fact that almost literally no image that people want to generate is going to be square. So, the model is left to reinterpret much of its training data when producing portrait or landscape images.

    Side note: I also kinda gave up on artistry since getting into AI art. Granted, it's never come easy to me because I judge myself so harshly and have such a high fear of failure that I often found it more stressful than relaxing. When I first got my graphic tablet several years ago, I did find it very relaxing. But in recent years, life = stress, so it's been more difficult to gravitate towards relaxing activities.

    I first hated on AI art, cuz it's gonna destroy people's ambitions for being artists, and cripple yet another of the more human parts of our society. But then I decided on a whim to try it out on my computer back just before September. At first, I got heavily into it for the first couple months, even began staying up 38 hrs at a time. Then toned it down a bit. There's a lot to learn when it comes to AI and AI art, it's an interesting and diverse topic.

    I still think AI art is bad for society, but I don't demonize it the way I once did. I do kinda wish the site stuff didn't exist though, cuz it increases the flow and over-saturates the market. Also, even for being AI art, it seems like a cheat. Cuz people like me spend months learning AI and setting up the programs and figuring out the workflows, etc. Just to have someone come along and dump 10 bux into a website and generate superior images cuz they're running flux dev on 100 A100 GPU's that cost 24-40 grand each, and the setup is already professionally optimized and configured..

    LemonSparkleDec 9, 2024

    @Lazman I can definitely do art, but I do use AI polishing now, I'm still super slow making images (always have been), but I can do an absurdres size images that used to take me 2 weeks in just 2 or 3 days now (or still 2 weeks if I'm being super autistic about single pixels nobody will ever see but me at 800% scale... again/still). I've been working on loras in my style, which is why I've only gotten one out in a month, while I see some people are pushing out 10+ loras a day.

    But I haven't ever made much money with my art in any event, (or the writing much recently for that matter, taking English and art in college sure worked out great for me and I'm def not crying rn .-. )...

    Anyway I guess if I had made any money from art maybe I'd have more than $7 in the bank right now. I can't fight AI, so gotta find a way to use it somehow. I have food and a roof over my head for the moment though, so not quite homeless yet, so I need to try to do something while I still can and still have my head above water, if only barely.

    LazmanJan 6, 2025

    @LemonSparkle Sorry for the late response. Discovered recently that there's a good chance I have dissociative derealization disorder. That name is a bit convoluted though. If I had to name it myself, I'd have called it "Lost in space syndrome". Cuz that's basically what it is. Weeks/months can go by and I'll just entirely gloss over things like paying bills, or responding to people online. It's not that I've completely forgotten about such things, but more like having a different concept of time. Though the disorder itself is characterized by, feeling foggy, or like you're on autopilot. Like, you can control your actions, but it's an almost forced conscious decision to do so, and everything else is just routine.

    If I had to guess, I'd say it's most likely caused by a lack of meaningful connection to anyone or anything around me.

    Anywho; yea, I had the same issue with art. Perfectionism.. It'd take me literally a month to finish a single image, and I still only finished a few. Got a bit better near the end, but then, AI, lol.

    "Autistic" Lol.. same. That seems to be more and more common these days. Maybe just that people are finally willing to admit it. It was pretty badly stigmatized in the past as being some form of mental retardation(PS: always hated that word..).

    "800% scale" LOL.. Me; when I'm wondering why pictures I took on my Samsung Galaxy S 23 ultra look like trash, but meanwhile I'm zoomed in on a person's face from across the street.

    Worst thing for me with art though, was eyes, anime eyes.. If you look at them long enough, none of them look right.. Even some done by professionals. Just look at Rukia from the first Bleach series.. her eyes took up half her head..

    10+ loras a day is the people using WD14 to auto-caption and not even looking at the results. If you drop some of those loras into lorainfo.tools, you'll notice that a quarter or more of the tags are the opposite of the thing the lora was intended to produce. While with art it's bad to spend too long making a piece (cuz you'll cook your brain on it and effectively 'over-train' by attempting too many corrections), lora creation is the opposite. Take your time, curate the dataset, edit the images, refine the settings.

    Eventually you'll have a Lora that is superior to any of those 10 a day people's loras. The only caveat to that is, burnout. Also very much an autistic thing, sadly(I know it is for me).

    "Art in college" Whenever I hear someone tell me that they're taking that, I'm just like.. 🤦🏽‍♂️. It's a bit of a scam course if you ask me. Don't get me wrong, there's some useful things to be learned. But it's like buying a 3000$+ non-electric bicycle when you're still renting an apartment. There's certain things that should only be done when you already have money and/or a career and own a home.

    Not your fault though, I imagine no one told you that. They probably all responded with "You go girl! Chase your dreams!" or something to that end, but they fail to acknowledge the lack of practicality and pragmatism. Cuz in our society, being 'nice' is more important than helping people to avoid potentially life destroying decisions.

    Even the teachers at the schools if you take a career course, will not tell you this. In fact, I took such a course and it was, I'd say at least 99% useless. Cuz the only maybe useful thing I got outta it was that it pushed me to look up a decent career finding website. And that website actually was pretty good, but that was something I did myself, the course was nothing but a class in self esteem. It's like if you've seen those stereotypical AA meetings where someone stands up, says their name, and everyone claps. It was pretty much 2-4 weeks of that.. And everyone was like "Wow, thx, this helped SO much!" and I'm like.. What did.. Nothing happened.. What am I missing here..?

    Sounds like you're about as well-off as I am, lol.. Well, maybe a bit worse off. RN I'm living in a shoebox-sized apartment in subsidized housing, in the same building as at least one practicing methhead.. They give him pipes for it downstairs because our society is fucked now.. It's better to enable people than to rehabilitate them.. And I worry every-time I leave this place that some bloody moron is gonna burn it down while I'm out and I'll lose everything.

    They shut off the stove in my apartment because the one guy living here that's legit part-retarded, kept cooking with too much oil, or high heats or something and filling the apartment with smoke and setting off the alarms. I know I said I don't like that word, but I mean it in a medical context. The guy is off in the head. He pisses in the shower, Instead of the toilet.. Like, not even While he's actually taking a shower.. and he collects rotten food in his room so the place is infested with fruit flies. And the staff here refused to do anything about it. And the guy wears ratty torn up clothes and goes through our trash.. I couldn't make this shit up..

    LemonSparkleJan 6, 2025

    @Lazman It's okay about the late response, I've done that before, and I wasn't expecting anything.

    An old art friend of mine told me before I shouldn't work at any more than 200%, though I still have trouble sticking to that, I do a lot in at least 400% or more, especially since I'm working with a slightly buggy 10+ year old laptop and trackpad these days and it's just hard to not do that (my mouse wasn't that much better, but it broke a while back, and I can't afford a new one, let alone one of those super awesome drawing pads that I probably could do 200% with).

    I knew art wasn't necessarily a super great choice in college, and so really I was doing English as my main thing with art as the secondary, and um... yeah, don't think I learned much from the actual classes for most of it still anyway, and really even most of the people I knew in college in all kinds of different majors (not just art or English folks) don't work in the field they studied besides maybe a couple of engineers, what was the point of it for any of us? I wonder sometimes.

    And even in art classes, you'd think the people running them would at least care about art, even if it wasn't a money maker, like they'd have some passion about it? but yeah... like I don't really think that painting art "Professor" I had actually knew anything about painting, or art, or teaching because she didn't actually teach us anything about style or technique or composition or um painting, or anything at all really, and remember out of the very little she did, she referred to any distinct art style, regardless of what it was, as just "painterly" which made me like 🫤... I'm pretty sure I learned a lot more from reading the encyclopedia as a little kid, or from youtube videos, or just trying out random stuff on my own. Maybe I should have gone to college for teaching... :x
    (Also, this one time she locked me and half of the class out of the studio while she was in there, when the bus that normally came an hour before was super late, and we arrived like just 5 minutes late to the start, "You should take the bus that comes two-hours earlier instead", ... like really? I'm sorry I was at my 30 hours a week minimum wage grocery job that I struggle to do while also going to school full-time and struggling to carry around my supplies because obviously I just didn't think to somehow BS my way into an effortless $75k/yr+ art professorship.... f#@%!^& bish, she was later than that to her own class before... 😒)

    So yeah anyway, English seemed like a decent main choice at the time, though I'm pretty sure I had that all down by the end of high school too, but now it's a few years down the road and ChatGPT and LLMs have come along out of the blue, and more and more it's taken a lot of the writing work that I used to be able to get, and people seem to just accept the meh stuff it outputs most of the time. I mean, I'm a writer but not the writing bestselling novels kind, making everyday stuff for other people was my thing, for a while anyway, but meh and free (or nearly-free) seems to beat out great and even lower than minimum wage prices (tbh).

    And I can't even fall back on the other writing adjacent work I used to be able to get on the side, like transcription and captioning because AI does that now too, it's nowhere near as good as my +99.95% accuracy (I've checked :U), but it still drove the price and work availability super low, I used to be able to freelance that sort of stuff and manage the equivalent of at least $6 to 10 an hour (or more sometimes) depending on how fast and diligently I worked, not a lot but okay-ish because I'm super frugal. But now I can maybe get the equivalent of $2 or $3 an hour, that is if/when I can actually find the work, partially because the prices people are willing to pay for it has dropped a lot, and there is a lot less out there to do now, and then also when I can get something it takes a lot longer to do because it's mostly just the stuff AI can't do when the recordings are absolutely terrible ear splitting quality, like imagine a business meeting on a construction site with 12 people (3 of them named Mike) recorded on a phone in someones pocket who keeps fidgeting with their jacket, and it's windy, and someone is using a jackhammer in the background, while a dump truck is doing donuts around them or something. And it's pretty soul crushing to spend a half a day just looking for work, and then spend another 3 to 6 hours working on something so difficult (that I used to even get paid extra for), to get maybe just 8 or 10 dollars total on a 'good day' (BEFORE self-employment taxes .-.)

    I guess we have kind of similar situations anyways, yeah. I wonder how many people would think about justifying the extra power use on decent computer (I mean, if I can ever afford one) because technically it could double as a space heater lol

    That whole career course/self-esteem thing reminds me of this one time when I was a lot younger, back in school, and they put me in a self-esteem group thing because someone had decided that low self-esteem was my biggest problem, and so I went to it a couple times, and yeah, idk if I really understood the point either, or if I really believed that singling out some kids as the "low self-esteem kids" was actually helpful... but anyway, then one day it was time to go to it again, and I remember that for once I was almost glad for it strangely enough, if only because everything else seemed to be dragging on forever and the change of scenery would have been something different at least, but then they were like, "No, you just stay here, you don't really belong in that group anyway."
    And I was just like... 😧...

    Anyway, some more bad stuff happened that made me depressed recently, and I'm still poor, and I still don't have any supporters on my Ko-fi though I've been posting highres stuff on it if I ever do get any (the 1% of my goal is just me pretending not to be so lame). But the good news is after all my bills this month my bank balance has recovered slightly to be in the whole double digits, so I guess there's maybe 10% less existential dread now? 🫠

    Thanks for that lora info link btw, I've been wondering how I could do exactly that.

    LazmanJan 15, 2025

    @LemonSparkle "self-employment taxes" Why on earth would you even register such a job with the government? The part of the beauty of freelance is supposed to be, to be able to dodge the proverbial tax man. Maybe there's some reason you don't have a choice. But, I see people every day, basically volunteering to give the government more money, that they don't have to..

    Like all these people that barely have money to get by, and still pay when they get on the streetcar. I just sit near the door, and if the transit authority get on, I get off and catch the next one. Or I quickly tap my card the few times I'm not able to make an escape. One time I even dodged my way right past one of them (almost ended up shoving the guy aside) just to get off, lol..

    But, naw, I won't give the gov a shiny red penny if I can help it. Not until they begin making it worthwhile to allow them to govern us. But, they won't even regulate the corporations that are consistently lowering our quality of life. Cuz they're getting kickbacks and payouts to not regulate them, or they remove regulations that were previously in place. So, if they need money so bad, they can go beg their billionaire buddys for it.

    "double as a space heater"

    Lol.. I know what ya mean, although, tbh; my current computer is probably about the coolest computer I've owned.. especially for being the most powerful computer I've ever owned. The chips heat up some during heavy processing, but the case itself is at or below room temp. Though, it does have 7 fans, 10 if you include the 3 on the video card, and also an AIO liquid cooler for the processor.

    https://www.youtube.com/shorts/2tNUlMZ2xz8

    Also the most beautiful PC I've ever owned, lol..

    Oh, looking back at your first message; why do you use Flux Schnell a lot? Isn't Dev better quality? And you can get Flux Dev models that are close to the same size as SDXL.

    So, they decided to help your self esteem, by putting you into a group, then telling you that you "don't belong" in that group? Lol.. Sounds like some assholes idea of a practical joke.

    speaking of joke, one time when I was younger, I went to this job find class, where they help you set up a resume, and do interviews and such, and they asked me what were some positive things about myself. Well, when I was younger, I was especially bad at trying to answer that question(actually, right up until somewhat recently, cuz of self esteem issues). So, I hummed and hawed for a while, then responded with "Well, I've been told I have a good sense of humor".

    I figured that'd be enough for them to move on, but she kept poking at it "What do you mean by that?" So, I responded with "Well, for starters, my life's a fucking joke..". Lol.. This one older native woman there lost her shit (laughter), but the people running the thing weren't impressed, and I got in crap, lol..

    Glad you liked that lora tool. I actually found one since I last messaged, that can be used locally.

    https://github.com/Xypher7/lora-metadata-viewer and it does a bit better job at displaying the info. Like, the training data at the top, handy if you're attempting to train your own loras, to see what other people use, and links to the civitai downloads, and orders the tags by frequency, so you can tell at a glance which tags are used the most.

    btw, if you want to train an SDXL/Pony lora, but don't have the cash to pay the site to do it (or just don't want to pay for it), if you curate, construct, and organize the dataset, and find an easy method to get it to me, I would be willing to do the training free of charge. I need more practice anyway, and it takes me so long to get around to cleaning up my own datasets that I have only attempted it a couple times. So it'd help me as well. I haven't learned how to train Flux though. I think maybe I could, but it'd take some crazy optimization of my setup, cuz I can barely do XL atm.
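    Metadata viewers like the ones mentioned above work because LoRA `.safetensors` files embed their training settings in the file header. A minimal standard-library sketch of reading that header; per the safetensors format, the file starts with an 8-byte little-endian length followed by a JSON header, with trainer settings stored under the `"__metadata__"` key:

```python
import json
import struct

def read_safetensors_metadata(path):
    """Return the __metadata__ dict from a .safetensors file.

    The format is an 8-byte little-endian header length, then a JSON
    header; training tools (e.g. kohya_ss) store their settings under
    the "__metadata__" key. Returns {} if no metadata is present.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```

    This is the same information the linked tools display, e.g. tag frequencies and network dimensions, just without the formatting.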

    LemonSparkleJan 15, 2025

    @Lazman Unless it's something just set up over email and paid in like cash somehow, the freelancing platforms report on payments now, and if it's not them, it's the payment processor (Stripe/PayPal/etc) that ends up reporting it to the government anyway, and I owe enough money already without inviting civil fines for trying any funny business with taxes...

    My computer was the coolest one I ever owned... over 10 years ago, anyway. It was pretty good back then; now it's still about as good as a cheap entry-level laptop (which I couldn't afford these days either), and I've had to learn how to rebuild it in the last few years with secondhand parts from eBay to keep it going (and it was not made to be easy to do that x-x;).

    I use Schnelly for a mix of reasons. It's faster and can give something pretty okay in 4 steps (even as lower-res images I can scale up, like that one), and for that I'm basically unlimited and fairly rapid. If you go up to normal-size images and let it go out to between 8 and 12 steps you can get nice results, even some stuff you might think would've been from Dev. I can get maybe 2 or 3 of those kinds of images per hour with Schnell (with normal sizes and 8 to 12 steps); with Dev it's less than one per hour. And especially for 2D animated stuff, I don't think most people could tell the difference between, say, a 9-step Schnell image and a 28-step Dev image. Also, I found during the textacular contest that, at least to me, Schnell seemed a little more compliant at placing text where I wanted when asked (though it does struggle with getting it clearly rendered when it's set at just 4 steps and low res).

    Then yeah, maybe one day if I'm not completely poor and could afford some used hardware to do it, I'd want to make my own model based on Schnell. Despite the limitations I kind of like it, and the license for it is basically open (Apache) without any of the restrictions that Dev comes with. I'm hoping people like AbstractPhilia will be able to crack the NSFW censoring that was burned into parts of it though, because I do make a bit of ecchi stuff from time to time, though my PG stuff is always way more popular lol. I think I'm probably better than average at things like aesthetics, and I've been figuring out more and more what to include or not in a dataset and how to tag or caption it in a way that makes it more functional and not broken (I've been reading a lot and pestering knowledgeable people for details ^^;), so I think I could make a good model if I worked at it a lot.

    Thanks for the training offer, though training is the one thing I'm probably okay with for now. I am free on here, but I do get some daily buzz (from likes etc.) and I've got enough saved for at least a few loras now, mostly because I've been slow working on them. I mean, the Flux loras cost a lot; the one Flux lora I've done so far just to try it out was like four thousand buzz or something, but the others for SDXL/Pony aren't too bad, I think like a quarter that maybe, and I've gotten some buzz tips and stuff too, which has been pretty cool of people. For loras on civitai it's mostly that I just need to get back to working on the training images really, you know, stop procrastinating and getting distracted by stuff like always.

    LazmanJan 21, 2025

    @LemonSparkle Oh, shit.. That's messed up.. The gov be shaking people down hardcore, even people stuck on low-pay gig work..

    "I owe enough money already without inviting civil fines for trying any funny business with taxes..."

    Let's be clear, you don't owe them a damn thing. They're using fear tactics to rob you of your hard-earned money, all while they continue to deregulate corporations, which makes your life worse. And if you live in America (you mighta told me, but I'm half asleep atm and don't remember), you don't even get the basic human right to free and accessible health care. So like, what are you paying taxes for? So they can build roads and power lines going out to new unaffordable housing that you'll never be able to afford to live in?

    Sorry, I don't mean any of that negativity at you. It just frustrates me to all hell, how these governments low-key ruin our lives, then attempt to brainwash us into feeling indebted to them for the privilege of being alive.

    Recently, I've been learning methods to turn 2D images into 3D/photoreal. It's pretty neat to do with furry characters, that end up being this perfect mix of real, but without looking strange/uncanny valley. Still trying to refine the process though, and figure out a formula that will work right the first time for every pic. Cuz it feels like I had it working right in my other workflow, but using the same settings in this WF with a diff image, and it's not working the same.

    It doesn't help that I've been experimenting with a bunch of different controlnets, inpainting, outpainting, ipadapter, etc. So, I basically need to train myself like a model until all the pieces fall into the correct places, so to speak. But yea, AI is fascinating stuff given that the way it works can, in so many ways, be directly compared to how the human brain works.

    XodimOct 13, 2024· 1 reaction
    CivitAI

    Great model!

    akosOct 16, 2024· 5 reactions
    CivitAI

    Does this unet model exist: flux_dev8.safetensor?

    aria1996Oct 16, 2024· 3 reactions
    CivitAI

    doesn't it work on automatic1111?

    101033Oct 16, 2024· 6 reactions

    Doesn't seem to be a priority for them, but moving onto Forge as a UI is largely painless (one or two old extensions are iffy). It works just like the usual front end but has the option to flip into using Flux, and it's also a lot faster for SD and XL. Deffo give it a go; going back would be as easy as copying your TI/Lora/Checkpoints over, though once you see how quick SD/XL go I doubt anyone would.

    TakodanOct 20, 2024

    Can't get it to work on A1111. It only produces gray images.

    MrCudouOct 21, 2024· 1 reaction

    @admiral_underpants This. I just made the conversion to FORGE (and was able to link forge to my A1111 folder so no need to copy anything over). It has the same UI with tons more features, and now I have all SD versions and Flux together on one app. 10/10 would recommend if Comfy UI is a bit confusing to you

    hesabimNov 3, 2024

    A1111 development has slowed down. I switched to Forge, which is basically the same, but is actively developed.

    ground_controlDec 19, 2024

    @MrCudou do the extensions work there? Also, can you say how you linked your Forge to the A1111 folder, please?

    Marshall66Oct 19, 2024· 9 reactions
    CivitAI

    Please help: how can I use the FLUX model on my local PC after I download it from CivitAI?

    tedbivOct 22, 2024

    Install Stability Matrix, then use it to install the Forge WebUI.

    CaioGustus1618Oct 19, 2024· 2 reactions
    CivitAI

    Can an RTX 4060 Ti with 8GB VRAM run the fp8 model?

    TheP3NGU1NOct 20, 2024· 1 reaction

    Flux can be run on a 6GB system... it's all a matter of how long you're willing to wait for an image.

    epsbmtpgi62481Oct 20, 2024· 1 reaction

    I thought about it for a long time, and finally I chose to buy the 4060 Ti 16G.

    Friends, I really recommend buying 16G, not 8G.

    Le_FourbeOct 20, 2024

    @epsbmtpgi62481 exactly,

    to me the 4060 Ti 8GB is such a waste. It even makes that ripoff of a 4060 Ti 16GB a good value, as long as you do AI occasionally. So much trouble avoided.

    epsbmtpgi62481Oct 21, 2024· 1 reaction

    @Le_Fourbe You are right. I considered future needs, so I bought 16G. But as long as the card is affordable, 16G is the first choice. It is worth it.

    Dan81maiOct 24, 2024

    I have a 4060, no Ti, and I can run it just fine, but for better performance I recommend getting the GGUF model; it's a lot faster and it's the same, just a quantized model.

    cremygoodness44335Oct 25, 2024· 3 reactions

    @epsbmtpgi62481 I've got an 8-year-old 1060 with a measly 6GB. I just ordered my 4060 Ti with 16GB; it's gonna be a whole new world for me. Thanks for the recommend.

    Le_FourbeOct 25, 2024

    @cremygoodness44335 Welcome to a new world!
    Don't forget to check the amount of VRAM you use on Task Manager regularly! Each time you look will tell you that you made the right decision X)

    epsbmtpgi62481Oct 25, 2024

    @cremygoodness44335 In games and AI applications, 4060TI performs very well. My old VGA card is also .

    BzzzDarklordOct 20, 2024· 9 reactions
    CivitAI

    I am very grateful to every contributor on civitai, but please, people, when you make loras for Flux, train them at high resolution only, or you're just killing it.

    SilmasOct 20, 2024· 1 reaction

    Many of the LoRAs for Flux are not very good indeed; even the most-liked here on CivitAI have their drawbacks, and training LoRAs with 20 images doesn't seem like a serious attempt to create something good.

    Le_FourbeOct 20, 2024· 2 reactions

    I back this up. Just because you can train at 512*512 doesn't mean it will make a good product!
    To me, these people just want to upload for the sake of attention.
    Storage out there is not free, but here we are with unreliable resources all over the place.

    LazmanOct 31, 2024· 1 reaction

    @Le_Fourbe I'd still take a crappy lora that's rare but does something I need it to. But I agree that the site could probably do with purging some loras. Like, I don't go looking for it myself, but I imagine idiots have probably uploaded 10'000 big boob loras. Those could be cleared out and probably free up half the servers. I mean, just anything redundant like that, that the models can do without a lora.

    2DisnotPhotoRealisticOct 20, 2024· 3 reactions
    CivitAI

    Previous SD models had a problem where the quality of photorealistic faces would drop when generating images of a person slightly turning their head from the back. I’ve noticed that the issue is still present in Flux, though to a lesser degree. This becomes even more noticeable when using LoRA. Other than that, it’s truly impressive. There's no need to merge models, and to be honest, you don't even need LoRA. Just with prompts alone, you can produce a wide range of high-quality images. (I only create photorealistic images, so my feedback is based on that.)

    LazmanOct 31, 2024

    Loras are mostly to get around the ridiculous limitations that prevent you from doing more interesting things with the model. Like, any form of violence or destruction (and by connection, most forms of action scenes in general).

    sevenof9247Oct 21, 2024· 2 reactions
    CivitAI

    Why is the pruned dev-fp8 16GB when the usual one is 11GB?!?

    Do I need ae, clip, and t5xxl?

    Dan81maiOct 24, 2024

    The full model is 24GB. If you want a smaller model, you can use the GGUF ones found here: https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main. The Q8 model is the same as this one; the other ones are for lower VRAM. The GGUF models need to be placed in models/unet, and if you are using ComfyUI you need to use the Unet Loader node instead of Checkpoint Loader.

    t5xxl: https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf/tree/main

    clip: https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/tree/main I use the smooth one but you can test and pick which one is best for you

    vae or ae from here: https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main
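A minimal sketch of the folder layout described above, assuming a default ComfyUI install at ./ComfyUI; the exact filenames are examples of what the linked repos ship, so substitute whichever quant you downloaded:

```shell
# Sketch only: the ComfyUI root path and the filenames are assumptions.
COMFY=./ComfyUI
mkdir -p "$COMFY/models/unet" "$COMFY/models/clip" "$COMFY/models/vae"

# GGUF checkpoints go in models/unet (loaded via the Unet Loader node,
# not the regular Checkpoint Loader)
mv flux1-dev-Q8_0.gguf "$COMFY/models/unet/" 2>/dev/null || true

# text encoders (t5xxl + clip_l) go in models/clip
mv t5xxl_fp8_e4m3fn.safetensors clip_l.safetensors "$COMFY/models/clip/" 2>/dev/null || true

# the VAE (ae.safetensors) goes in models/vae
mv ae.safetensors "$COMFY/models/vae/" 2>/dev/null || true
```

The `|| true` guards just keep the sketch from aborting if a file isn't in the current directory yet.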

    LazmanOct 31, 2024

    @Dan81mai I'm using ComfyUI, and for the life of me, I cannot get any Flux models from this site to work. I got the main Flux model from Hugging Face to work, but even trying to load a Flux model from this site into the same workflow is a NOPE! No matter what I do, when I hit queue, I get:

    Error(s) in loading state_dict for Flux:
    size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
    size mismatch for time_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
    size mismatch for time_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
    size mismatch for vector_in.in_layer.weight: copying a param with shape torch.Size([1179648, 1]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
    size mismatch for vector_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
    size mismatch for guidance_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
    size mismatch for guidance_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
    size mismatch for txt_in.weight: copying a param with shape torch.Size([6291456, 1]) from checkpoint, the shape in current model is torch.Size([3072, 4096]).

    I even tried loading the same workflow as one of the model uploaders by using one of their image samples, and still got that garbage spam output. The actual output is 100-ish lines or so, but it's mostly the same junk.

    Dan81maiOct 31, 2024

    @Lazman Are you referring to Flux Lora's? You have to match the Lora type with your Unet/checkpoint type. If you check a Lora model page, you will see on the right side Base Model, you have to pick the Lora's that are Flux based. SD, SDXL, Pony ones will not work with the Flux base model/unet/checkpoint. This is why you get mismatch error.

    LazmanOct 31, 2024

    @Dan81mai You're not wrong, but you're way off in my case, cuz I'm actually not an imbecile. I learned that stuff less than a week after getting into this. I was using a Flux model with it; my difficulties were in the lack of comprehensive tutorials when it comes to using Flux. The tutorials are all mostly like 'load flux, and it just works, NP!'. No one thinks to mention that for Flux dev you need several different nodes for the KSampler and model loader alone, and a couple/few unique nodes on top of that.

    Or anything about the NF4 loader node, which I only stumbled upon last night when doing random searches in the ComfyUI Manager, and otherwise haven't heard a thing about from anyone. If I'd known about the complexity of the workflow for the Flux dev model, or that the NF4 loader existed, I wouldn't have even been on here asking questions.

    AflexgNov 9, 2024

    @Dan81mai Could you explain to me, please, how to run it on Forge?

    I got the error "ValueError: Failed to recognize model type!" if I try to use any of these: t5xxl: https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf/tree/main.

    And I also got the error "AssertionError: You do not have CLIP state dict!" if I try to use this: clip: https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/tree/main

    Dan81maiNov 9, 2024

    @Aflexg I don't use Forge, but I found a guide on the official Forge GitHub about GGUF; see here: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050

    LazmanNov 10, 2024

    @Aflexg IMHO, use ComfyUI, it's better, so long as you get ComfyUI Manager or a couple good node packs from GitHub. The amount of flexibility in the Comfy workflows is insane. I've been using it for months now, and have still barely scratched the surface.

    PuhiMasterOct 24, 2024· 6 reactions
    CivitAI

    I really love this model

    cityh7202323Oct 25, 2024· 11 reactions
    CivitAI

    I got this error: AssertionError: You do not have CLIP state dict!

    cityh7202323Oct 25, 2024

    Can someone help me?

    AkalabethOct 25, 2024· 2 reactions

    @cityh7202323 You need these files to work with FLUX: ae.safetensors, clip_l.safetensors, t5xxl_fp16.safetensors (or t5xxl_fp8_e4m3fn.safetensors).

    https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main
    https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main

    What software do you use? If you use Forge, you can follow this guide https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050

    zhenyapestunOct 31, 2024

    @Akalabeth what to do next with those files?

    5432798Nov 1, 2024· 1 reaction

    @zhenyapestun 

    ae.safetensors goes into models/VAE

    t5xxl and clip_l go into models/text_encoder

    At least that's how it is with Forge; I'm not sure about Comfy or whatnot, but it should be similar.

    MagicB33Oct 26, 2024· 8 reactions
    CivitAI

    I keep getting stylized results, even when copying the prompts from the sample pictures on Civit. How do I achieve photorealistic results?

    azidahakaNov 1, 2024

    If you find a way, let me know too >_> It looks like it's great, but it can only make paintings and anime for me?

    djbrown79575Nov 14, 2024

    Sometimes specifying the type of camera and lens will yield more realistic results. Try adding this to your prompt:

    Canon EOS 5D Mark IV, EF 24-70mm f/2.8L II USM

    azidahakaNov 17, 2024

    @djbrown79575 In the end the issue was having the t5xxl, the VAE, the clip, and the other one active... these are MANDATORY for realistic output.

    Eastern_Layer_3898262Oct 26, 2024· 4 reactions
    CivitAI

    How to implement

    XigossOct 27, 2024· 2 reactions
    CivitAI

    I work in comfyui.

    My file flux_dev.safetensor is in the unet folder of my models... but I can't get it to load in a UnetLoader.

    Please, any idea ?

    TheP3NGU1NOct 27, 2024

    First thing to always check: update comfy/nodes

    Dan81maiNov 3, 2024

    Yes, the Unet loader is only for GGUF versions of Flux, so if you want to use it, get a GGUF version of Flux.

    BibolozOct 28, 2024· 8 reactions
    CivitAI

    The best model on CivitAi. Costs a ton of buzz but it's worth it.

    The_Apprentice_Nov 9, 2024

    Run it on Stability Matrix, then it costs zero.

    emiliooh69Oct 28, 2024· 2 reactions
    CivitAI

    Can I train it on 8GB of VRAM?

    Easy_Nov 1, 2024· 1 reaction

    You need some crazy computing power to train, so I would say no. 12GB is probably the lower bound, coming from someone with very little experience.

    Dan81maiNov 3, 2024· 4 reactions

    You can train on 8GB VRAM. I got a 4060 and I trained my lora with it. 10 epochs with ~ 2200 steps took about 6h

    knfelOct 28, 2024· 2 reactions
    CivitAI

    Someone help me. I want to use Flux; I have a 12GB 3060 and an AMD Ryzen 5 Pro 4650G processor. Is it possible to run Flux with that? If possible, I could use some guidance. Thanks for the help, I'm just starting out in this world.

    fromnovelaiOct 28, 2024

    Yes, easily. I have 8 gb 4060 and it works perfectly

    kantoOct 29, 2024

    use gguf q4 version with comfyui gguf loader
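For the GGUF loader mentioned above you need a custom node pack such as city96's ComfyUI-GGUF; a hedged install sketch, assuming ComfyUI lives in ./ComfyUI (the clone step needs network access):

```shell
# Sketch only: the ComfyUI path is an assumption -- adjust to your install.
mkdir -p ComfyUI/custom_nodes

# the node pack that provides the GGUF unet loader node (needs network)
git clone https://github.com/city96/ComfyUI-GGUF ComfyUI/custom_nodes/ComfyUI-GGUF || true

# its Python dependency; then restart ComfyUI and swap the regular
# checkpoint loader for the GGUF unet loader in your workflow
pip install --quiet gguf || true
```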

    TheP3NGU1NOct 30, 2024

    Yes, you can run it. You're not going to win any speed races, but it will work. If you want to speed things up with a very small change in quality, an fp8 model or a GGUF model is the next option.

    Dan81maiNov 3, 2024

    If you have 12GB VRAM, you can even use Q8. I have a 4060 with 8GB VRAM and I use Q8; even though it's slower, I prefer it. You can get both versions and test which one you prefer.

    jazgalaxy581Nov 14, 2024· 2 reactions

    My advice to you getting started is to do what I did: do not ask any questions here. Find an AI chatbot that you are comfortable with, like ChatGPT or Copilot (I like Copilot for this), and ask ALL of your questions there. If you don't understand a term, ask it. If you try something and get an error code, copy it and paste it into the chatbot. It will tell you EXACTLY what you need to do, and you won't be annoying people here by constantly asking questions and wanting people to walk you through it. (*No idea why someone "thumbs-downed" me when I literally just gave instructions on how to have "someone" "sit with you" and walk you through how to get it up and running step by step...)

    azidahakaNov 1, 2024· 3 reactions
    CivitAI

    For me it always generates anime-styled images, no matter the prompt asking for realism, photo, etc.

    condzero1950Nov 1, 2024

    I tend to find that it doesn't always measure up to your expectations when trying to get the photorealism in check for many prompts. It clearly is a distilled version of their Pro.

    ZootAllures9111Nov 1, 2024· 1 reaction

    The phrase "1girl" is extremely aggressively weighted towards anime in Flux; don't use it if you don't want anime.

    XIA_LuminatrixNov 1, 2024· 6 reactions

    It's way simpler to fix than you think: just use "The scene is depicted in artstyleyouwant".
    Replace "artstyleyouwant" with the desired art style of your picture. Always add that line at the end of the prompt.

    Some examples:
    - The scene is depicted in realistic style
    - The scene is depicted in professional photography style
    - The scene is depicted in flat art anime style
    etc...

    ACC888Nov 5, 2024· 2 reactions

    Do not use 1girl or any danbooru tag

    Fisherman_BNov 6, 2024· 5 reactions
    CivitAI

    sorry, Flux noob here - i see the Dev model and Schnell model for download, and "training data" for Pro. What exactly do i do with the "training data"?

    AkalabethNov 7, 2024

    This is not training data; it's just an image inside the archive. I suspect it is just a placeholder, because you can't download Pro; it is not an open-source model.

    Dan81maiNov 10, 2024· 1 reaction

    Schnell - is under the Apache 2.0 license; this provides the worst quality of images. I don't mean the images are bad, just bad compared to Dev and Pro, but anything you make with it can be commercialized (if you like what comes out of it; I don't, in fact I hate it)

    Dev - freely available for your non-commercial and non-production use as set forth in the FLUX.1 [dev] Non-Commercial License, so you can create and share, but not sell, anything you make with it

    Pro - is the 'paid' version and can deliver higher-quality outputs, improved efficiency, and better alignment with user prompts, making it ideal for both artistic and commercial applications

    LemonSparkleNov 14, 2024

    @Dan81mai Schnell can do pretty good really if you let it go beyond the basic 4 steps; like, this is what Schnell can do with about double that: https://civitai.com/images/36030237

    Dan81maiNov 14, 2024· 1 reaction

    @LemonSparkle Can it do this with 24 steps? https://civitai.com/images/39974540 :)

    LemonSparkleNov 14, 2024

    @Dan81mai I think it probably could actually, but I've never needed to push it much beyond 12 steps. I just made this on my first attempt with 12 steps, I could probably get closer to yours if I messed with it for a little while.
    https://civitai.com/images/40046190

    Dan81maiNov 14, 2024

    @LemonSparkle It does look better than before, and I have nothing against Schnell, but I can't deny that images with Dev look better, and I'm not looking to commercialize my images; that is why I stick with Dev. If there's ever a need to switch to Schnell, I'll see at that time.

    LemonSparkleNov 14, 2024

    @Dan81mai If I can ever afford a halfway decent computer, I wouldn't mind training it as a base model. I think if I stuffed a bunch of images and captions into it, it might get pretty good, and maybe still be faster than Dev.

    Dan81maiNov 14, 2024· 2 reactions

    @LemonSparkle I'm on 8GB VRAM and using the Q8 GGUF dev version, and it's slowish, but I'm OK with it (like ~2m20s at 24 steps with a basic sampler). There are also 8-step dev versions out there which are fast. I mean, Schnell is great with anime/cartoonish styles but not that great with the semi-realistic style that I enjoy. I'm doing this as a hobby and I'm quite particular about the style I like. Schnell would probably be great for pixel art too.

    DukeNukem47Nov 7, 2024· 6 reactions
    CivitAI

    Anyone know wtf happened? I just updated Forge WebUI to the newest build and suddenly everything looks dogshit, and I can't replicate old images.

    Imc00lNov 9, 2024

    This has been the same for me for a few weeks now. Funnily enough, I was coming here to download it because I usually install the model from Hugging Face, seeing if it made a difference.

    KelevraQuakensteinNov 10, 2024

    This is why I have Forge installed on two separate drives on my PC. Have you tried downgrading Forge to the previous version?

    Lorelei_VeineNov 10, 2024

    I have the same problem.

    BeefInjectorNov 11, 2024· 2 reactions
    CivitAI

    So is flux censored like SDXL in the sense of not being able to draw female nudity? In the case of SDXL, people have had to retrain the official model to support creating such NSFW images.

    jazgalaxy581Nov 14, 2024

    Yes. And you are going to have to expect that to be the case. "Serious" projects will not include NSFW, as it hinders their ability to generate images without triggering NSFW content.

    TheP3NGU1NNov 16, 2024

    With a strong prompt you'll get breast and butt images, but you will never get a nude crotch photo that doesn't look like a featureless Barbie doll crotch.

    BUT...

    That's where loras come into play. There are many out there now that, with a little play, give you full realistic nudity.

    mrgy2130962Dec 10, 2024· 3 reactions

    It's ironic that an industry, AI, that is so reliant upon male energies, in the form of both developers and clients, is simultaneously hostile to the preferences, needs, and desires of these very same males.

    jazgalaxy581Dec 12, 2024

    @mrgy2130962 Not really. What most of the truly serious people in this space want is a monetary reward for their effort and there are really only two ways to get there, respectability and outright pornography. And pornography is hard to actually monetize.

    Marshall66Nov 13, 2024· 3 reactions
    CivitAI

    What is the difference between the Flux pruned model fp32 and the pruned model fp8?

    LemonSparkleNov 14, 2024· 1 reaction

    Mostly size. The bigger models have slightly more accurate values stored but are bigger and will use more video memory; the fp8 is smaller and slightly less accurate and will use less memory (but it's still not that big a drop in quality). @tingtingin explains it pretty well starting like 2 or 3 minutes into this youtube video: /watch?v=UyNJ-UFY-5k
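The size difference is mostly arithmetic: bytes per weight times parameter count. A back-of-the-envelope sketch, assuming the stated 12 billion parameters (real files differ a bit due to metadata and any pruned or bundled components):

```python
# Rough file-size estimate for a 12B-parameter model at each precision.
PARAMS = 12e9  # FLUX.1 [dev] parameter count

for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("fp8", 1)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.1f} GiB")
# fp32: ~44.7 GiB, fp16: ~22.4 GiB, fp8: ~11.2 GiB
```

Which lines up with the ~22GB and ~11GB files people mention elsewhere in this thread.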

    ZeusMRNov 14, 2024· 10 reactions
    CivitAI

    All I get is noise when I use Stable Diffusion and this model. It's blue noise, and then everything just goes grey.

    joequick616Nov 14, 2024· 4 reactions

    I'm having the exact same problem right now. Worked yesterday. Now this. Not sure what has changed.

    boonnyb689Nov 14, 2024· 2 reactions

    By Stable Diffusion do you mean A1111? Most people have moved over to Forge now as its replacement.
    If that's not the problem, make sure the sampler is Euler or Forge realistic. I like beta the most as the scheduler.

    ZeusMRNov 14, 2024

    @boonnyb689 Yep, Stable Diffusion (original A1111) is just better for how I work with AI, but I understand if they made it obsolete on A1111 in order to work on improving its Forge counterpart. It's a shame tho, I might have to switch to Forge since it's the new king on the net.

    boonnyb689Nov 14, 2024· 4 reactions

    @ZeusMR Forge looks and works just like A1111 except it's faster and works with Flux

    ZeusMRNov 14, 2024· 5 reactions

    @boonnyb689 Yep, thanks for the reply, I wouldn't have figured it out by myself. I got it running on Forge after hours of installing, setting up and learning how to use text encoders, clips and vaes. I'll leave the comment up in case anyone has the same issues in the future

    hellsbainNov 17, 2024

    @ZeusMR thanks for leaving it I was having the same problem XD

    jb8892Nov 18, 2024

    @boonnyb689 Can you tell me the easiest way to transfer/upgrade to Forge, since I've been using Auto1111 for a while now?

    What all do I need to send to a folder to hold like my checkpoints/loras for when I get forge up and running?

    Is it easier to just completely uninstall Python and everything related to SD/Auto1111? The reason I'm asking is that I don't want a flood of stuff sitting around if I make the transition, and I haven't seen any real answers on how to do this from existing Auto1111 users.

    boonnyb689Nov 18, 2024· 4 reactions

    @jb8892 There's nothing to upgrade; just follow the instructions here: https://github.com/lllyasviel/stable-diffusion-webui-forge. Once it's done, just transfer your models over to the same folders in Forge from A1111 (it has the same folder structure as A1111), and just delete A1111 if you want once it's done.
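The transfer step described above can be sketched like this; both folder names are assumptions based on the default A1111/Forge layouts (Forge mirrors A1111's models/ structure):

```shell
# Sketch only: adjust both paths to your actual installs.
A1111=stable-diffusion-webui
FORGE=stable-diffusion-webui-forge

# Forge uses the same models/ layout as A1111, so a per-folder copy works.
for sub in Stable-diffusion Lora VAE; do
    mkdir -p "$FORGE/models/$sub"
    # -n: copy without overwriting anything already in Forge
    cp -rn "$A1111/models/$sub/." "$FORGE/models/$sub/" 2>/dev/null || true
done
```

Symlinking the folders instead (as one commenter here did) also works and avoids duplicating large checkpoints.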

    jb8892Nov 18, 2024

    @boonnyb689 sweet, thank you. Glad to hear I'm not going to end up having a total mess of files to clean up. I'm slowly but surely trying to gear up for trying out Flux 1 Dev.

    My setup is an X670 EATX motherboard, Ryzen 9 7950X3D with EXPO enabled, 64GB RAM, 4080 Super OC with 16GB VRAM, and I'm still looking for answers on which Flux dev version I should try that doesn't crash my system when I try to use it.

    boonnyb689Nov 18, 2024

    @jb8892 I use fp8 even on a 3090, as fp16 takes up too many resources and there isn't much of a difference.

    jb8892Nov 20, 2024

    @boonnyb689 Just wanted to follow up and let you know I got everything up and running perfectly per your advice. I've bashed Flux for a long time now, but I'm honestly impressed with the simplicity factor that quickly came into play when I loaded in some of my old XL/1.5 PNGs to try out. Had some very amazing results, such that I've never been able to achieve, so I'm happy.

    The one and only issue I experienced was when I tried to use Hires fix as a first-pass booster: it noises out, resulting in a black image, but with no warning or error message. I tried adjusting every parameter and mixing things up, but the same thing kept happening over and over. It works until the very last second, then blacks out. Something I'd love to figure out, because I've always had amazing results with first-passing my txt2img with Hires fix before sending it to img2img for refining.

    boonnyb689Nov 20, 2024

    @jb8892 That's awesome! I feel like I've seen that issue before but I can't remember (I don't use hires fix much); maybe just do the git pull and restart, and make sure the sampler is Euler or Forge realistic.
    I usually use DAT 2 or latent to upscale, if that's the issue, but it's all magic to me, so can't help you there.

    jb8892Nov 18, 2024· 4 reactions
    CivitAI

    Serious question, my PC is pretty stout; here are my specs:

    X670 EATX motherboard, Ryzen 9 7950X3D processor, 64GB 6000 RAM (EXPO enabled), 4080 Super OC 16GB VRAM, two 990 Pro 2TB M.2s.

    Which of the Flux 1 dev models should I try out if I swap my entire setup to use it with Forge?

    I've mastered creating amazing stuff with A1111, but just want to do the same with Flux at its fullest ability, so what should I go for, and what's a rough guess on what my setup can handle?

    SilmasNov 18, 2024

    Use ComfyUI or Forge (Forge is much easier to handle). Your hardware should be able to do it, but smaller Flux models would make it easier for you. (Forge already delivers a smaller Flux model; just use it. :) )

    TheP3NGU1NNov 18, 2024

    Flux can work on as little as 6GB VRAM, it just won't be fast to generate, so you're good. Use Comfy or Forge; if you want to increase speed with a small loss in quality, use an fp8 version or a quantized version.

    Have as much as you can loaded into normal RAM (which Comfy imo is best at). That way less is using up your VRAM.

    000x00000000105Nov 23, 2024· 3 reactions

    Not enough. Flux wants you to use an RTX 9000 series; unfortunately it is not available on the market.

    TheP3NGU1NNov 23, 2024

    @000x00000000105 ... it's totally enough.

    gpaulelliNov 23, 2024

    lol, meanwhile I'm running Flux dev on my aging RTX 3060 12GB

    onimessatsuNov 20, 2024· 6 reactions
    CivitAI

    Hello! I have two issues with Flux on Forge Stable Diffusion. When I try to use Flux to generate images, I get "Error: connection errored out" or "OutOfMemoryError: CUDA out of memory. Tried to allocate 160.00 MiB. GPU". I have an Nvidia RTX 3060 with 12GB of VRAM and 16GB of RAM. Does anyone know where the problem comes from? Sorry if I make mistakes, I come from oui oui baguette land.

    condzero1950Nov 20, 2024· 1 reaction

    Both errors are due to running out of available GPU VRAM.

    onimessatsuNov 21, 2024

    @condzero1950 And is there a solution or something so I can use it?

    minimishkaNov 21, 2024· 1 reaction

    @onimessatsu You can increase the Windows swap file, add RAM, upgrade the GPU, or use a different model.

    onimessatsuNov 22, 2024

    @minimishka ok thx for the advice

    Marshall66Nov 21, 2024· 4 reactions
    CivitAI

    Is the Flux pruned model fp32 (22.15 GB) a dev model or a Schnell model?

    Checkpoint
    Flux.1 D

    Details

    Downloads
    15,607
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/4/2024
    Updated
    5/14/2026
    Deleted
    -