Like the work I do and want to say thanks? Buy me a coffee or Support me on Patreon for exclusive early access to my models and more!
For version-specific notes and settings, look under "About this version".
What a time to be alive! I created this model by block-merging my low-weight LoRA trainings over multiple passes (very similar to how I created my SDXL series models) into the base flux.d model. The result is a model that can do basic NSFW generations, including proper female anatomy and concepts. Total training was about 5k images spread across SFW cinematic stills, art photography, LAION art-pop, and roughly 1,500 explicit and artful nudes (about 80% photography, 20% AI/illustrative). The model responds well to prompts just like base Flux does.

This is a WIP and only a V1; I will be tuning this model further as I identify weaknesses in the output and methods to improve the quality. This model was built on top of the flux.1_dev_8x8_e4m3fn-marduk191 version, so it is FP8 quality, though I have included the full FP16 clip-l and T5 models, as I don't like the quality drop-off with the FP8 T5 encoder. If there is demand for an fp8/fp8 version, I can make one available.
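For the curious, here is a minimal sketch of what one of those merge passes looks like at the state-dict level. It assumes the LoRA stores plain "<key>.lora_up.weight" / "<key>.lora_down.weight" pairs for 2D (linear) weights; real trainers use varying key schemes, so the file names, key layout, and 0.25 strength are placeholders for illustration, not my exact pipeline.

```python
# Minimal single-pass sketch of a block-level LoRA merge (W' = W + s * B @ A).
# File names, key scheme, and merge strength are placeholders / assumptions.
import torch
from safetensors.torch import load_file, save_file

base = load_file("flux1-dev.safetensors")            # hypothetical paths
lora = load_file("nude_training_pass01.safetensors")
scale = 0.25                                         # low merge weight per pass
blocks = ("double_blocks.", "single_blocks.")        # only touch the transformer blocks

for key in base:
    if not any(b in key for b in blocks):
        continue
    up_k, down_k = f"{key}.lora_up.weight", f"{key}.lora_down.weight"
    if up_k in lora and down_k in lora:
        delta = scale * (lora[up_k].float() @ lora[down_k].float())
        base[key] = (base[key].float() + delta).to(base[key].dtype)

save_file(base, "flux1-dev_merged_pass01.safetensors")
```

Repeating that kind of pass at low strength with several LoRAs is what produces the layered result described above.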
Comments (98)
HyFU Release Notes v1.0.0
This is HyFU, short for "Hybrid Flux Unchained": a 5-way custom block-by-block combination of base Flux-dev, base Flux-schnell, Flux Unchained v1, Flux Unchained V2 (coming soon), and SchnFU v1, and it actively uses parts and pieces (via weights and biases) of all 5 models. The result is a model that is high quality, with the coherence and artistic flair of Dev (depending on the sampler you use, more on that in a sec), near the speed of Schnell, and my own custom fine-tuning style that you've already gotten to know through either my FU/SchnFU models or my previous XL/1.5 model series releases (NightVision, DynaVision, ProtoVision, Cinevision).
The best results I've found so far:
If you want more Schnell styled output:
Sampler: euler
cfg: 1.0
steps: 8
Scheduler: normal, simple, or beta tend to be most reliable. You can get more detail with the AYS and AYS_30 schedulers, but I've noticed text tends to take a hit with those.
If you want more Dev styled output:
Sampler: lcm
cfg: 0.8
steps: 8
Scheduler: ays_30+ and beta give the most detailed and creative outputs, though you can use normal or simple if you have to. I tend to see more errors with this sampler using normal and simple, and they both tend to have less detail, but it still captures the dev style better than euler does.
A few other testing observations worth sharing with this model that make it a bit different from either dev or schnell. First, guidance does absolutely nothing with this model. From a guidance of 1.0 to a guidance of 50, you're going to get the same output.
This model is very, very sensitive to CFG changes. For the LCM sampler I use 0.8, but I've found it's stable in the 0.7 to 1.0 range. Past 1 it falls apart fast (and doubles in generation time too). With the other samplers, going past 1 makes stuff get blurry. You may have some luck pushing CFG as high as maybe 1.2, but past that things start to decohere and lose quality fast.
I use a max_shift of 1.0 and a base_shift of 0.2; however, I have testers running it in the 1.5/0.5 range and getting good results as well. Play with it and see if you get better output.
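For quick reference, here are the presets above collected in one place. The field names mirror ComfyUI's KSampler and ModelSamplingFlux inputs, so treat the exact labels as an assumption if your UI names them differently.

```python
# Recommended presets from the notes above (values straight from this post).
SCHNELL_STYLE  = {"sampler": "euler", "scheduler": "beta", "steps": 8, "cfg": 1.0}
DEV_STYLE      = {"sampler": "lcm",   "scheduler": "beta", "steps": 8, "cfg": 0.8}  # stable ~0.7-1.0
MODEL_SAMPLING = {"max_shift": 1.0, "base_shift": 0.2}   # some testers prefer 1.5 / 0.5
# The Flux "guidance" conditioning value has no effect on this model; only CFG matters.
```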
Finally, this model falls under the non-commercial Flux.Dev license. Even though it is a hybrid of both schnell and Dev, the more restrictive dev licensing takes precedence.
If you want guidance to work, you need dev's double blocks: https://imgur.com/a/Igb8608
The first few single blocks from dev, like 2-8, can make it more "dev"-like (more detail), but 4-step images start getting worse and worse and you keep having to up the steps. There has to be some golden ratio for those single blocks, but I haven't found it yet. Putting schnell stuff into the double blocks worsens text and breaks guidance. From playing with this, dev also seems to have a lot more bokeh.
Also, for anyone else, don't merge on quantized weights. Do it on CPU if you have to.
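A rough sketch of the kind of block graft being described, done on CPU with unquantized weights; the file names and the "double_blocks." key prefix are assumptions, not an exact recipe.

```python
# Graft Flux-dev's double blocks into a merged checkpoint to restore guidance.
# Do this on CPU with unquantized weights, per the note above.
import torch
from safetensors.torch import load_file, save_file

merged = load_file("hybrid_merge_wip.safetensors", device="cpu")   # hypothetical paths
dev    = load_file("flux1-dev.safetensors", device="cpu")

for key, tensor in dev.items():
    if "double_blocks." in key:          # dev's double blocks are what guidance needs
        merged[key] = tensor.clone()

save_file(merged, "hybrid_merge_guidance_fix.safetensors")
```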
@Gore_Man hey, neat breakdown, and fits with my own observations (the guidance thing I didn't realize until after the model was already finished, now I know why, thx 👍🏻).
@socalguitarist I'm trying to see if I can just extract the double blocks and make them into a lora. Maybe it will work. Then it might be possible to put it back without fuss.
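One generic way to attempt that: take the per-weight difference for the double blocks against the donor checkpoint and factor it with a truncated SVD so it can be re-applied as a LoRA. This is a hedged sketch of that common approach, not anyone's actual script; the file names, key scheme, and rank are assumptions.

```python
# Extract dev's double blocks as a LoRA: low-rank SVD of (donor - merged) per 2D weight.
import torch
from safetensors.torch import load_file, save_file

merged = load_file("hyfu-v1.safetensors")        # hypothetical paths
dev    = load_file("flux1-dev.safetensors")
rank, out = 64, {}

for key in merged:
    if "double_blocks." not in key or merged[key].ndim != 2:
        continue
    delta = (dev[key] - merged[key]).float()             # what the LoRA should add back
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    out[f"{key}.lora_up.weight"]   = (U[:, :rank] * S[:rank]).contiguous()
    out[f"{key}.lora_down.weight"] = Vh[:rank].contiguous()

save_file(out, "dev_double_blocks_lora.safetensors")
```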
Hey, what VAE do you recommend for this?
@Suppressor The standard Flux VAE; I don't know if there are any others out there yet.
@socalguitarist Ah, OK. Somehow I picked up a Schnell VAE and a Dev VAE.
Are there any special clips that need to be used with this?
I keep getting an error in SwarmUI:
2024-09-04 12:11:04.475 [Error] [BackendHandler] backend #0 failed to load model with error: ComfyUI execution error: The text encoders (CLIP) failed to load
2024-09-04 12:11:04.477 [Warning] [BackendHandler] backend #0 failed to load model Flux/Flux Unchained by SCG - HyFU-8-Step-Hybrid-v1-0.safetensors
All my other Flux models load fine?
please no fast step models ;)
unstable/inflexible, fewer schedulers, lora compatibility?!?
Give HyFU a try before you complain 😉 - it's neither Schnell nor Dev, it's something else. It's fast, looks good, works fine with loras, and outside of DEIS (which gets a bit blurry) seems to work fine with all the same samplers. It's got a lower error rate than Schnell, and it's got the stylistic bits of FU that folks like. (and yeah, it has boobs too)
Maybe I'll give it a try ... ;)
What I tested (not on your models): one step less and all images went anime ... 2 steps more and all overexposed ...
loras trained on civitai much better with
Downloaded the model and tried it with the standard ComfyUI workflow; either I'm doing something wrong or the model produces cartoon graphics. There are more realistic models. But in any case the author deserves respect for his efforts!
Play with the sampler and scheduler settings; I'll freely admit I haven't really found the very best settings for it yet. I've found, at least for some prompts, I can get more realism using dpm_2/simple or lcm/beta (though that tends to still be more "artsy").
@socalguitarist Okay, thanks, I'll give it a try!
Quick Update - Doing more HyFU testing with folks on my Discord channel and we've found you can get really great results with LCM sampler and BETA scheduler at 1.0 CFG for only 4 steps! Try it yourself!
Definitely does not work for me. Using LCM always results in a weird tiled white block image. This does not happen with any other sampler I've tried. Using this model, the official flux vae, T5xxl_fp8 (also tried fp16), and clip_l. I'm using A1111 on a 4080, and HyFU-8-Step-Hybrid-v1.0.
Only DPM2 seems to work for me. Another issue I've noticed: I can generate an image in 30-40 seconds, but the second I add a FLUX-compatible LoRA, it shoots up to around 3-5 minutes, even at 4-8 steps, and completely hardlocks my computer during that time. It also doesn't seem to take the LoRA into consideration in the generation, as I just waited for my computer to stop hardlocking for almost 7 minutes only to find out it generated a generic person and not the person from the LoRA, even with all the trigger words.
Any ideas? I really can't figure out what's going wrong here or what's at fault.
@GoblinC Blech yuck, that's no good. Pop on by the discord, we can try some different settings.
@socalguitarist With the HyFU model I can get a good 1024x1600 image in 2 steps with LCM/Beta. The tones are a little bit soft but it comes out great after upscaling 1.25x.
Same issue here that GoblinC is experiencing using these settings in Forge. I also tried using this with one of my loras using Euler/Simple and the output looks nothing like the character. I'll keep playing around with settings.
I tried it but it just gave me a gray static image as a result. Still cool using it with 8 steps though! It's awesome stuff.
same issue as @GoblinC
@GoblinC quite a bit late to the party here, but A1111 and Forge do token weight normalization that may be steering your Flux generations poorly. Try turning it off in the A1111 settings and see if that makes any difference. Although hopefully by now you've found a fix or workaround.
It doesn't work in Forge UI: "You do not have CLIP state dict!"
It works, you just need some things in your VAE folder: ae.safetensors from https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main, plus clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors from https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main. Put them all into your VAE folder, load the 3 of them as the VAE / text encoders, and it should work.
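Roughly where those files end up for Forge's VAE / Text Encoder selector (the exact root path depends on your install, so treat this layout as an assumption):

```
stable-diffusion-webui-forge/
└── models/
    └── VAE/
        ├── ae.safetensors
        ├── clip_l.safetensors
        └── t5xxl_fp8_e4m3fn.safetensors
```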
@Nilheiven perfect. thanks
@Nilheiven when I try to download the ae.safetensors it won't let me access it, so I tried to find it somewhere else: https://civitai.com/models/619150/flux1-dev-vae but I see a difference in file size. Not sure what I can do best now. Can you point me in the right direction please?
@yessy_boem Still seems to work fine for me; if you're having trouble it might be worth trying something like "wget https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors?download=true" then removing the "?download=true" at the end of it.
I tried this, and I got past the dictionary errors from these comments, but now I get: ValueError: Failed to recognize model type!
For those with access errors downloading ae.safetensors: https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors worked for me.
For some reason, in Forge, my output at the end becomes just a solid gray PNG. I have no idea why.
Nevermind, I was using a wrong VAE
Hi! Can you please point me to the correct ComfyUI workflow to use the HyFU model?
This is what I'm getting if I use LCM: "AttributeError: 'LCMCompVisDenoiser' object has no attribute 'predictor'". I'm using Forge UI, for context.
Does it work with Automatic1111, or only ComfyUI?
Better to use it in Forge, the improved A1111.
ComfyUI is worlds better and actually much easier to use once you get used to the modular setup. You can take someone's photo here and load it into Comfy and it'll give you their workflow, for example.
Use the ComfyUI Manager and it'll make things so, so, so much easier. I feel like ComfyUI and the manager should be packaged together at this point.
It's also A LOT faster. I'm afraid Flux would crawl using Automatic. Another thing ComfyUI does better by a large margin is memory management and overall system resource allocation. I never had any system hitches or slowdowns of any kind using Comfy, and that's when rendering things much larger and using a buttload of models and nodes (controlnet, InstantID, etc.) at the same time. In Automatic1111 this isn't even an option; your computer would freeze up. Also, you can add an inpainting step to the workflow and not have to manually move things around.
It's just better. Forge is another alternative to A1111 if you're still on the fence about Comfy.
God thinks there should be an FP16 version of this model. Then again I know that not all of us have unlimited power but I definitely do.
based
Is there a way to remove the background-blur / depth-of-field bokeh effect from photos without negative prompts? I find the effect annoying in base flux1, and the current methods for negative prompts slow down generation by a huge amount.
No, I've been trying to tame it. Avoid using high-end photography terms (especially terms like "DOF" and "bokeh") as those tend to spike it. 'Cell phone camera, flat focus, wide angle' - you can try those, they may help.
There's an anti-blur lora
Beginner Question: I guess none of these model versions work for 8 GB VRAM users in Forge at the moment, correct? Had no success so far.
Yes! You can use NF4 on Forge or the GGUF versions on Civitai if you want speed.
I run it on 6gb and I've seen people generating on 3gb
@unmystic I want to use the 4/8 step models from this model page.
For example: I tried to run "fluxUnchainedBySCG_hyfu8StepHybridV10" in Forge in "bnb-nf4" mode and I get this error: "AssertionError: You do not have CLIP state dict! You do not have CLIP state dict!".
I tried to assign "clip_l" in the text_encoder field, but then I always get a full bluescreen and the PC needs to reboot...
I also tested other models/settings, no luck yet. Any help on how to use the 4/8 Step models would be welcome!
Edit: Ok I had to use ae.safetensors, clip_l.safetensors and t5xxl_fp16.safetensors together. Now it worked :)
Does the hybrid 8-step render less body warping than the schnell 4-step version does? That one really can't handle poses beyond portraits.
Do you have any workflow for this? Thanks
Nf4 version?
Please create an FP16 version. Thank you
Please add GGUF versions. Thank you
RTX 4070 12GB, 24GB DDR RAM PC: it takes over 5 minutes to complete. It should be optimized more. Thank you
How's the NSFW for males? All flux models so far seem heavily female-focused, which I'm not trying to tell people to change, but it would be nice to find a model that is trained on both.
Nice Model ... but ...
Anyone know why this model doesn't seem to use my GPU? Checking task manager, my CUDA usage is only 20% (as opposed to 90% for other models I use) - which seems to slow it right down to a crawl. 4 steps are great - but these 4 steps are very slow steps - 4 steps is taking me over 1 minute. I have a 3060ti (8gb) and 32 gb RAM.
Any advice would be helpful
"If there is interest, I can release another 'full' version that packages all of that into a single safetensor, just let me know here in comments if you need it." Yes please!!
Flux Unchained V2 coming soon?
I prefer quality over speed - which one should I choose?
I have 8GB vram, but it doesn't matter. I need quality, accuracy and creativity. Also, I don't really do NSFW and sexual stuff. Which one would you recommend? :)
If you have a 30/40-series graphics card, then the basic FLUX 1d NF4 model is perfect for you.
Why is the Hybrid Base Model set to Flux.D when it's Flux.S?
Is this made in flux.d? I get worse quality with this model than the standard flux1d model.
yes
The last Flux version works great in Ruined Fooocus 1.56! Thanks a lot
Hi, where did you find Ruined Fooocus 1.56?
In which folder should I put the file when using comfyUI? Noob alert btw :)
The unet folder (ComfyUI/models/unet).
Anyone able to tell me what clip and t5 model to use? I keep getting an error loading CLIP and can only think I have an incorrect model.
I use "fluxUnchainedBySCG_hyfu8StepHybridV10.safetensors" (first one) only for model, with "t5xxl_fp8_e4m3fn.safetensors" for t5 and "ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors" for clipl. Euler-simple (beta is a bit better) and 8 step works great.
The biggest problem with the model is I can't control the height, breast size and weight of the subject.
The prompt: Starving woman 18 yo with small breasts surviving on a post-apocalyptic world.
The woman generated is too clean and her breasts are on the big, fat side...
This is a problem with all FLUX models, possibly due to the original dataset or training conditions of the original FLUX 1 dev model.
Are we getting a v2 anytime soon? This gives acceptable output only at 8 steps, and using my workflow you can see the results. Normal flux1.d vs this one: this takes 25% less time than 20 steps on normal flux1.d with my workflow, and it looks better at the same seed with fewer steps overall.
What are the differences between all the Flux models above... sorry if it sounds stupid
When you select the model, on the right side of the screen there is a box "About this version". There you can read what the selected model is about.
I get an error when I try to use the LCM sampler, anyone know why?
AttributeError: 'LCMCompVisDenoiser' object has no attribute 'predictor'
Have you considered uploading NF4 versions, or GGUF? (I would be happy to do the conversions and provide the files.)
Any use of this model makes my computer unusable, with frequent freezes until I kill python. I've never seen its like. I am assuming the problem is on my end, but it's the only model (of about 100 different ones) that causes this. Using InvokeAI with a mid/high-end configuration (RTX 4070 Super, Ryzen 5900X).
I'd recommend using one of the Q5 models of the same: https://civitai.com/models/662112?modelVersionId=748187. Not sure if it's memory-related or not, but I have 12 GB of VRAM as well, and whenever I try to run a non-quantized flux model it also tends to bring my computer to its knees; the quantized ones at least work for the most part for me.
Not working with ComfyUI and also Draw Things.
SchnFu (schnell) works fine in Draw Things. I've seen others use the dev version(s) too
The 8 step Hybrid works fine in ComfyUI, except for its occasional quirk of spitting out nearly black and white, grainy images. Don't let that put you off if you get one early, it might do that one time in twenty on a longer run. I seem to recall the original Flux Schnell doing that to me at times as well.
As for female anatomy, the breasts I see posted look OK, but the nipples etc look rather fake and too similar.
Fortunately the 8 step model works just fine with a lot of LoRAs (can't say I've tested them all of course) so this is a solvable problem. Another commenter claimed LoRAs don't work well with this model but I have not had any difficulties at all, other than the usual trial and error of getting the weights and order correct. I don't stack more than four LoRAs, and usually stick to three at most as they start to interact otherwise, but that is not a problem specific to this model. It's just normal behavior as best I can tell.
I'll get back to you regarding the 4 step model.
Can you tell me what tools were used for training?
Hi mate! Thanks for the AMAZING work!
Can you provide an "all in one" version with embedded clip and T5?
Trained on different people, genders and ages? Or just pretty young women?
Shut up....
It understands males as well, although you may still want to use a LoRA to force a particular look to male genitals. But if you're asking if it tries to put boobs on guys (as some models do), the answer is "almost no". It has happened once or twice out of hundreds of generations for me.
only pretty young women matter.
Sry, LoRAs don't work well; they are over-exposed and have no depth.
Use this one:
https://civitai.com/models/622579/flux1-dev-fp8?modelVersionId=695998
Really? I've been having no trouble with LoRAs, even three of them stacked seems to work perfectly reasonably, at least with the 8 step Hybrid.
Doesn't work, I keep getting this Error.
You do not have CLIP state dict!
Had the same error today when I was trying the model out. I solved it by using three files that I found by googling. No idea if I am doing it right, but I am using ae.safetensors, clip_l.safetensors and t5xxl_fp16.safetensors as the VAE.
@zznake where do you put ae.safetensors? I assume clip_l goes in the models/CLIP folder and t5xxl_fp16 goes in models/VAE?
@MetaGen ae.safetensors goes in the VAE folder. t5xxl and clip go into the text encoder folder.
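For ComfyUI specifically, the usual layout looks something like this (folder names from a stock install; newer builds also accept models/text_encoders for the encoders):

```
ComfyUI/models/
├── unet/    fluxUnchainedBySCG_hyfu8StepHybridV10.safetensors
├── vae/     ae.safetensors
└── clip/    clip_l.safetensors, t5xxl_fp16.safetensors
```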
What should I type in models.yaml in order to add this model to my FluxGym training models?
And do you have recommendations for steps and such? To create a character lora.
Hey, I'm new to this website, can I not use this model to generate an image right here with the website generator? I can't select the model. There's only one Flux model.
No one responded to this and it has been 9 months, however I will put the answer in case anyone is curious. CivitAI now only allows a limited number of models to be used for generation, so if this model isn't in the list, you can't generate from it in the website.
There is this new quantization technique called svdquant (nunchaku), which is smaller in size and very fast, about 3x faster. Any plans to release your unchained model in this svdquant format?