CivArchive

    Like the work I do and want to say thanks? Buy me a coffee or support me on Patreon for exclusive early access to my models and more!

    Join us on SCG-Playground where we have fun contests, discuss model and prompt creation and AI news, and share our art to our hearts' content in THE FLOOD! 💕💕💕


    For version-specific notes and settings, look under "About this version" --->

    What a time to be alive! I created this model by block-merging my low-weight LoRA trainings over multiple passes (very similar to how I created my SDXL series models) onto the base flux.d model. The result is a model that can do basic NSFW generations, including proper female anatomy and concepts. Total training was about 5k images spread across SFW cinematic stills, art photography, and LAION art-pop, plus roughly 1,500 explicit and artful nudes (about 80% photography, 20% AI/illustrative). The model responds to prompts just like base flux does. This is a WIP and only a V1; I will keep tuning this model as I identify weaknesses in the output and methods to improve quality. This model was built on top of the flux.1_dev_8x8_e4m3fn-marduk191 version, so it is fp8 quality, though I have included the full FP16 clip-l and T5 models, as I don't like the quality drop-off with the FP8 T5 encoder. If there is demand for an fp8/fp8 version, I can make one available.

    Comments (98)

    socalguitarist
    Author
    Aug 25, 2024 · 7 reactions
    CivitAI

    HyFU Release Notes v1.0.0

    This is HyFU, short for "Hybrid Flux Unchained": a 5-way custom block-by-block combination of base Flux-dev, base Flux-schnell, Flux Unchained v1, Flux Unchained V2 (coming soon), and SchnFU v1, which actively uses parts and pieces (via weights and biases) of all 5 models. The result is a high-quality model with the coherence and artistic flair of Dev (depending on the sampler you use; more on that in a sec) at near the speed of Schnell, with my own custom fine-tuning style that you guys already know from either my FU/SchnFU models or my previous XL/1.5 model series releases (NightVision, DynaVision, ProtoVision, Cinevision).
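    A block-by-block merge like the one described above can be sketched in miniature. This is a toy illustration, not the author's actual recipe: plain floats stand in for tensors, and the block names and weights are made up.

    ```python
    # Toy sketch of a block-by-block weighted merge. Real merges operate on
    # safetensors state dicts of tensors; plain floats stand in here, and the
    # block names and per-model weights below are hypothetical.

    def merge_blocks(sources, recipe):
        """sources: {model_name: {block_name: value}}
        recipe: {block_name: {model_name: weight}}, weights summing to 1.0."""
        return {
            block: sum(sources[model][block] * w for model, w in weights.items())
            for block, weights in recipe.items()
        }

    sources = {
        "dev":     {"double_block_0": 1.0, "single_block_0": 1.0},
        "schnell": {"double_block_0": 3.0, "single_block_0": 5.0},
    }
    recipe = {
        "double_block_0": {"dev": 0.75, "schnell": 0.25},  # lean on dev's doubles
        "single_block_0": {"dev": 0.25, "schnell": 0.75},  # lean on schnell's singles
    }
    print(merge_blocks(sources, recipe))
    # {'double_block_0': 1.5, 'single_block_0': 4.0}
    ```

    The point is only that each block gets its own mixing ratio, so different parts of the network can favor different source models.
    
    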

    The best results I've found so far:

    If you want more Schnell styled output:

    Sampler: euler

    cfg: 1.0

    steps: 8

    Scheduler: normal, simple, or beta tend to be most reliable. You can get more detail with the AYS and AYS_30 schedulers, but I've noticed text tends to take a hit with them.

    If you want more Dev styled output:

    Sampler: lcm

    cfg: 0.8

    steps: 8

    Scheduler: ays_30+ and beta give the most detailed and creative outputs, though you can use normal or simple if you have to. I tend to see more errors with this sampler on normal and simple, and both tend to have less detail, but it still captures the dev style better than euler does.
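    Collected in one place, the two presets above look like this (plain dicts for reference; the field names are illustrative, not tied to any particular UI):

    ```python
    # The two recommended presets from the notes above, as plain dicts.
    PRESETS = {
        "schnell_style": {"sampler": "euler", "cfg": 1.0, "steps": 8,
                          "scheduler": "normal"},  # "simple" or "beta" also reliable
        "dev_style":     {"sampler": "lcm", "cfg": 0.8, "steps": 8,
                          "scheduler": "beta"},    # or "ays_30+" for more detail
    }

    print(PRESETS["dev_style"])
    ```
    
    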

    A few other testing observations worth sharing, since this model behaves a bit differently than either dev or schnell. First, guidance does absolutely nothing with this model: from guidance of 1.0 to guidance of 50, you're going to get the same output.

    This model is very, very sensitive to CFG changes. For the LCM sampler I use 0.8, but I've found it's stable in the 0.7 to 1.0 range; past 1 it falls apart fast (and generation time doubles too). With the other samplers, going past 1 makes things blurry. You may have some luck pushing CFG as high as maybe 1.2, but past that things decohere and lose quality fast.

    I use a max_shift of 1.0 and a base_shift of 0.2; however, I have testers running it in the 1.5/0.5 range with good results as well. Play with it and see if you get better output.

    Finally, this model falls under the non-commercial Flux.Dev license. Even though it is a hybrid of both schnell and Dev, the more restrictive dev licensing takes precedence.

    Gore_Man · Aug 25, 2024 · 1 reaction

    If you want guidance to work you need dev's double blocks: https://imgur.com/a/Igb8608


    The first few single blocks from dev, like 2-8, can make it more "dev" like (more detail), but 4-step images start getting worse and worse and you keep having to up the steps. There has to be some golden ratio for those single blocks, but I haven't found it yet. Putting schnell stuff into the double blocks worsens text and breaks guidance. From playing with this, dev also seems to have a lot more bokeh.

    Also, for anyone else, don't merge on quantized weights. Do it on CPU if you have to.
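    A toy numeric illustration of why merging on quantized weights is a bad idea (a coarse rounding grid stands in for fp8 here; the numbers are made up):

    ```python
    # Round each value to a coarse grid (a stand-in for fp8). If the inputs are
    # quantized *before* the merge, their rounding errors get baked into the
    # result instead of rounding happening once at the end.

    def quantize(x, step=0.25):
        return round(x / step) * step

    a, b = 0.30, 0.60
    merged_full = (a + b) / 2                           # true merge: 0.45
    merged_pre_quant = (quantize(a) + quantize(b)) / 2  # merge of rounded inputs

    print(merged_full, merged_pre_quant)
    # the pre-quantized merge lands on 0.375, off by 0.075 from the true 0.45
    ```

    Merging in full precision on CPU and quantizing once at the end keeps the error to a single rounding step.
    
    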

    socalguitarist
    Author
    Aug 25, 2024 · 1 reaction

    @Gore_Man hey, neat breakdown, and it fits with my own observations (the guidance thing I didn't realize until after the model was already finished; now I know why, thx 👍🏻).

    Gore_Man · Aug 25, 2024 · 1 reaction

    @socalguitarist I'm trying to see if I can just extract the double blocks and make them into a LoRA. Maybe it will work. Then it might be possible to put them back without fuss.
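    The usual way to turn a weight difference into a LoRA is a truncated SVD of the delta between tuned and base matrices. A minimal sketch, assuming this standard approach (the function name and dimensions are illustrative, not the commenter's actual code):

    ```python
    import numpy as np

    # Toy sketch of extracting a low-rank "LoRA" from the difference between
    # two weight matrices: diff = w_tuned - w_base, then a truncated SVD keeps
    # the top-r directions so that down @ up approximates diff.

    def extract_lora(w_base, w_tuned, rank):
        diff = w_tuned - w_base
        u, s, vt = np.linalg.svd(diff, full_matrices=False)
        down = u[:, :rank] * s[:rank]  # (out_dim, rank), singular values folded in
        up = vt[:rank, :]              # (rank, in_dim)
        return down, up

    rng = np.random.default_rng(0)
    w_base = rng.normal(size=(16, 16))
    # Construct a tuned matrix whose difference from base is exactly rank 2.
    w_tuned = w_base + rng.normal(size=(16, 2)) @ rng.normal(size=(2, 16))

    down, up = extract_lora(w_base, w_tuned, rank=2)
    err = np.abs(w_base + down @ up - w_tuned).max()
    print(f"max reconstruction error: {err:.1e}")  # tiny: rank-2 diff recovered
    ```

    When the real weight delta isn't exactly low-rank, the truncation loses information, which is why "putting it back without fuss" only works approximately.
    
    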

    2770379 · Aug 26, 2024 · 1 reaction

    Hey, what VAE do you recommend for this?

    socalguitarist
    Author
    Aug 26, 2024 · 1 reaction

    @Suppressor The standard Flux VAE; I don't know if there are any others out there yet.

    2770379 · Aug 26, 2024

    @socalguitarist Ah, OK. Somehow I picked up a Schnell VAE and a Dev VAE.

    haqthat · Sep 4, 2024

    Are there any special clips that need to be used with this?
    I keep getting an error in SwarmUI:
    2024-09-04 12:11:04.475 [Error] [BackendHandler] backend #0 failed to load model with error: ComfyUI execution error: The text encoders (CLIP) failed to load

    2024-09-04 12:11:04.477 [Warning] [BackendHandler] backend #0 failed to load model Flux/Flux Unchained by SCG - HyFU-8-Step-Hybrid-v1-0.safetensors

    All my other Flux models load fine?

    sevenof9247 · Aug 25, 2024 · 5 reactions
    CivitAI

    please no fast-step models ;)
    unstable/inflexible, fewer schedulers, LoRA compatibility?!?

    socalguitarist
    Author
    Aug 25, 2024

    Give HyFU a try before you complain 😉 - it's neither Schnell nor Dev, it's something else. It's fast, looks good, works fine with LoRAs, and outside of DEIS (which gets a bit blurry) it seems to work fine with all the same samplers. It's got a lower error rate than Schnell, and it's got the stylistic bits of FU that folks like. (And yeah, it has boobs too.)

    sevenof9247 · Aug 26, 2024 · 1 reaction

    maybe I'll give it a try ... ;)

    what I tested (not on your models): one step less and all images go anime ... 2 steps more and all overexposed ...

    sevenof9247 · Aug 26, 2024

    LoRAs trained on civitai work much better with

    https://civitai.com/models/638187?modelVersionId=721627

    alexoterin · Aug 25, 2024 · 1 reaction
    CivitAI

    Downloaded the model and checked it with the standard ComfyUI workflow; either I'm doing something wrong or the model produces cartoon graphics. There are more realistic models. But in any case the author deserves respect for his efforts!

    socalguitarist
    Author
    Aug 25, 2024 · 1 reaction

    Play with sampler and scheduler settings; I'll freely admit I haven't really found the very best settings for it yet. I've found that, at least for some prompts, I can get more realism using dpm_2/simple or lcm/beta (though that tends to still be more "artsy").

    alexoterin · Aug 26, 2024

    @socalguitarist Okay, thanks, I'll give it a try!

    socalguitarist
    Author
    Aug 25, 2024 · 12 reactions
    CivitAI

    Quick Update - Doing more HyFU testing with folks on my Discord channel and we've found you can get really great results with LCM sampler and BETA scheduler at 1.0 CFG for only 4 steps! Try it yourself!

    GoblinC · Aug 26, 2024

    Definitely does not work for me. Using LCM always results in a weird tiled white-block image. This does not happen with any other sampler I've tried. Using this, the official flux VAE, t5xxl_fp8 (also tried fp16), and clip_l. I'm using A1111 on a 4080, and HyFU-8-Step-Hybrid-v1.0.

    Only DPM2 seems to work for me. Another issue I've noticed: I can generate an image in 30-40 seconds, but the second I add a Flux-compatible LoRA, it shoots up to around 3-5 minutes, even at 4-8 steps, and completely hardlocks my computer during that time. It also doesn't seem to take the LoRA into consideration in the generation: I just waited almost 7 minutes for my computer to stop hardlocking, only to find out it generated a generic person and not the person from the LoRA, even with all the trigger words.

    Any ideas? I really can't figure out what's going wrong here or what's at fault.

    socalguitarist
    Author
    Aug 26, 2024

    @GoblinC Blech, yuck, that's no good. Pop on by the discord, we can try some different settings.

    Grumblebutt · Aug 26, 2024 · 2 reactions

    @socalguitarist With the HyFU model I can get a good 1024x1600 image in 2 steps with LCM/Beta. The tones are a little bit soft, but it comes out great after upscaling 1.25x.

    2367652 · Aug 26, 2024

    Same issue here that GoblinC is experiencing using these settings in Forge. I also tried using this with one of my LoRAs using Euler/Simple and the output looks nothing like the character. I'll keep playing around with settings.

    BIGR3D · Aug 26, 2024

    I tried it but it just gave me a gray static image as a result. Still cool using it with 8 steps though! It's awesome stuff.

    throwawayyy111 · Aug 26, 2024

    same issue as @GoblinC

    schonsense · Sep 7, 2024

    @GoblinC Quite a bit late to the party here, but A1111 and Forge do token weight normalization that may be steering your flux generations poorly. Try turning it off in the A1111 settings and see if that makes any difference. Although hopefully by now you've found a fix or workaround.

    apo1s · Aug 26, 2024 · 4 reactions
    CivitAI

    It doesn't work on ForgeUI: "You do not have CLIP state dict!"

    Noppa · Aug 26, 2024 · 20 reactions

    It works, you just need some things in your VAE folder: ae.safetensors from https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main, plus clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors from https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main. Put them all into your VAE folder, load the three of them as the VAE/text encoders, and it should work.

    jmkiii · Aug 26, 2024

    @Nilheiven perfect, thanks

    yessy_boem · Aug 31, 2024

    @Nilheiven when I try to download ae.safetensors it won't let me access it, so I tried to find it somewhere else: https://civitai.com/models/619150/flux1-dev-vae but I see a difference in file size. Not sure what I can best do now; can you point me in the right direction please?


    Noppa · Aug 31, 2024

    @yessy_boem Still seems to work fine for me. If you're having trouble, it might be worth trying something like "wget https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors?download=true" and then removing the "?download=true" at the end of the filename.

    I tried this, and I got past the dictionary errors from these comments, but now I get: ValueError: Failed to recognize model type!

    MrReclusive666 · Sep 6, 2024

    For those with access errors downloading ae.safetensors: https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors worked for me.

    gabrielx · Aug 26, 2024 · 2 reactions
    CivitAI

    For some reason, in Forge, my output at the end becomes just a solid gray PNG. I have no idea why.

    gabrielx · Aug 27, 2024

    Nevermind, I was using a wrong VAE

    ViratX · Aug 27, 2024 · 12 reactions
    CivitAI

    Hi! Can you please point me to the correct ComfyUI workflow to use the HyFU model?

    cruzlight · Aug 27, 2024 · 2 reactions
    CivitAI

    This is what I'm getting if I use LCM: "AttributeError: 'LCMCompVisDenoiser' object has no attribute 'predictor'". I'm using Forge UI, for context.

    QuinceyForder · Aug 27, 2024 · 5 reactions
    CivitAI

    does it work with Automatic1111, or only ComfyUI?

    SubtleShader · Aug 30, 2024 · 3 reactions

    Better to use it in Forge, the better A1111.

    LetTheBassDrop · Sep 1, 2024 · 4 reactions

    ComfyUI is worlds better and actually much easier to use once you get used to the modular setup. You can take someone's photo here, load it into ComfyUI, and it'll give you their workflow, for example.

    Use the ComfyUI Manager and it'll make things so, so, so much easier. I feel like ComfyUI and the Manager should be packaged together at this point.

    It's also A LOT faster. I'm afraid Flux would crawl using Automatic. Another thing ComfyUI does better by a large margin is memory management and overall system resource allocation. I never had any system hitches or slowdowns of any kind using Comfy, and that's when rendering things much larger and using a buttload of models and nodes (ControlNet, InstantID, etc.) at the same time. In Automatic1111 this isn't even an option; your computer would freeze up. Also, you can add an inpainting step to the workflow and not have to manually move things around.

    It's just better. Forge is another alternative to A1111 if you're still on the fence about Comfy.

    GodAlMighty · Aug 28, 2024 · 16 reactions
    CivitAI

    God thinks there should be an FP16 version of this model. Then again I know that not all of us have unlimited power but I definitely do.

    RedPinkRetro · Aug 29, 2024

    based

    Zap9 · Aug 28, 2024 · 2 reactions
    CivitAI

    Is there a way to remove the background-blur depth-of-field (bokeh) effect from photos without negative prompts? I find the effect annoying in base flux1, and the current methods for negative prompts slow generation down by a huge amount.

    socalguitarist
    Author
    Aug 29, 2024 · 1 reaction

    No, I've been trying to tame it. Avoid using high-end photography terms (especially "DOF" and "bokeh"), as those tend to spike it. 'Cell phone camera, flat focus, wide angle': you can try those, they may help.

    Tim_D · Sep 12, 2024

    There's an anti-blur LoRA.

    jauri134 · Aug 31, 2024 · 3 reactions
    CivitAI

    Beginner Question: I guess none of these model versions work for 8 GB VRAM users in Forge at the moment, correct? Had no success so far.

    unmystic · Aug 31, 2024 · 1 reaction

    Yes! You can use NF4 in Forge, or the GGUF versions on Civitai if you want speed.

    unmystic · Aug 31, 2024 · 1 reaction

    I run it on 6GB and I've seen people generating on 3GB.

    jauri134 · Sep 1, 2024 · 2 reactions

    @unmystic I want to use the 4/8-step models from this model page.

    For example, when I try to run "fluxUnchainedBySCG_hyfu8StepHybridV10" in Forge in "bnb-nf4" mode I get this error: "AssertionError: You do not have CLIP state dict!".

    I tried to assign "clip_l" in the text_encoder field, but then I always get a full bluescreen and the PC needs to reboot...

    I tested also other models/settings, no luck yet. Any help how to use the 4/8 Step models would be welcome!

    Edit: OK, I had to use ae.safetensors, clip_l.safetensors and t5xxl_fp16.safetensors together. Now it works :)

    Narz · Sep 2, 2024 · 2 reactions
    CivitAI

    does the hybrid 8-step render with less body warping than the schnell 4-step version? that one really can't handle poses beyond portraits

    nghiavu2011463 · Sep 2, 2024 · 5 reactions
    CivitAI

    Do you have any workflow for this? Thanks

    Bot_1942 · Sep 2, 2024 · 7 reactions
    CivitAI

    Nf4 version?

    Wardensc · Sep 4, 2024 · 5 reactions
    CivitAI

    Please create an FP16 version. Thank you

    jr81 · Sep 4, 2024 · 11 reactions
    CivitAI

    Please add GGUF versions. Thank you

    jastranlove · Sep 6, 2024 · 6 reactions
    CivitAI

    RTX 4070 12GB, 24GB DDR RAM; it takes over 5 minutes to complete. It should be optimized more. Thank you

    DrHojo123 · Sep 8, 2024 · 2 reactions
    CivitAI

    How are the NSFW males? All flux models so far seem heavily female-focused, which I'm not trying to tell people to change, but it would be nice to find a model that is trained on both.

    soulatman · Sep 9, 2024
    CivitAI

    Nice Model ... but ...

    Anyone know why this model doesn't seem to use my GPU? Checking Task Manager, my CUDA usage is only 20% (as opposed to 90% for other models I use), which slows it right down to a crawl. 4 steps are great, but these 4 steps are very slow steps: 4 steps takes over 1 minute. I have a 3060 Ti (8GB) and 32GB RAM.

    Any advice would be helpful

    maladarke · Sep 10, 2024 · 6 reactions
    CivitAI

    "If there is interest, I can release another 'full' version that packages all of that into a single safetensor, just let me know here in comments if you need it." Yes please!!

    jeriff · Sep 11, 2024 · 3 reactions
    CivitAI

    Flux Unchained V2 coming soon?

    kapec512 · Sep 11, 2024
    CivitAI

    I prefer quality over speed - which one should i choose?

    I have 8GB VRAM, but it doesn't matter. I need quality, accuracy and creativity. Also, I don't really do NSFW and sexual stuff. Which one would you recommend? :)

    vanyk2000449 · Oct 12, 2024

    If you have a 30/40-series graphics card, then the basic FLUX 1d NF4 model is perfect for you.

    nisse91 · Sep 12, 2024 · 3 reactions
    CivitAI

    Why is the Hybrid base model set to Flux.D when it's Flux.S?

    Diffusion_4_The_People · Sep 13, 2024 · 2 reactions
    CivitAI

    is this made in flux.d? I get worse quality with this model than the standard flux1d model

    snake2019 · Sep 22, 2024

    yes

    schtroumpfy · Sep 14, 2024 · 3 reactions
    CivitAI

    the last Flux version works great in Ruined Fooocus 1.56! thanks a lot

    vicento · Sep 23, 2024

    hi, where did you find Ruined Fooocus 1.56?

    Recordaboi · Sep 15, 2024
    CivitAI

    In which folder should I put the file when using comfyUI? Noob alert btw :)

    Wvizer · Oct 18, 2024

    unets

    lilwashuu10371 · Sep 17, 2024 · 2 reactions
    CivitAI

    Anyone able to tell me what clip and t5 model to use? I keep getting an error loading CLIP and can only think I have an incorrect model.

    xpnrt · Sep 18, 2024 · 2 reactions

    I use "fluxUnchainedBySCG_hyfu8StepHybridV10.safetensors" (the first one) for the model only, with "t5xxl_fp8_e4m3fn.safetensors" for T5 and "ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors" for clip_l. Euler/simple (beta is a bit better) at 8 steps works great.

    cruello · Sep 22, 2024
    CivitAI

    The biggest problem with the model is that I can't control the height, breast size and weight of the subject.

    The prompt: Starving woman 18 yo with small breasts surviving in a post-apocalyptic world.

    The generated woman is too clean and her breasts are on the big side...

    vanyk2000449 · Oct 12, 2024

    This is a problem with all FLUX models, possibly due to the original Data Set or training conditions of the original FLUX 1 dev model.

    xpnrt · Sep 22, 2024
    CivitAI

    Are we getting a v2 anytime soon? This gives acceptable output only at 8 steps, and you can see the results using my workflow. Compared to normal flux1.d at 20 steps, this takes 25% less time with my workflow and looks better at the same seed with fewer steps overall.

    PinkywDreams · Sep 27, 2024
    CivitAI

    what are the differences between all the Flux models above... sorry if it sounds stupid

    german_psycho · Sep 29, 2024 · 2 reactions

    When you select the model, on the right side of the screen there is a box "About this version". There you can read what the selected model is about.

    svooooosh · Oct 5, 2024
    CivitAI

    I get an error when I try to use the LCM sampler; anyone know why?
    AttributeError: 'LCMCompVisDenoiser' object has no attribute 'predictor'

    moxie1776 · Oct 11, 2024 · 6 reactions
    CivitAI

    Have you considered uploading NF4 versions, or GGUF? (I would be happy to do the conversions and provide the files.)

    darionq · Oct 26, 2024 · 1 reaction
    CivitAI

    Any use of this model makes my computer unusable, with frequent freezes until I kill Python. I've never seen anything like it. I assume the problem is on my end, but it's the only model (of about 100 different ones) that causes this. Using InvokeAI with a mid/high-end configuration (RTX 4070 Super, Ryzen 5900X).

    DranKof · Jan 6, 2025

    I'd recommend using one of the Q5 quantizations of the same model: https://civitai.com/models/662112?modelVersionId=748187 Not sure if it's memory-related or not, but I have 12 GB of VRAM as well, and whenever I try to run a non-quantized flux model it also tends to bring my computer to its knees; the quantized ones at least work for the most part for me.

    DelightofVision · Oct 29, 2024
    CivitAI

    Not working with ComfyUI and also Draw Things.

    Narz · Dec 7, 2024

    SchnFu (schnell) works fine in Draw Things. I've seen others use the dev version(s) too

    lesjo · Dec 9, 2024

    The 8 step Hybrid works fine in ComfyUI, except for its occasional quirk of spitting out nearly black and white, grainy images. Don't let that put you off if you get one early, it might do that one time in twenty on a longer run. I seem to recall the original Flux Schnell doing that to me at times as well.

    condzero1950 · Nov 2, 2024
    CivitAI

    As for female anatomy, the breasts I see posted look OK, but the nipples etc look rather fake and too similar.

    lesjo · Dec 9, 2024

    Fortunately the 8 step model works just fine with a lot of LoRAs (can't say I've tested them all of course) so this is a solvable problem. Another commenter claimed LoRAs don't work well with this model but I have not had any difficulties at all, other than the usual trial and error of getting the weights and order correct. I don't stack more than four LoRAs, and usually stick to three at most as they start to interact otherwise, but that is not a problem specific to this model. It's just normal behavior as best I can tell.

    I'll get back to you regarding the 4 step model.

    zhenlajisile · Nov 3, 2024
    CivitAI

    Can you tell me what tools were used for training?

    neisanawsaib · Nov 11, 2024 · 8 reactions
    CivitAI

    Hi mate! Thanks for the AMAZING work!
    Can you provide an "all in one" version with embedded clip and T5?

    t6589517691 · Dec 1, 2024 · 8 reactions
    CivitAI

    Trained on different people, genders and ages? Or just pretty young women?

    FilipeR · Dec 1, 2024 · 20 reactions

    Shut up....

    lesjo · Dec 9, 2024

    It understands males as well, although you may still want to use a LoRA to force a particular look to male genitals. But if you're asking if it tries to put boobs on guys (as some models do), the answer is "almost no". It has happened once or twice out of hundreds of generations for me.

    LATT_love · Apr 14, 2025 · 6 reactions

    only pretty young women matter.

    sevenof9247 · Dec 7, 2024 · 9 reactions
    CivitAI

    sry, LoRAs don't work well; they come out over-exposed with no depth

    use this one

    https://civitai.com/models/622579/flux1-dev-fp8?modelVersionId=695998

    lesjo · Dec 9, 2024 · 1 reaction

    Really? I've been having no trouble with LoRAs, even three of them stacked seems to work perfectly reasonably, at least with the 8 step Hybrid.

    junior2050 · Dec 14, 2024 · 3 reactions
    CivitAI

    Doesn't work, I keep getting this Error.

    You do not have CLIP state dict!

    zznake · Dec 22, 2024 · 1 reaction

    Had the same error today when I was trying the model out. I solved it by using three files that I found by googling. No idea if I am doing it right, but I am using ae.safetensors, clip_l.safetensors and t5xxl_fp16.safetensors as the VAE.

    MetaGen · Jan 22, 2025

    @zznake where do you put ae.safetensors? I assume clip_l goes in the models/CLIP folder and t5xxl_fp16 goes in models/VAE?

    6921740 · Feb 3, 2025 · 1 reaction

    @MetaGen ae.safetensors goes in the VAE folder; t5xxl and clip go into the text encoder folder.
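    For a stock ComfyUI install, the placement described above would look roughly like this (directory names assume default ComfyUI folders; Forge and other UIs use their own layout):

    ```
    ComfyUI/models/vae/ae.safetensors
    ComfyUI/models/clip/clip_l.safetensors
    ComfyUI/models/clip/t5xxl_fp16.safetensors
    ComfyUI/models/unet/fluxUnchainedBySCG_hyfu8StepHybridV10.safetensors
    ```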

    MetaGen · Jan 22, 2025 · 2 reactions
    CivitAI

    What should I type in models.yaml in order to add this model to my FluxGym training models?

    And do you have recommendations for steps and such, to create a character LoRA?

    BeLuna · Mar 1, 2025 · 1 reaction
    CivitAI

    Hey, I'm new to this website. Can I not use this model to generate an image right here with the website generator? I can't select the model; there's only one flux model.

    victory_cat · Nov 17, 2025

    No one responded to this and it has been 9 months, however I will put the answer in case anyone is curious. CivitAI now only allows a limited number of models to be used for generation, so if this model isn't in the list, you can't generate from it in the website.

    supandifool · Jun 19, 2025
    CivitAI

    There is this new quantization technique called SVDQuant (Nunchaku), which is smaller in size and very fast, about 3x faster. Any plans to release your unchained model in this SVDQuant format?

    Checkpoint
    Flux.1 D

    Details

    Downloads
    28,565
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/25/2024
    Updated
    5/13/2026
    Deleted
    -

    Available On (2 platforms)

    Same model published on other platforms. May have additional downloads or version variants.