
Join The Tinkerer on Whop to get early releases, private pages, and the Tinkerer Discord role - all in one place. Join on Whop
After a ton of requests, I'm finally rolling out CyberRealistic Flux (FLUX.1 dev)! It's designed to make realistic images, both safe-for-work and not-so-safe-for-work. It's not perfect yet, but it's a solid start and sets things up for what's coming next.
Heads up: The non-AIO release doesn't include the CLIP models or VAE. You'll need to grab those separately here.
(clip_lorg.safetensors = the original clip_l file; clip_l.safetensors is the CLIP-SAE-ViT-L-14 one)
Settings I Use:
Sampling method: DPM++ 2M / Euler
Sampling steps: 20/30
Distilled CFG Scale: 2-3.5 (samples were done at 3.5, but lower values often give better results)

⚠️ A Quick Warning:
This model can make mature or sensitive content. You're in charge of what you create, so use it wisely (or at least get creative with it).
💬 Want Better Prompts?
There's a custom ChatGPT that'll whip up great prompts for CyberRealistic Flux:
[Try it now on ChatGPT]
Description
CyberRealistic Flux V1.0 is the first release in the new Flux line. It's made to handle both SFW and NSFW content with ease, offering clean lighting, strong detail, and a smooth, natural look. This version lays the groundwork for everything that's coming next, so consider it the starting point for the Flux journey.
Comments (63)
😲 I hadn't expected this!
Are you planning more of the Flux line?
There are things I want to improve, so probably yes :)
well well well the day that i have waited for
This has to be one of my most anticipated models! Thanks Cyber, it looks great! Can't wait to check it out, but I'm one of those common folk with a 3060 12G, so I gotta wait for a quantized Q4 GGUF version or NF4. Definitely hyped for this though.
I have 12GB VRAM too (4070) and can run FP8 models just fine in ComfyUI, Fooocus and Invoke. The only problem is it takes a long time to load when fired up for the first time, but after that it's smooth sailing.
A 12GB GPU can also run this 'fine' in Forge.
I can make you an fp8 scaled version if you want; not sure if it would run in Forge, but it will in ComfyUI.
As for NF4, I'm working on creating something like an NF4v3, but it might take ages, so..
Anyway, I think you can make an NF4v2 relatively easily in Forge.
Cyberdelia I'll test it and let you know; Forge is what I use. I know I tried fp8 before and had issues. I believe the issue isn't running it, it's that the generation time is awful and it ruins the experience, like 2 minutes for 1 image or something like that, whereas I get like 30-40s an image with NF4v2 and GGUF Q4. I'm going off memory, I haven't messed with Flux in a good few months, but I'll get back to you with more precise comparisons between running fp8 on my rig and GGUF/NF4v2. At the very least I can say the experience so far hasn't been ideal.
Back after testing. Firstly I gotta say: incredible work, Cyber! Undoubtedly the best Flux model I've ever worked with. Prompting is very intuitive; I was using a mix of booru-tag-based prompting and natural language, and the model got it right 80-90% of the time. The NSFW in particular (which is mostly all I tested for now, don't judge me lol) is outstanding. I don't think I have ever used a Flux model (even if it is labeled NSFW) that didn't need a little help from LoRAs to get anatomy right, but CyberReal accomplishes that like a boss. 10/10 as far as appearance, details, and prompt comprehension. Sadly for me (and it very well could be something I've done wrong with Flux, idk), each single image took 3:39 to generate at 30 steps, 1 CFG. It's a testament to this model that I still cranked out like 20 images at that speed, cause it was just too good; it was like trying a dab for the first time when you've only ever smoked weed, very exciting. It definitely marks a new age of Flux models imo, next-level shit, but 3:39 really does dampen the experience. So while fp8 worked and I love the model, I'm still really hoping a GGUF or NF4 version pops up in the near future, cause those are the only models I can get a reasonable speed with, for whatever reason.
Mescalamba Sounds ambitious, I wish you luck; I would love to experience something like an NF4v3. Thus far, for speed, the best I've experienced is NF4v2 hyper. I tend to use that more than anything else cause it's only like 30 sec an image, which my ADHD can handle just fine. Thanks for the offer for scaled; I never even heard of fp8 scaled, I'll have to inform myself on it better. It would have to work in Forge cause I'm really not a fan of Comfy; that UI setup irks me, idk why, it just does.
Urameshi fp8 scaled is basically another way to quantize and store models; think of it as a much-improved fp8. It's not like Q8, but should be something like Q6. Also, in case someone has a 40xx or newer GPU, it runs very fast.
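For anyone curious what "fp8 scaled" means in practice, here is a toy sketch of the idea in plain Python: pick one scale per tensor so the largest weight lands at the top of the float8 e4m3 range (max normal value 448), then round every scaled value onto that grid. The function names are illustrative only; real scaled-fp8 checkpoints work on GPU tensors and may use per-block scales, so this is just the principle, not any tool's actual implementation:

```python
import math

def round_to_e4m3(v: float) -> float:
    """Round v to the nearest float8 e4m3 value (toy version; ignores NaN/inf
    and subnormal edge cases). e4m3 has 3 mantissa bits and a max normal of 448."""
    if v == 0.0:
        return 0.0
    sign = -1.0 if v < 0 else 1.0
    a = abs(v)
    e = max(min(math.floor(math.log2(a)), 8), -6)  # clamp exponent to e4m3 range
    step = 2.0 ** (e - 3)                          # spacing of representable values here
    return sign * min(round(a / step) * step, 448.0)

def fp8_scaled(weights):
    """Per-tensor scaled fp8: map the largest magnitude onto 448, quantize,
    and return (quantized values, scale). Dequantize with q / scale."""
    amax = max(abs(w) for w in weights)
    scale = 448.0 / amax
    return [round_to_e4m3(w * scale) for w in weights], scale
```

Without the scale, small weights collapse toward zero on the fp8 grid; with it, the relative rounding error stays bounded by the 3-bit mantissa (roughly 6%), which is the intuition behind scaled fp8 landing closer to Q6-level quality than a naive fp8_e4m3fn cast.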
Mescalamba Well, it sounds worth trying at least. As it stands, regular fp8 I just can't really get in there and have fun with, cause generations are just so long.
Consider me surprised. Did you finetune it, or is it a LoRA merge?
Apart from that, Chroma is fairly "ready" to be used, and I think it could benefit from a well-done LoRA merge (or training).
It's a bit of a mess. I tried a lot in the beginning and did get some good results. But this was done always in between creating my other checkpoints. Still, I kept experimenting. This is basically a mix of all those attempts. The result is decent overall, but I also need to look more into how others have approached this. It's possible I went completely the wrong direction. In the meantime, I'm working on training, so the next version could easily be better or a total disappointment (story of my life).
Cyberdelia In general, Schnell is easier to train; de-distilled Schnell is probably even easier. But it's also rather complex and requires a lot of final tweaks to get something out of it.
Not sure how hard Chroma is to train, but people have already made some LoRAs and it didn't seem that hard. Plus it should be fairly easy to train further (it kinda needs it anyway, still). I find it fairly difficult to prompt, so maybe it's not that good a choice for the general public. But it can make porn, so..
AND HE DELIVERS
Awesome. Quick question though: the encoders on your Huggingface, are they the standard Flux ones, or have you reworked them?
clip_l.safetensors is non-standard (https://huggingface.co/zer0int/CLIP-SAE-ViT-L-14/tree/main). It performed well in my setup, but results may vary depending on your use case.
I will link/add the original versions later
Cyberdelia Ok cool, I'll give it a looksee. ;)
Cyberdelia I would say that the text-focused one is still the best, mostly cause it doesn't give just "sharp text", but sharp everything. Also works on everything from SD15 till.. well, even future stuff that uses CLIP-L.
https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/tree/main?show_file_info=ViT-L-14-TEXT-detail-improved-hiT-GmP-HF.safetensors
Mescalamba Did you try this version?
https://huggingface.co/zer0int/CLIP-KO-LITE-TypoAttack-Attn-Dropout-ViT-L-14
Cyberdelia Yeah I did, yesterday. I still believe the one I linked is best. Although I don't do much FLUX, so it might be different for that.
In the case of FLUX, CLIP doesn't have that much impact, only to some degree on scene composition and, I would say, final details (it basically ties the image together for FLUX). It's not really needed, as FLUX can be used even without CLIP, but since it was trained with it, it's better used with it.
T5-XXL is more important, like the GNER or Alpaca-tuned T5-XXLs. Although FLAN is probably the best for most.
Cyberdelia Forgot I could link those too, but I'm sure you can find them on HF anyway.
Wouldn't mind if you talked/answered in chat though, but I don't want to bother you too much. Also, I can try to help with my limited options, if needed/wanted.
Cyberdelia is now just too powerful...
Power to the worthy ones! 💪
Fantastic work! Is it based on the distilled version of Flux Dev, or is it mixed with the de-distilled Flux Dev by nyanko7?
Also, are you planning to release an FP16 version of this checkpoint? The higher FP16 precision can bring out more details and it should run fine on cards like RTX 3090-5090.
Thanks again for all your cool work, it's really appreciated!
Finally!!!! FLUX VERSION!!!! Awesome!!!
CyberRealistic, always the first choice!!!
This is awesome! Hope you do a Chroma version too.
Awesome
GGUF versions will be appreciated.
I've never used FLUX do I have to do anything special? certain variables or something?
Jesus 11GB these better give me the greatest gens of all time
What are you using? Forge? ComfyUI? There are articles on how to run Flux here on Civitai. It's not that difficult, btw.
Cyberdelia I use A1111, is it not supported? Thank you
You just need 24 GB of VRAM to fit the model with the clip models, unless you wanna wait 10 minutes for generation to finish from shared memory.
What sampler combo and guidance would you recommend?
DPM++ 2M -> Simple (Euler is also a good choice)
Any chances for GGUF Q4 or NF4 edition?
I'll get a new card, just for this
Need Q4 or NF4 to fit a 12 GB VRAM GPU
But this fits my 12 GB 3060 with no issues.
I have tested on several systems I have: 12GB (RTX A2000) and 16GB (RTX A4000), no issues. 8GB (RTX A2000), well, that's slow :)
Around 3 s/it on my 3060 Ti 8GB, with less than 2 mins for a 1024x1024 in Forge; FP8 and quantized Flux models should almost always work on 8GB cards and up.
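The VRAM back-and-forth in this thread comes down to simple arithmetic: the FLUX.1 dev transformer has roughly 12 billion parameters, so the weights alone cost params × bits / 8 bytes, and the text encoders, VAE, and activations come on top. A rough back-of-the-envelope sketch (the 12e9 count and the ~4.5 effective bits for Q4-style GGUF are approximations, not exact figures):

```python
def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate storage for the weights alone; the text encoders, VAE,
    and activations during sampling all add to this."""
    return n_params * bits_per_weight / 8 / 1e9

FLUX_PARAMS = 12e9  # rough parameter count of the FLUX.1 dev transformer

for name, bits in [("bf16/fp16", 16), ("fp8", 8), ("NF4 / Q4 GGUF", 4.5)]:
    print(f"{name:>14}: ~{model_size_gb(FLUX_PARAMS, bits):.1f} GB")
```

That is why bf16 wants a 24 GB card, fp8 only just fits in 12 GB (tight once T5-XXL and CLIP load too, hence the offloading and slow first runs people mention), and Q4/NF4 leaves comfortable headroom on a 12 GB GPU.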
Why not release a full bf16 version so that people can actually properly quantize it? Regular fp8_e4m3fn is kind of bad compared to properly scaled float8 or GGUF Q8_0.
Yes, I know. I'll offer the next versions that way too. This one was more of an experimental version, bundling the work from the past few months.
Why not stop complaining and appreciate the incredible work that Cyber has put in here? I'm running it on an H100 SVX, and this model is a revelation! Mind = blown!
ronikush H100 :))))))
ronikush caysinjaeven455 The NVIDIA H100 is basically the bare minimum. I mean, if you're willing to suffer through a grueling 2-second wait per image, be my guest. Patience of a monk, really...
ronikush Thanks, by the way! Really appreciate the comment.
Cyberdelia Yes, of course it's the minimum.
ronikush Complaining? I was just asking, and I gave the reason why I was asking. About appreciating the work: yeah, appreciated. I was actually going to suggest to Cyberdelia to check out the model my team works on, called Chroma. lodestones/Chroma · Hugging Face
S1LV3RC01N Dunno why they were so mad lol. I had the same question. Appreciate your work on Chroma, by the way. Solid model, although I still have much to learn about using it to its full potential.
erickj92790 Join the Discord and check out the chroma-gens channel as well as the workflows-tips-n-tricks threads. Might help a bit.
Would you consider releasing a version with the CLIP, VAE, etc. files embedded?
Next version for sure. Maybe for the version released this weekend.
Cyberdelia Thank you very much, thank you very much. We love you
Hey) What about a Nunchaku SVDQuant version? It's a new quantization for Flux models, faster than GGUF and with almost no quality loss.
Running it on a H100 SVX. Best Flux Checkpoint, absolutely next-fkn-level! Thank you so much for your work!
Best FLUX fine tune on here
Not sure if this was a me issue, but if anyone else is having difficulty getting LoRAs to work, try installing this clip model; it basically resolved most of my issues
https://civitai.com/models/1805024/hidream-clip-distilled-for-flux-sdxl-sd
Great shout! Thanks, that fixed the LoRA issue for me!
Finally! Great job as always. The only issue I've noticed is the somewhat poor compatibility with LoRAs that mimic a person. The likeness is noticeably off for some reason.
I had a similar issue, and using this clip model improved things a lot, but nothing is perfect.
https://civitai.com/models/1805024/hidream-clip-distilled-for-flux-sdxl-sd
Also, a LoRA strength of 1.4 may be needed,
and increased CFG may be needed; 3.5 misses on some, but at 6 it works.
Test with 1 character LoRA, 1024x1024, Euler simple, 20 steps, 3.5 CFG, and prompt for a basic close-up face portrait of the character to see if you can find them, then go from there.