
After a ton of requests, I'm finally rolling out CyberRealistic Flux (FLUX.1 dev)! It's designed to make realistic images, both safe-for-work and not-so-safe-for-work. It's not perfect yet, but it's a solid start and sets things up for what's coming next.
Heads up: the non-AIO release doesn't include CLIP models or a VAE. You'll need to grab those separately here.
(clip_lorg.safetensors is the original clip_l file; clip_l.safetensors is the CLIP-SAE-ViT-L-14 one.)
Settings I Use:
Sampling method: DPM++ 2M / Euler
Sampling steps: 20/30 Steps
Distilled CFG Scale: 2–3.5 (samples were done at 3.5, but lower values often give better results)
⚠️ A Quick Warning:
This model can make mature or sensitive stuff. You're in charge of what you create, so use it wisely (or at least get creative with it).
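The recommended settings above (sampler, step count, distilled CFG) can be sketched as a small validation helper. This is purely illustrative; the names and structure are my own, not part of any real API:

```python
# Illustrative helper: check a generation config against the ranges
# recommended for CyberRealistic Flux. Names are made up for this sketch.
RECOMMENDED = {
    "sampler": {"DPM++ 2M", "Euler"},
    "steps": range(20, 31),          # 20-30 sampling steps
    "distilled_cfg": (2.0, 3.5),     # lower values often look better
}

def check_settings(sampler: str, steps: int, distilled_cfg: float) -> list:
    """Return a list of warnings for settings outside the recommended ranges."""
    warnings = []
    if sampler not in RECOMMENDED["sampler"]:
        warnings.append(f"sampler {sampler!r} not in recommended set")
    if steps not in RECOMMENDED["steps"]:
        warnings.append(f"{steps} steps outside the 20-30 range")
    lo, hi = RECOMMENDED["distilled_cfg"]
    if not lo <= distilled_cfg <= hi:
        warnings.append(f"distilled CFG {distilled_cfg} outside {lo}-{hi}")
    return warnings

print(check_settings("Euler", 25, 3.0))   # []
print(check_settings("DDIM", 50, 5.0))    # three warnings
```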
Want Better Prompts?
There's a custom ChatGPT that'll whip up great prompts for CyberRealistic Flux:
[Try it now on ChatGPT]
Description
A quantized Q8_0 version of CyberRealistic Flux v2.5, optimized for faster inference with minimal quality loss. Perfect for running on lower-VRAM setups.
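Q8_0 is a block-quantization scheme: weights are grouped into blocks of 32, each stored as one scale factor plus 32 signed 8-bit integers. Here is a simplified pure-Python sketch of the idea, not the actual GGUF file layout:

```python
# Simplified sketch of Q8_0-style block quantization (the idea behind the
# GGUF format used by this release). Not the real on-disk layout.
BLOCK = 32

def quantize_q8_0(weights):
    """Quantize a flat list of floats into (scale, int8-values) blocks."""
    blocks = []
    for i in range(0, len(weights), BLOCK):
        chunk = weights[i:i + BLOCK]
        amax = max(abs(w) for w in chunk)
        scale = amax / 127.0 if amax else 1.0
        qs = [max(-127, min(127, round(w / scale))) for w in chunk]
        blocks.append((scale, qs))
    return blocks

def dequantize_q8_0(blocks):
    """Reconstruct approximate float weights from quantized blocks."""
    return [q * scale for scale, qs in blocks for q in qs]

weights = [0.5, -1.0, 0.25, 0.75] * 8          # one 32-value block
restored = dequantize_q8_0(quantize_q8_0(weights))
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err)  # small relative to the largest weight in the block
```

The per-block scale is why the quality loss is minimal: the rounding error is bounded by half the scale, which adapts to each block's magnitude.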
Comments (14)
Given that both are the same size, how does this compare to the fp8 "normal" version?
This is an awesome Flux model. Do you know why it isn't available anymore for generation on-site? Not exactly a Christmas gift. Thank you
It's quite expensive to get all models into the generator via the auction. Sometimes you have to make choices.
https://civitai.com/auctions
Sorry for the newbie questions. Does it mean that most of the time you have to place a bid yourself to keep your models in the generator, as an investment, and your ROI is reached through the use people make of them?
@Thrao Yes, and mostly I put models in the generator that barely generate anything anyway. For me it's mainly about users actually using it. And normally Flux was also included by default, but I temporarily didn't have enough Buzz available. It will be back in there next week. If users are already going to say that I didn't give them a good Christmas…
@Cyberdelia I didn't mean to criticize. The fact is, after using your CyberRealistic Pony model a lot, including with early-bird access, I began to explore Flux, and your Flux model is great. I'm having a lot of fun with it, so I was disappointed, and happy to hear it'll be back soon! But I absolutely understand your point of view now that you've explained how it works. BTW, I'm surprised to read it barely generates any revenue. The fact that I and probably hundreds of users spend thousands of yellow Buzz on-site each month with your model doesn't help sustain your work? That would be unfair.
Meanwhile I placed my first-ever bid on your Flux model. Hopefully it can help.
@Thrao You don't have to bid - I will place it in the generator next week.
My pleasure. You said it's quite expensive to get all models into the generator via the auction. I'm a fan of your models, so that bid was a way to thank you, nothing more. Lastly, if you tell me this was totally useless or inappropriate, I'll dig into the website or Discord to better understand the underlying processes... Wishing you a wonderful holiday season!
@Cyberdelia FYI my bid was accepted at rank 114.
Now that it's done, would you be so kind as to explain whether it changes anything more for your model than just having it available in the generator? I'll go to bed less stupid tonight. Thank you
Any chance we can get a Flux 2 version of this? This is all I ever use for faces and people.
Awesome checkpoint! Looking forward to what the next version brings.
Awesome model. For those of us with 6 GB VRAM (not others!), the following:
You can use the v2.5 (GGUF Q4_1), but it's a "regular" GGUF model, meaning the benefit of quantization is that we can actually run it with 6GB, not that it's faster.
There are models out there that require 4 (Flux.1S merges) or 8 (Hyper) steps, which are in fact much faster, so this is not that.
EDIT:
I found out the Hyper Lora can be used with this and works great.
There is also Nunchaku, which I haven't tried yet.
END EDIT.
Setting this up is very confusing for newbies. The GGUF workflows here on Civitai are very confusing and possibly wrong too.
You can get the "official workflow" for GGUF for ComfyUI from this CivitAi article:
https://education.civitai.com/quickstart-guide-to-flux-1/
Under the section "5. GGUF Quantized Models & Example Workflows – READ ME!"
The download links don't work so you have to copy paste them into a new tab to download the workflow.
You need to get the VAE and the two text encoders separately.
The name of the VAE is ae.safetensors and that goes in the folder ComfyUI\models\vae
The name of the two text encoders are clip_l.safetensors
and t5xxl_fp8_e4m3fn.safetensors. These go in the folder ComfyUI\models\text_encoders
This model itself goes in the folder ComfyUI\models\unet
Note that there is also a t5xxl_fp8_e4m3fn_scaled.safetensors. That one will not work and will give an error message.
There could be others also, like a GGUF one.
You also have to install the GGUF package from the ComfyUI Manager ("Extensions" in top right corner), that provides some of the nodes in the workflow.
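The file placement steps above can be sketched as a small checker script. The folder names come from the comment; the model filename is a made-up placeholder, since it depends on which quantization you downloaded:

```python
# Sketch of the ComfyUI file layout described above, assuming a default
# ComfyUI install. The .gguf filename is a placeholder, not the real one.
import os

PLACEMENT = {
    "ae.safetensors": os.path.join("ComfyUI", "models", "vae"),
    "clip_l.safetensors": os.path.join("ComfyUI", "models", "text_encoders"),
    "t5xxl_fp8_e4m3fn.safetensors": os.path.join("ComfyUI", "models", "text_encoders"),
    "cyberrealistic_flux_Q8_0.gguf": os.path.join("ComfyUI", "models", "unet"),  # placeholder name
}

def missing_files(root="."):
    """Return the expected file paths that do not exist yet under `root`."""
    return [os.path.join(root, folder, name)
            for name, folder in PLACEMENT.items()
            if not os.path.exists(os.path.join(root, folder, name))]

for path in missing_files():
    print("missing:", path)
```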
Once you have all that it works.
Sample generations:
dpm++ 2M, beta scheduler (there's a lot of disinfo out there about using others, but this seems to be the best by a long shot)
768 x 512
20 steps - good, and about 6 minutes per image on my ageing Nvidia 1660 GPU
15 steps - very similar, but anatomy problems? About 4.5 minutes on mine.
10 steps - also a clear image but totally different than the 20 step one. About 3 minutes per image on mine.
5 steps - FAIL.
Experiment with this.
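A quick arithmetic check of the timings above: taking the reported numbers at face value, generation time scales almost perfectly linearly, at roughly 18 seconds per step on that GPU:

```python
# Per-step time implied by the timings reported in the comment above
# (GTX 1660, 768x512). Total time scales linearly with step count.
timings = {20: 6 * 60, 15: 4.5 * 60, 10: 3 * 60}  # steps -> seconds

for steps, seconds in timings.items():
    print(steps, "steps ->", seconds / steps, "s/step")
# all three come out at 18.0 s/step
```

So dropping steps buys time back proportionally; the quality differences between 10, 15, and 20 steps are the real trade-off.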
So there, it's working; I probably should buy a new computer though. Corrections welcome, I don't claim to know this stuff.