GREAT SCOTT! Vivid insanity is here!
===========================================================================
KR34M - 11/25 - Let's give FLUX/KREA the DR34M makeover. This checkpoint excels at photorealism and abstract artwork. We hope you enjoy what's likely our last FLUX release!
Note: The NF4 variant is actually Q4_K_S in GGUF format. The FP8 variant is actually Q8_0, also GGUF!
We recommend the BF16 variant + T5 BF16 if you have the VRAM.
Note: Use LoRAs with C4PACITOR at slightly lower-than-normal weights for best results and to avoid anatomy blending.
===========================================================================
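The "slightly lower than normal" LoRA weight advice above boils down to scaling the low-rank delta before it is merged into the base weights. A toy numpy sketch (illustrative shapes and names only, not this checkpoint's actual merge code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Base weight and a rank-4 LoRA delta; sizes are purely illustrative.
W = rng.normal(size=(64, 64))
A = rng.normal(size=(4, 64))
B = rng.normal(size=(64, 4))

def apply_lora(W, A, B, weight):
    """Merge a LoRA delta into the base weight, scaled by `weight`."""
    return W + weight * (B @ A)

# "Slightly lower than normal": e.g. 0.7 instead of 1.0 shrinks the
# injected delta, which is what reduces the risk of anatomy blending.
W_full = apply_lora(W, A, B, 1.0)
W_soft = apply_lora(W, A, B, 0.7)
```

Lowering the weight is a linear dial: the merged model moves proportionally less far from the base checkpoint.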
C4PACITOR models are created by DR34MSC4PE with all the trappings you've come to expect as well as some cool bonuses:
Enhanced realism and photorealistic concepts, trained on high-quality datasets with the latest techniques.
Specific anime/illustration tuning to reintroduce some lost artistic concepts.
NSFW tuned and capable of both realism and artistic/anime images. Female anatomy is well represented, with additional fine-tuning in the works.
Exceptional performance with character/other/stacked LoRAs.
Like our work? Buy us a coffee: https://ko-fi.com/dr34msc4pe
===========================================================================
DR34MSC4PE is
c0ur4ge (training/qa/inference code) /
eraser851 (training/captions/tooling code/data/qa)
===========================================================================
Recommendations by base model:
dev - (d_v1/d_v2/a_v)
===========================================================================
CFG: 3-7
Steps: 22-60
Sampler: RES_2M
Scheduler: Beta/Beta57/Bong Tangent
========================================
Hyper8/L16HT - (x-series)
===========================================================================
CFG: 3-6
Steps: 16-32
Sampler: DEIS
========================================
schnell - (s_v0/s_v1)
===========================================================================
Note: 100% Schnell Base
Steps: 2-14
Sampler: DEIS
===========================================================================
Description
Per the request of the people - the FP16 dev variant of v0. Bigger digits, guaranteed. ;) Aside from a HyperFlux variant, this will be the final release in the series. Post your results below!
Comments (9)
Cool new checkpoint.
Inspired by your d-v2 NF4, which I have used for a lot of images (and a few bounty wins), I quantized your 23GB checkpoint into a Q5_K_S GGUF with Easy Quantization. I'm using it with a Q8_0 GGUF T5 and the usual ae and clip_l in Forge.
The new quant weighs in at only 8GB, so it lets me keep everything, including LoRAs, in VRAM while I'm working.
So far, the Q5 looks pretty good. I'll post some pics with it here if you don't mind my squeezing it down. If you'd rather I didn't mix them in with your official checkpoint images, I'll understand.
My first post with it, a portrait of a pink-haired woman, was made with the Q5 and a fictional character LoRA from KalitexAI.
Cheers!
Cool! I was actually going to work on the NF4 in the next day or so but if you want to provide the GGUF, I’d happily credit you and upload it here! I’d be interested in trying it out personally as I haven’t GGUF’d any of mine so far.
Else, if you want to point me to a link to Easy Quantization's github page (or equivalent) - I'd appreciate it! :)
@c0ur4ge Unfortunately, large uploads don't work well for me. I live in the rural hinterlands with Internet via DSL through a local phone service, so it would take about 12 hours to upload 8GB, assuming no flutter on the line causes an interruption and timeout. Download out here is faster, but still very slow by modern standards. You could run the quantization program at your end in far less time. Here's the link (props to RainLizard, who made this available to the community):
https://github.com/rainlizard/EasyQuantizationGUI
When you pull it down, the .bat file will set up its own Python venv from the requirements file as long as it can find Python and pip. Once the libraries are all there, the program works well. It's a standalone app with a minimal GUI offering a choice of GGUF quants from Q2 to Q8 (most with K_S or K variants). I find the Q4-Q6 K/K_S types most useful: FP8s in various forms are usually available anyway, and Q4, Q5, and Q6 GGUFs are still mostly faithful to the original.
Be aware that when the program runs, it first makes a larger 23GB temp file, even if you give it a full model in the first place; this is deleted when the program is done. I believe the torch library does this to ensure the model is re-expanded to FP32 before the gguf library does the quantization.
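For anyone curious what these quant types actually do under the hood, here's a toy numpy sketch of Q8_0-style blockwise quantization (one scale per 32-value block, int8 codes). This is an illustration of the scheme, not EasyQuantizationGUI's or the gguf library's actual code:

```python
import numpy as np

def quantize_q8_0(x, block=32):
    """Q8_0-style blockwise quantization: one float scale per block of
    32 values, int8 codes. Returns (codes, scales)."""
    x = x.reshape(-1, block)
    scales = np.abs(x).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0  # avoid divide-by-zero on all-zero blocks
    codes = np.round(x / scales).astype(np.int8)
    return codes, scales

def dequantize_q8_0(codes, scales):
    """Reconstruct float values: code * per-block scale."""
    return (codes.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(1)
w = rng.normal(size=4096).astype(np.float32)

codes, scales = quantize_q8_0(w)
w_hat = dequantize_q8_0(codes, scales)
```

Lower quants like Q4/Q5 use fewer bits per code (plus the K_S-style grouping tricks), which is why they shave so much size off while staying mostly faithful.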
Cheers!
@arnesacknesson553 Roger that - appreciate the rundown, link and continued support! I can absolutely get the quants made, I just had less-than-great luck with previous attempts when I went to check the quality. I'll give this a swing. Thanks again! :)
what does this do? I mean, what does it add to flux?
Many, many things from art styles and concepts that were removed. Better, well-rounded human anatomy; a different, more vivid and textured fine-tune. v3 has some balancing to do still but I suggest you try it and see for yourself! :)
@c0ur4ge you should put this in the model description, it's very confusing as it is right now
@lucidobscura It's there! Along with usage guide, steps etc - I think you may just need to click "Show more" near the current description. :)