GREAT SCOTT! Vivid insanity is here!
===========================================================================
KR34M - 11/25 - Let's give FLUX/KREA the DR34M makeover. This checkpoint excels at photorealism and abstract artwork. We hope you enjoy what's likely our last FLUX release!
Note: the NF4 variant is actually Q4_K_S in GGUF format, and the FP8 variant is actually Q8_0, also GGUF!
We recommend the BF16 variant + T5 BF16 if you have the VRAM.
Note: Use LoRA with C4PACITOR at slightly lower than normal weights for best results and to avoid anatomy blending.
===========================================================================
C4PACITOR models are created by DR34MSC4PE with all the trappings you've come to expect as well as some cool bonuses:
Enhanced realism and photorealistic concepts, trained on high-quality datasets with the latest techniques.
Specific anime/illustration tuning to reintroduce some lost artistic concepts.
NSFW tuned and capable of both realism and artistic/anime images. Female anatomy is well represented, with additional fine-tuning in the works.
Exceptional performance with character/other/stacked LoRAs.
Like our work? Buy us a coffee: https://ko-fi.com/dr34msc4pe
===========================================================================
DR34MSC4PE is
c0ur4ge (training/qa/inference code) /
eraser851 (training/captions/tooling code/data/qa)
===========================================================================
Recommendations by base model:
dev - (d_v1/d_v2/a_v)
===========================================================================
CFG: 3-7
Steps: 22-60
Sampler: RES_2M
Scheduler: Beta/Beta57/Bong Tangent
========================================
Hyper8/L16HT - (x-series)
===========================================================================
CFG: 3-6
Steps: 16-32
Sampler: DEIS
========================================
schnell - (s_v0/s_v1):
===========================================================================
Note: 100% Schnell Base
Steps: 2-14
Sampler: DEIS
===========================================================================
Description
If my calculations are correct, when this baby hits 88 miles per hour... you're gonna see some serious shit.
FAQ
Comments (25)
Can this model be used to train LoRA?
It COULD, but I'm not sure what the outcome would be! I'd naturally make sure you use an FP16 variant if you try. That said, you can use LoRAs trained on FLUX.1 Dev just fine as well!
The dev full model seems to make really awkward bodies with extra limbs and stuff for me. I think the beta was actually better.
I’d suggest trying different resolutions/step counts - the Beta is likely aesthetically a bit better; v1 has better NSFW concept knowledge but is, admittedly, still undertrained. v2 is in the works as a full checkpoint tune rather than LoRA merge pipelines.
If you are using Forge, update your copy and then try Forge Realistic (slow) or Forge Realistic for the sampling method, or you can use DEIS, although I don't find that to be the best on Forge. Also on Forge, use Normal rather than Simple schedule type, and consider 25 steps for the render. Those settings should get you a good render. Afterward, you can dial the settings back if you want it to go more quickly. I hope this helps.
@arnesacknesson553 Thanks for this - I really like DEIS in comfy but this is not the first time I've heard of some "performance differences" between the two. Thanks for the info/ helping out!
@c0ur4ge Hey, thanks for the kind words, and the buzz, but really, I should be thanking you. I like your model, and I appreciate your posting it. I just thought it was a shame that some of the newer folks weren't getting the good results I know it can render!
Please give the new rebalanced v1.1 a shot! :)
@c0ur4ge Ok, will do, thanks for making it.
@c0ur4ge I tried v1.1 just now - did batches of 4 images using the same prompts I've been using today with other models. Mostly I am using ipndm+beta, but I tried euler+simple, euler+beta, deis+simple, and deis+beta too. I'm getting the same pretty faces with mangled bodies as before, on all 4 images in each batch. I tried with and without LoRAs. Mangled bodies happen sometimes with all the FLUX checkpoints, but it's not the norm. I'm using ComfyUI, but I don't know why it would behave differently than Forge. Sorry, I don't know what's up, but it's not improved for me. I'll leave the model on my hard drive for a few days. If there's something else I should try, let me know.
@EricRollei21 I would suggest trying with different resolutions then. What are you presently trying? And with what step counts? If you’re asking for something it “doesn’t know” from a nsfw/anatomy perspective this can happen. Please ensure you are using natural language to prompt C4PACITOR for such concepts as well - while the traditional "1girl, solo, nsfw" etc work, they tend to have more issues like this.
@c0ur4ge I'm using natural language prompts for the T5 clip, usually flux guidance 3.5 (real guidance 1) with 40 steps. My preferred combo is ipndm+beta. This works very well for many other checkpoints that I use. Can't figure out why C4PACITOR is mangling things. I'd post comparisons, but they are ugly.
@EricRollei21 Send me a DM - would be useful for me to see if I can find any potential culprits in the prompts and ultimately, the dataset to prevent them from making anything worse later.
Knowing what resolutions you're using (and whether raising the CFG to 4.5 helps) would also be useful. Thanks for the feedback!
I think your NSFW training is too strong and decreases the quality of anything else.
Most often results are glitchy or uncanny weirdness, but often NSFW triggers with innocent prompts. For example, I was trying to render "close up photograph of granny stealing oranges" and she was having BJ instead of OJ.
Beta is slightly safer, but still lowers quality too much.
Thanks for this feedback (and laugh) - v1.1 takes this to heart. We hope you’ll find it balanced but still capable of these … unique abilities at higher guidance settings.
The unfortunate truth is that so-called 'training' of LLM or Stable Diffusion models actually ruins them - or these 'modified' models should only be used for their 'new' abilities. With image generation, it is almost always better to use LoRAs.
@blobby99 This is only somewhat true - in the case of these initial versions of C4PACITOR, it's the result of merging trained LoRA layers between various checkpoints/back to the base model, etc. That said, if the "new ability" is half NSFW (but HQ) content and half stylistic tendencies you want picked up (essentially reg images, to a degree), it doesn't inherently wreck things.
Where v1 "comes on strong" is due to the strength at which certain layers were merged, with a focus on anatomy and not the preservation of other things. 1.1 is based completely on Dev/Schnell with less potency on the NSFW concepts, but enough to where they can be coaxed out at different step counts (Schnell) and Guidance (dev).
To the point, V2 is a full checkpoint tuned on LOTS of stuff and has no such encumbrances thus far.
lol this made me download this
Can you put out an NF4?
Hey there! I actually do plan to do a bump release along with another variant of v1 while ironing out some "unforeseen" issues with our distributed training setup. I'll see about getting this handled in a few days if I can get the time!
@c0ur4ge Nice!
I have a 4080 Super so unfortunately I can't use any of the current ones without running out of VRAM; but I've been able to use NF4 FLUX models.
I'd appreciate it!
@SickMoonDoe Another suggestion would be to load either of them as FP8 - they support it if you have the system memory to load the checkpoint first. Should put it in range (tested in Comfy)
@SickMoonDoe You can absolutely load the FP8 with a 4080 Super and even use LoRAs with Comfy; I have tested it myself, as I have both a 4080 Super and a 4090. Part of it is also how you're launching: try with --force-channels-last --dont-upcast-attention --normalvram, and also try PyTorch attention (on version 2.4.1, CUDA 12.4) instead of xformers, setting expandable segments (it's an environment variable) rather than any special CUDA variables. Edit: though yes, it means allowing offloading and sharing resources with the CPU. I don't know what you have paired with your 4080 Super, but my 5950X and higher system memory pick up the slack with my 4080 Super far better than I expected. I got a 4090 from work and was surprised when I realized I didn't have it as bad as I thought I did.
@joehorse it might be time for me to switch from InvokeAI to ComfyUI. It crashes for me in InvokeAI with a VRAM OOM error :'(
@SickMoonDoe honestly I don’t always like comfy but man do I like quality and you can always rip workflows from this site including mine and not have to learn much. It’s much better with vram.