Models here will generally be built from ideas I have for broad-spectrum improvements to Flux using little to no compute, usually via methods I call "back-of-van ablation." Most focus on improving generalization in the base model rather than on particular concepts, under the assumption that a lot of information is latent in the model and is simply difficult to "access" because of a safety-tuned UNet.
Description
This model is more sensitive to LoRAs and other add-ons, and is generally far more receptive to learning new concepts that were previously, in effect, "forbidden" by intense overtraining.
It should also be more prompt-adherent than the base Flux model, with less artifacting.
This is the NF4 version, bundled with the text encoders and related components; as such, it can be plugged directly into a Flux workflow without issue.