Nepotism • XII
The pinnacle of Flux evolution. Trained on 8.5 million images over 124 epochs and more than 2.1 million steps, Nepotism XII doesn't just improve on its predecessors; it redefines what's possible with Flux.
🔥 What’s New in XII
Massive-scale training across a vast, diverse dataset—every style and nuance captured.
Precision and polish leveled up: textures, lighting, composition—all sharper, richer, and more lifelike.
Unmatched prompt fidelity: higher style compliance and nuanced interpretation, handling complex and simple prompts alike.
Style spectrum master: effortlessly handles photorealism, anime, stylized art, abstraction, and hybrids—no overshoot, just precision following your intent.
Cleaner output: noise is largely gone and detail reigns, with only minimal to moderate artifacts remaining on highly intricate scenes and edge-case styles or concepts.
Stable as lightning: performance optimized for fast, consistent iteration—even on mid-range GPUs.
🚀 Why XII Crushes It
Ultra-deep training foundation means bigger learning volume → richer representation → more reliable outputs.
Next-gen DiT architecture refined to perfection—usability reaches new heights.
LoRA and CLIP synergy: ready for prompt tuning with minimal weight adjustments—compatible with all your favorite fine-tuned workflows.
Practical speed on real rigs: 20–32 steps in 15–20 s on a 4080, delivering near studio-grade results in under a minute per image.
⚙️ Recommended Setup
Steps: 20–32 (8–12 steps work too, but sacrifice some detail).
FluxGuidance: 2.0–4.5 (lower = more abstract, higher = stricter prompt adherence; I use 2.8 and 4.5).
LoRA Strategy: Start with vanilla; dial in low LoRA weights for precision tuning.
T5‑XXL: Use Flan-T5‑XXL for the best contextual understanding.
CLIP-L: A long-context CLIP-L is essential; I recommend LongCLIP-GmP-ViT-L-14.
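As a quick sanity check, the recommended ranges above can be captured in a small validator. This is a hypothetical sketch for your own workflow scripts; the class and field names are mine, not part of any Flux or ComfyUI tooling:

```python
from dataclasses import dataclass

@dataclass
class GenSettings:
    """Hypothetical container for the recommended Nepotism XII settings."""
    steps: int = 28             # 20-32 recommended; 8-12 works with some detail loss
    flux_guidance: float = 2.8  # 2.0-4.5; lower = more abstract, higher = stricter
    lora_weight: float = 0.0    # start vanilla, then dial in low weights

    def warnings(self) -> list[str]:
        """Flag values outside the ranges recommended above."""
        notes = []
        if not 20 <= self.steps <= 32:
            notes.append("steps outside the 20-32 sweet spot")
        if not 2.0 <= self.flux_guidance <= 4.5:
            notes.append("FluxGuidance outside the 2.0-4.5 range")
        if self.lora_weight > 1.0:
            notes.append("LoRA weight above 1.0; start lower for precision tuning")
        return notes
```

For example, `GenSettings(steps=10).warnings()` flags the low step count, while the defaults pass clean.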
📊 Performance Snapshot (4080 GPU)
Cold load (no LoRA): ~1.0–1.1 s/it
With LoRA (warm): ~1.0–1.3 s/it
With LoRA (cold): ~2.0–3.5 s/it, quickly dropping after warm-up
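The s/it figures above translate into per-image wall time in the obvious way. A back-of-the-envelope helper (the function name is mine; it deliberately ignores one-off costs like model load, VAE decode, and LoRA warm-up):

```python
def estimated_seconds(steps: int, sec_per_it: float) -> float:
    """Rough per-image wall time: steps x seconds-per-iteration.

    Excludes one-off costs (model load, VAE decode, LoRA warm-up),
    so real first-image times will be higher.
    """
    return steps * sec_per_it

# 20 steps at the warm no-LoRA rate of ~1.0 s/it comes out to ~20 s per image,
# which matches the "20-32 steps in 15-20 s" claim at the low end of the range.
```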
🎯 Ideal For
Content creators with mid-tier GPUs chasing FP16-level results
Artists and developers seeking broad style versatility and prompt fidelity
Workflows tight on time but unwilling to compromise on image quality
Your best outputs fuel my motivation for this project. Upload, show off, and help me make the next one even better!
(also accepting dataset donations; DM for requirements)
BONUS TOOLS:
Tenos Discord Generation Bot: An image generation bot that uses Comfy's API and Discord's API in a workflow format that focuses on creation over configuration.
Flux Prompt Crafter GPT: Crafts highly imaginative and visually detailed Flux prompts.
Bobs Latent Optimizer for ComfyUI: This custom node for ComfyUI is designed to optimize latent generation for use with FLUX, SDXL, and SD3 models. It provides flexible control over aspect ratios, megapixel sizes, and upscale factors, allowing users to dynamically create latents that fit specific tiling and resolution needs.
Bobs LoRA Loader for ComfyUI: A custom LoRA loader node for ComfyUI with advanced block-weighting controls for both SDXL and FLUX models. Features presets for common use-cases like 'Character' and 'Style', and a 'Custom' mode for fine-grained control over individual model blocks.

Comments
I think something went wrong: your v4 [DiT] version lists SD 1.5 as its base model in the info box on the right-hand side. Is that correct?
just Civit being Civit, it's Flux
Do these go in the unet or regular checkpoint folder/loader (the big main files)?
unet, unless you're using an AIO version, in which case checkpoint
all AIO versions work great on Ruined Fooocus 1.56! thanks a lot!!
Q8 doesn't work, don't waste your time.
Make sure to update your GGUF loader node in ComfyUI and you need to use a NumPy version before v2.
This error:
mat1 and mat2 shapes cannot be multiplied (4096x64 and 256x768)
your node is out of date
@BobsBlazed I updated the GGUF loader node and then had to revert NumPy back to v1.24.3 and now it works. Something was changed in NumPy v2+ that doesn't allow it to work. Thanks.
@jaykrown np, glad you were able to get it working
Sadly, I get a very similar error in Forge. Looks like you did something slightly differently than the other Q8 model I'm using?
Either way, just gonna try to use the fp8 model for now.
Getting this error in Forge. Any fix?
@nunyabizness1 just update forge
