Nepotism • XII
The pinnacle of Flux evolution. Trained on 8.5 million images, over 124 epochs, and more than 2.1 million steps, Nepotism XII doesn't just improve—it redefines what's possible with Flux.
🔥 What’s New in XII
Massive-scale training across a vast, diverse dataset—every style and nuance captured.
Precision and polish leveled up: textures, lighting, composition—all sharper, richer, and more lifelike.
Unmatched prompt fidelity: higher style compliance and nuanced interpretation—complex (and simple) prompts alike are handled with ease.
Style spectrum master: effortlessly handles photorealism, anime, stylized art, abstraction, and hybrids—no overshoot, just precision following your intent.
Cleaner output: noise is largely gone and detail reigns; expect only minimal-to-moderate artifacts on highly intricate scenes and edge-case styles/concepts.
Stable as lightning: performance optimized for fast, consistent iteration—even on mid-range GPUs.
🚀 Why XII Crushes It
Ultra-deep training foundation means bigger learning volume → richer representation → more reliable outputs.
Next-gen DiT architecture refined to perfection—usability reaches new heights.
LoRA and CLIP synergy: ready for prompt tuning with minimal weight adjustments—compatible with all your favorite fine-tuned workflows.
Practical speed on real rigs: 20–32 steps in 15–20 s on a 4080, delivering near studio-grade results in under a minute per image.
⚙️ Recommended Setup
Steps: 20–32 (8–12 steps work too, but sacrifice some detail).
FluxGuidance: 2–4.5 (lower = more abstract, higher = more on the rails; I use 2.8 and 4.5).
LoRA Strategy: Start with vanilla; dial in low LoRA weights for precision tuning.
T5‑XXL: Use the Flan T5‑XXL for top contextual understanding.
CLIP-L: A long-context CLIP-L is essential. I recommend LongCLIP-GmP-ViT-L-14.
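The ranges above can be sketched as a small sanity-check helper. This is a hypothetical convenience function, not part of ComfyUI or any official API; the names (`RECOMMENDED`, `check_settings`) and the lora_weight range are my own assumptions for illustration.

```python
# Hypothetical helper that flags generation settings outside the
# recommended ranges listed above. Not part of any official tooling.

RECOMMENDED = {
    "steps": (20, 32),            # 8-12 also works, at some cost in detail
    "flux_guidance": (2.0, 4.5),  # lower = more abstract, higher = more literal
    "lora_weight": (0.0, 1.0),    # assumed range; start low and dial in
}

def check_settings(steps: int, flux_guidance: float,
                   lora_weight: float = 0.0) -> list[str]:
    """Return a list of warnings for values outside the recommended ranges."""
    warnings = []
    for name, value in (("steps", steps),
                        ("flux_guidance", flux_guidance),
                        ("lora_weight", lora_weight)):
        lo, hi = RECOMMENDED[name]
        if not (lo <= value <= hi):
            warnings.append(f"{name}={value} is outside the recommended {lo}-{hi}")
    return warnings
```

For example, `check_settings(28, 2.8)` returns an empty list, while `check_settings(50, 6.0)` flags both the step count and the guidance value.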
📊 Performance Snapshot (4080 GPU)
Cold load (no LoRA): ~1.0–1.1 s/it
With LoRA (warm): ~1.0–1.3 s/it
With LoRA (cold): ~2.0–3.5 s/it, quickly dropping after warm-up
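A rough back-of-envelope check on those numbers: per-image sampling time is just steps × seconds-per-iteration. This sketch ignores VAE decode, text encoding, and model-load overhead, so real wall-clock times will be slightly higher.

```python
# Rough per-image time estimate from the s/it figures above.
# Ignores VAE decode, text encoding, and load overhead.

def image_time(steps: int, sec_per_it: float) -> float:
    """Approximate sampling time in seconds for one image."""
    return steps * sec_per_it

# Warm run, no LoRA, at ~1.0 s/it:
print(image_time(20, 1.0))              # 20 steps -> ~20 s
# Worst warm-LoRA case at ~1.3 s/it:
print(round(image_time(32, 1.3), 1))    # 32 steps -> ~41.6 s
```

So at the recommended 20–32 steps, a warm run lands roughly in the 20–42 second range per image on a 4080.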
🎯 Ideal For
Content creators with mid-tier GPUs chasing FP16-level results
Artists and developers seeking broad style versatility and prompt fidelity
Workflows tight on time but unwilling to compromise on image quality
Your best outputs fuel my motivation for this project. Upload, show off, and help me make the next one even better!
(also accepting dataset donations, dm for requirements)
BONUS TOOLS:
Tenos Discord Generation Bot: An image generation bot that uses Comfy's API and Discord's API in a workflow format that focuses on creation over configuration.
Flux Prompt Crafter GPT: Crafts highly imaginative and visually detailed Flux prompts.
Bobs Latent Optimizer for ComfyUI: This custom node for ComfyUI is designed to optimize latent generation for use with FLUX, SDXL, and SD3 models. It provides flexible control over aspect ratios, megapixel sizes, and upscale factors, allowing users to dynamically create latents that fit specific tiling and resolution needs.
Bobs LoRA Loader for ComfyUI: A custom LoRA loader node for ComfyUI with advanced block-weighting controls for both SDXL and FLUX models. Features presets for common use-cases like 'Character' and 'Style', and a 'Custom' mode for fine-grained control over individual model blocks.

Description
smaller, better, faster, stronger 🎶
FAQ
Comments (7)
I saw your examples and their prompts. You only get good results because you write very long prompts, but when I try to write a prompt that is only 8–10 words it doesn't give me any good results.
I haven't had the same experience with short prompts; sometimes I use only a single word/letter/emoji and get cool results, though obviously anything that vague can yield strange output. One of the largest benefits of Flux is its massive prompt comprehension and adherence, so using a short prompt is less advantageous (it's like using only half your paint on a canvas). If you're struggling to write prompts, please try my GPT assistant for FLUX: give it your 8–10 words and it will give you prompt examples to tweak on your own. https://chatgpt.com/g/g-oODh6sLdt-flux-prompt-crafter-by-bobsblazed
Good guidance levels? I've been experimenting and too high takes away most of the style differences while too low just jumbles it.
Personally I use the default 3.5
If you are using a KSampler, use 1.
Trying to use Flux-based ControlNets with it in ComfyUI makes the process run impossibly slowly for me: something that takes minutes to complete with ControlNets in vanilla Flux D takes about that long per step here.
Anyone else have this issue?
Honestly, almost all Flux ControlNets are either flawed or just not working at all. The ones that do work are really hardware-demanding and thus really slow on anything older than last year's cards.
The only thing you can try is seriously lowering the output resolution, I mean like 1024 px on the long side. Then some of them actually work, poorly.
Compared to the level of ControlNets on SD1.5 or even Pony, it's a joke. But then, Flux is still pretty new.