Nepotism • XII
The pinnacle of Flux evolution. Trained on 8.5 million images, over 124 epochs, and more than 2.1 million steps, Nepotism XII doesn’t just improve on its predecessors; it redefines what’s possible with Flux.
🔥 What’s New in XII
Massive-scale training across a vast, diverse dataset—every style and nuance captured.
Precision and polish leveled up: textures, lighting, composition—all sharper, richer, and more lifelike.
Unmatched prompt fidelity: higher style compliance and nuanced interpretation; complex and simple prompts alike are followed faithfully.
Style spectrum master: effortlessly handles photorealism, anime, stylized art, abstraction, and hybrids—no overshoot, just precision following your intent.
Noise-free clarity: artifacts are limited to minimal-to-moderate levels even on highly intricate scenes and edge-case styles/concepts; everywhere else, noise is gone and detail reigns.
Fast and stable: performance optimized for quick, consistent iteration, even on mid-range GPUs.
🚀 Why XII Crushes It
Ultra-deep training foundation means bigger learning volume → richer representation → more reliable outputs.
Next-gen DiT architecture refined to perfection—usability reaches new heights.
LoRA and CLIP synergy: ready for prompt tuning with minimal weight adjustments—compatible with all your favorite fine-tuned workflows.
Practical speed on real rigs: 20–32 steps in 15–20 s on a 4080, delivering near studio-grade results in under a minute per image.
⚙️ Recommended Setup
Steps: 20–32 (8–12 steps work too, but sacrifice some detail).
FluxGuidance: 2–4.5 (lower = more abstract, higher = more on the rails; I typically use 2.8 or 4.5).
LoRA Strategy: Start with vanilla; dial in low LoRA weights for precision tuning.
T5‑XXL: Use the Flan T5‑XXL for top contextual understanding.
CLIP L: A long-context CLIP-L is essential. I recommend LongCLIP-GmP-ViT-L-14.
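The "low LoRA weights for precision tuning" strategy can be sketched with a toy NumPy example (hypothetical weights, not the model's actual tensors): a LoRA delta B·A is scaled by a strength factor before being added to the base weight, so low strengths apply only a gentle nudge.

```python
import numpy as np

def apply_lora(base: np.ndarray, A: np.ndarray, B: np.ndarray, strength: float) -> np.ndarray:
    """Return the base weight with the low-rank LoRA delta (B @ A) blended in at `strength`."""
    return base + strength * (B @ A)

rng = np.random.default_rng(0)
base = rng.normal(size=(16, 16))   # toy base weight matrix
A = rng.normal(size=(4, 16))       # low-rank LoRA factors, rank 4
B = rng.normal(size=(16, 4))

low = apply_lora(base, A, B, strength=0.3)  # low weight: small, precise perturbation
off = apply_lora(base, A, B, strength=0.0)  # strength 0 leaves the base model untouched
```

At strength 0 the output is exactly the vanilla model, which is why starting vanilla and dialing weights up from low values gives controllable tuning.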
📊 Performance Snapshot (4080 GPU)
Cold load (no LoRA): ~1.0–1.1 s/it
With LoRA (warm): ~1.0–1.3 s/it
With LoRA (cold): ~2.0–3.5 s/it, quickly dropping after warm-up
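The per-iteration numbers above translate directly into rough per-image times (a back-of-the-envelope helper, not a benchmark):

```python
def render_time(steps: int, sec_per_it: float) -> float:
    """Estimated wall-clock seconds for one image: steps multiplied by seconds per iteration."""
    return steps * sec_per_it

low_end = render_time(20, 1.0)   # 20 steps at a warm ~1.0 s/it: ~20 s per image
high_end = render_time(32, 1.3)  # 32 steps at the warm-LoRA upper bound: ~41.6 s
```

Even the slow end of the warm range stays well under a minute per image, consistent with the near studio-grade-results claim above.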
🎯 Ideal For
Content creators with mid-tier GPUs chasing FP16-level results
Artists and developers seeking broad style versatility and prompt fidelity
Workflows tight on time but unwilling to compromise on image quality
Your best outputs fuel my motivation for this project. Upload, show off, and help me make the next one even better!
(Also accepting dataset donations; DM for requirements.)
BONUS TOOLS:
Tenos Discord Generation Bot: An image generation bot that uses Comfy's API and Discord's API in a workflow format that focuses on creation over configuration.
Flux Prompt Crafter GPT: Crafts highly imaginative and visually detailed Flux prompts.
Bobs Latent Optimizer for ComfyUI: This custom node for ComfyUI is designed to optimize latent generation for use with FLUX, SDXL, and SD3 models. It provides flexible control over aspect ratios, megapixel sizes, and upscale factors, allowing users to dynamically create latents that fit specific tiling and resolution needs.
Bobs LoRA Loader for ComfyUI: A custom LoRA loader node for ComfyUI with advanced block-weighting controls for both SDXL and FLUX models. Features presets for common use-cases like 'Character' and 'Style', and a 'Custom' mode for fine-grained control over individual model blocks.
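The latent-sizing idea behind a node like Bobs Latent Optimizer can be sketched roughly as follows (my own simplification, not the node's actual code): pick a width and height that hit a target megapixel budget at a given aspect ratio, snapped to a grid the model tolerates, then derive the latent's spatial shape (Flux's VAE downsamples spatially by 8).

```python
import math

def latent_dims(aspect_w: int, aspect_h: int, megapixels: float, multiple: int = 64):
    """Pixel (w, h) near `megapixels` MP at the given aspect ratio, snapped to `multiple`."""
    target_px = megapixels * 1_000_000
    # Solve w/h = aspect and w*h = target_px, then snap both sides to the grid.
    h = math.sqrt(target_px * aspect_h / aspect_w)
    w = h * aspect_w / aspect_h
    snap = lambda v: max(multiple, int(round(v / multiple)) * multiple)
    return snap(w), snap(h)

w, h = latent_dims(16, 9, 1.0)   # ~1 MP at 16:9 -> (1344, 768)
latent_hw = (h // 8, w // 8)     # Flux latent spatial shape is 1/8 of pixel resolution
```

Snapping to multiples of 64 keeps dimensions friendly to the VAE and attention tiling; the actual node exposes more knobs (upscale factors, tiling presets) than this sketch.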

Comments (9)
Any plans for a Q8 for the new version? =)
Nepotism XII
in forge error:
RuntimeError: Error(s) in loading state_dict for IntegratedCLIP: size mismatch for transformer.text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([248, 768]) from checkpoint, the shape in current model is torch.Size([77, 768]). Time taken: 0.6 sec.
Which CLIP and T5 are you using?
I highly recommend switching to ComfyUI if Forge is not working for you. It looks like the Flan T5 and/or the Long Clip L are not working in Forge.
Amazing model! It would be nice to see a fp16 version of the last model. Thanks for your job!
Hi, the DIT version doesn't work very well with Forge for me, but the XII version seems to work better. I've found that DPM++ 2M or DDIM/Beta dcfg 3/4 works better than Euler. What settings do you recommend with Forge?
Thanks
any chance you will have a fp16 version of this i can pay for it let me know
My favorite Flux merge out of like 30 I've tried this morning.
Feedback.. Great model, to make it perfect it could improve in diversity.. of both ages and ethnicities. Also, a little hard to make people smile, tends too much to seriousness when compared to other models. I like the object and environment consistency a lot, best one so far.
diversity.. of both ages and ethnicities. 1000% agree, still a very nice Model.