CivArchive

    Nepotism XII

    The pinnacle of Flux evolution. Trained on 8.5 million images, over 124 epochs, and more than 2.1 million steps, Nepotism XII doesn’t just improve—it redefines what’s possible with Flux.


    🔥 What’s New in XII

    • Massive-scale training across a vast, diverse dataset—every style and nuance captured.

    • Precision and polish leveled up: textures, lighting, composition—all sharper, richer, and more lifelike.

    • Unmatched prompt fidelity: higher style compliance and nuanced interpretation—it handles complex and simple prompts alike.

    • Style spectrum master: effortlessly handles photorealism, anime, stylized art, abstraction, and hybrids—no overshoot, just precision following your intent.

    • Cleaner than ever: artifacts are down to minimal-to-moderate levels, and only on highly intricate scenes and edge-case styles/concepts—detail reigns.

    • Stable as lightning: performance optimized for fast, consistent iteration—even on mid-range GPUs.


    🚀 Why XII Crushes It

    • Ultra-deep training foundation means bigger learning volume → richer representation → more reliable outputs.

    • Next-gen DiT architecture refined to perfection—usability reaches new heights.

    • LoRA and CLIP synergy: ready for prompt tuning with minimal weight adjustments—compatible with all your favorite fine-tuned workflows.

    • Practical speed on real rigs: 20–32 steps in 15–20 s on a 4080, delivering near studio-grade results in under a minute per image.


    • Steps: 20–32 (8–12 steps work too, but sacrifice some detail).

    • FluxGuidance: 2–4.5 (lower = more abstract, higher = more on the rails; I use 2.8 and 4.5).

    • LoRA Strategy: Start with vanilla; dial in low LoRA weights for precision tuning.

    • T5‑XXL: Use the Flan T5‑XXL for top contextual understanding.

    • CLIP L: a long-context CLIP L is essential. I recommend LongCLIP-GmP-ViT-L-14.
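The recommended ranges above can be collected into a small helper. This is a hypothetical sketch of my own (the name `recommended_settings` and the `fast` flag are not part of any API), just to make the ranges concrete:

```python
def recommended_settings(fast: bool = False) -> dict:
    """Return suggested sampler settings per the Nepotism XII notes.

    fast=True uses the 8-12 step range, which trades some detail
    for speed; the default uses the full-quality 20-32 step range.
    """
    if fast:
        steps = 12          # 8-12 steps: quicker, slightly less detail
    else:
        steps = 28          # 20-32 steps: full quality
    return {
        "steps": steps,
        "flux_guidance": 2.8,                  # 2-4.5; lower = more abstract
        "t5_encoder": "flan-t5-xxl",           # Flan T5-XXL recommended
        "clip_l": "LongCLIP-GmP-ViT-L-14",     # long-context CLIP L
    }
```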


    📊 Performance Snapshot (4080 GPU)

    • Cold load (no LoRA): ~1.0–1.1 s/it

    • With LoRA (warm): ~1.0–1.3 s/it

    • With LoRA (cold): ~2.0–3.5 s/it, quickly dropping after warm-up
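Those per-iteration timings translate directly into wall-clock time per image as steps × s/it; a quick sanity check (pure arithmetic, no GPU involved):

```python
def seconds_per_image(steps: int, sec_per_it: float) -> float:
    """Wall-clock sampling time: total steps times seconds per iteration."""
    return steps * sec_per_it

# 20 steps at a warm ~1.0 s/it is about 20 s per image, which matches
# the low end of the "15-20 s" figure quoted above; a cold LoRA load at
# ~3.5 s/it is several times slower until warm-up brings it back down.
warm = seconds_per_image(20, 1.0)       # ~20 s
cold_lora = seconds_per_image(20, 3.5)  # ~70 s before warm-up
```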


    🎯 Ideal For

    • Content creators with mid-tier GPUs chasing FP16-level results

    • Artists and developers seeking broad style versatility and prompt fidelity

    • Workflows tight on time but unwilling to compromise on image quality


    Your best outputs fuel my motivation for this project. Upload, show off, and help me make the next one even better!

    (also accepting dataset donations, dm for requirements)

    BONUS TOOLS:

    • Tenos Discord Generation Bot: An image generation bot that uses Comfy's API and Discord's API in a workflow format that focuses on creation over configuration.

    • Flux Prompt Crafter GPT: Crafts highly imaginative and visually detailed Flux prompts.

    • Bobs Latent Optimizer for ComfyUI: This custom node for ComfyUI is designed to optimize latent generation for use with FLUX, SDXL, and SD3 models. It provides flexible control over aspect ratios, megapixel sizes, and upscale factors, allowing users to dynamically create latents that fit specific tiling and resolution needs.

    • Bobs LoRA Loader for ComfyUI: A custom LoRA loader node for ComfyUI with advanced block-weighting controls for both SDXL and FLUX models. Features presets for common use-cases like 'Character' and 'Style', and a 'Custom' mode for fine-grained control over individual model blocks.
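A latent optimizer like the one above presumably derives its dimensions from an aspect ratio plus a megapixel budget. Here is my own minimal sketch of that calculation (not the node's actual code), snapping to multiples of 64, a commonly safe granularity for FLUX/SDXL latents:

```python
import math

def latent_dimensions(aspect_w: int, aspect_h: int,
                      megapixels: float, multiple: int = 64) -> tuple:
    """Pick a (width, height) near the target megapixel count that
    matches the requested aspect ratio, snapped to a safe multiple."""
    target_px = megapixels * 1_000_000
    # Solve w*h = target with w/h = aspect_w/aspect_h.
    height = math.sqrt(target_px * aspect_h / aspect_w)
    width = height * aspect_w / aspect_h
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)
```

For example, a 1-megapixel 16:9 request comes out near 1344×768, a resolution Flux handles comfortably.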

    Description

    Full = Official Build

    Pruned = Experimental Extra Horny Build (prone to glitches)

    This version is a full checkpoint; place it in the checkpoints folder.

    FAQ

    Comments (16)

    fizixAug 21, 2024· 4 reactions
    CivitAI

    What are you prompting for the 2.5d anime style?

    BobsBlazed
    Author
    Aug 21, 2024· 2 reactions

    tbh it just sorta does it sometimes- but you can add "score_9, score_8, 2.5D anime style" and that'll do it too

    fizixAug 21, 2024

    @BobsBlazed Thanks!

    2thecurveAug 24, 2024· 3 reactions
    CivitAI

    Anyone figured a way for negative prompting with flux

    BobsBlazed
    Author
    Aug 25, 2024

    bc the T5 is interpreting your prompt as natural language, putting "no [thing you don't want]" in the prompt does the trick most times.
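Since the T5 encoder reads the prompt as plain natural language, the workaround is just phrasing the exclusion inside the positive prompt. A trivial helper to illustrate (the function name is mine, not any library's):

```python
def with_exclusions(prompt: str, unwanted: list) -> str:
    """Append natural-language exclusions to a Flux prompt, since the
    guidance-distilled Flux pipeline has no separate negative prompt."""
    if not unwanted:
        return prompt
    # "No people, no text." reads as natural language to T5.
    return f"{prompt.rstrip('.')}. No {', no '.join(unwanted)}."
```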

    2thecurveAug 26, 2024

    @BobsBlazed right on thanks. Love your work

    XTraitorSep 6, 2024· 3 reactions
    CivitAI

    Hope you release a GGUF soon; even on a 7900 XTX, while inpainting I'll typically run into out-of-memory issues after a few generations :(

    It's been working amazing when I'm not getting that error, however!

    Edit: That issue may have been resolved on my side by using quad attention

    ShowSoldierSep 7, 2024
    CivitAI

    Great job on the model! I’m curious, could you provide guidance on how to train a LoRA on it? I attempted to use the kohya scripts, but encountered an ‘unexpected_keys=[…]’ error. Additionally, I’m experiencing a NaN loss issue. Any advice would be greatly appreciated.

    BobsBlazed
    Author
    Sep 8, 2024· 1 reaction

    Unexpected keys: You can safely ignore the unexpected keys. In Kohya's scripts, you may have to manually tweak the script; my version is quantized, so the keys don't align 1:1 with Flux Dev or Flux Schnell.

    NaN loss: Most likely due to high learning rates or unstable gradients. Try lowering the learning rate, adding gradient clipping, and ensuring correct data normalization. Also, if you're already using FP16, try FP8 or FP32.
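Of those fixes, gradient clipping is the easiest to reason about: rescale the whole gradient vector whenever its global L2 norm exceeds a cap, preserving direction. A dependency-free sketch of the idea (in a real Kohya/PyTorch run you'd use the trainer's built-in clipping option instead):

```python
import math

def clip_by_global_norm(grads: list, max_norm: float) -> list:
    """Scale gradients down so their global L2 norm is at most max_norm.

    This is the standard defence against exploding gradients (and the
    NaN losses they cause): direction is preserved, magnitude is capped.
    """
    norm = math.sqrt(sum(g * g for g in grads))
    if norm <= max_norm or norm == 0.0:
        return grads              # already within bounds, leave untouched
    scale = max_norm / norm
    return [g * scale for g in grads]
```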

    arvnoodleSep 11, 2024
    CivitAI

    can I use this for dreambooth lora anime fine tune? how do i prompt it if ever

    BobsBlazed
    Author
    Sep 11, 2024

    Yes and use natural language

    BobsBlazed
    Author
    Sep 14, 2024· 3 reactions
    CivitAI

    UPDATE 3: @jurdn helped me and was able to compile the Q8 and Q4KS GGUF's! uploading shortly!

    UPDATE 2: I fixed the issue I had mentioned with the LoRAs changing the model architecture, BUT I can't get the GGUF convert.py to run and my BIOS doesn't support virtual machines, so it looks like I'm out of luck without some help. I'm going to upload V4 today either way. The results are pretty dope ngl: outputs are more accurate to the prompt, stylized better than V3/V2 and Dev/Schnell, the model understands a wider range of subjects, concepts, artistic stylings, and mediums than it did, and NSFW is better than it was (still no XXX). What's extra cool is that it's been getting 1.00–1.05 s/it nearly every time from a cold load without LoRAs (1.00–1.01 s/it after), and with LoRAs it's running at ~3.25–5.45 s/it from a cold load (around 1.03–1.54 s/it after loading) [results on a 4080].

    sevenof9247Sep 15, 2024

    do i need vae, clip, t5xxl ? and only special one?

    BobsBlazed
    Author
    Sep 15, 2024

    @sevenof9247 yes unless you're using the all in one (AIO)

    sevenof9247Sep 15, 2024

    @BobsBlazed but e.g. your V4_clipL is a model and a VAE? I use WebForge, so I usually need one big model (6–11 GB) and, depending, 3 VAE / text encoders, BUT all models need different ones, sometimes NONE.

    SRY iam here over 2 years BUT FLUX is the worst if it comes to consistency and simplicity.

    can some one write an article to all these combinations BNB, NF4, FP8, FP16, DEV, S, GGUF, XXL, CLIP, VAE mixes ??!?!? :D

    BlobbaliskSep 15, 2024

    Are the GGUFs made from the fp16? Not sure there's any point to Q8_0 of an fp8

    Checkpoint
    Flux.1 D

    Details

    Downloads
    544
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/21/2024
    Updated
    5/16/2026
    Deleted
    -

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.