
    👋 If you like what I do and want to support the development, feel free to buy me a coffee:

    Ko-Fi


    Neural Repair & Portable Checkpoints (LoRA Type).

    Hello! I'm back with something much juicier than ever!

    Originally, I planned to release more Samplers (and I will), but I pivoted to solve a critical flaw I found in the community: Many popular merged checkpoints have a corrupted Layer 11.

    • This translates to:

      • āŒ Text Encoder errors (NaNs).

      • āŒ Poor LoRA compatibility.

      • āŒ Massive information loss.

    • And here's the solution: [Anti-NaNs + RAM Cleaner]. Uh-huh... The LoRA repair method failed, so I engineered a Runtime Fixer. Just paste the provided script into a new cell in your Colab/Notebook, run it before the WebUI, and you're good to go.

      • Herrscher Shield: Scans Layer 11 and eliminates NaNs in RAM instantly, so there's no need to download fixed checkpoints (see the sketch after this list).

      • AGGA Optimizer: Aggressively cleans RAM to prevent Colab crashes.
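
    The published Runtime Fixer isn't reproduced in this section, but the core idea is roughly the following. This is only a minimal sketch of the "scan Layer 11, scrub NaNs in RAM, then free memory" approach; the function names and the checkpoint path are hypothetical, not the actual script.

        import gc
        import torch
        from safetensors.torch import load_file

        def scrub_layer11(state_dict):
            """Replace NaN/Inf values in Layer 11 tensors with zeros, in place."""
            patched = 0
            for key, tensor in state_dict.items():
                if "layers.11" in key and (torch.isnan(tensor).any() or torch.isinf(tensor).any()):
                    state_dict[key] = torch.nan_to_num(tensor, nan=0.0, posinf=0.0, neginf=0.0)
                    patched += 1
            return patched

        def free_ram():
            """Aggressive cleanup so a Colab session doesn't exhaust system RAM."""
            gc.collect()
            if torch.cuda.is_available():
                torch.cuda.empty_cache()

        weights = load_file("/content/model.safetensors")  # hypothetical path
        print(f"Patched {scrub_layer11(weights)} tensors in Layer 11")
        free_ram()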


    Okay, now let's talk about these LoRAs (Total / Duplo / Ultra):

    These LoRAs function as "Structural Converters."

    Think of them as a high-end "Cosplay" for your checkpoints: they allow a lightweight model to adopt the exact visual DNA, intelligence, and stylistic precision of a massive 6GB checkpoint (like Pony or Illustrious).

    Instead of dealing with architectural instability or NaNs, I have distilled the core features of these giant models into optimized 500MB-1GB files. They let you inject the prompt-understanding and "soul" of a heavy base model into any other refined checkpoint without the overhead of downloading or loading 6GB files every time.

    • In conclusion: Now you have a tool for every need:

      • Do you just want a refined DMD? -> Total.

      • Do you want the information and style? -> Duplo.

      • Do you want it all, a full copy? -> Ultra.


    🟢 TOTAL (Concept Injector):

    • What does it extract?: Text Encoders + attn2 (Cross-Attention).

      • Complete Version: Extracts Text Encoders + attn2 (Cross-Attention). It’s the "Brain" of the model.

      • Visual Only Version: Extracts only the Style (UNet). | DMD2 Pure.

    • What does it do?: Associates words with concepts.

      • Total knows that "Miku" means "Teal hair, long pigtails".

    • Result: Corrects what is drawn, but the "brushstroke" remains from your base model.

    🟡 DUPLO (Structure & Geometry):

    • What does it extract?: Text Encoders + attn2 + attn1 (Self-Attention).

    • What does it do?: Controls geometry and spatial composition. attn1 is where the "shape" of the style resides (eye size, body proportions, composition).

    • Result: The image gets the structure of the source model (e.g., Pony), but the rendering (skin, lighting) is a hybrid.

      • Best for fixing anatomy while keeping your checkpoint's texture.

    🔴 ULTRA (Full Replica):

    • What does it extract?: EVERYTHING (attn, ff, proj, te).

    • What does it do?: Copies the FeedForward (FF) layers too, which determine the Render Style (lighting, line weight, shading).

    • Result: A complete conversion. The base model visually disappears and becomes a perfect replica of the source.
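
    To make the three tiers concrete, here is a minimal sketch of how they could map onto state-dict key names during extraction. The substrings (attn1, attn2, ff, text-encoder prefixes) follow the usual SDXL naming, but the selection logic below is illustrative, not the actual extraction script.

        # Illustrative key filters for the three tiers (assumed SDXL key naming).
        # attn2 = cross-attention (prompt -> image), attn1 = self-attention (geometry),
        # ff = feed-forward (render style), te = text encoders.
        def tier_filter(key: str, tier: str) -> bool:
            is_te = "conditioner" in key or "text_model" in key
            is_attn2 = ".attn2." in key
            is_attn1 = ".attn1." in key
            is_ff = ".ff." in key or "proj" in key
            if tier == "total":   # Text Encoders + cross-attention (the "brain")
                return is_te or is_attn2
            if tier == "duplo":   # adds self-attention (structure & geometry)
                return is_te or is_attn2 or is_attn1
            if tier == "ultra":   # everything: attn, ff, proj, te (full replica)
                return is_te or is_attn2 or is_attn1 or is_ff
            return False

        # Example: count how many tensors each tier would pick up from a source model.
        # from safetensors.torch import load_file
        # weights = load_file("source_checkpoint.safetensors")  # hypothetical path
        # for tier in ("total", "duplo", "ultra"):
        #     print(tier, sum(tier_filter(k, tier) for k in weights))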


    āš ļø IMPORTANT VERSIONS & WARNINGS

    🟡 Visual Only vs. Complete (Zip)

    • Visual (Online Gen Friendly): Use this for quick style transfer.

    • Complete (Zip): Includes the "Fixed" files that connect text properly. Use this for serious work.

    • Note: I fixed Duplo, but the IL & NoobAI base is sensitive. Treat it with care!


    āš ļø Visual Only Usage Note:Ā Don't be scared! Even though this is based on my DMD2 architecture:

    It works perfectly atĀ HIGH STEPSĀ (20-30+) without burning (great for detailing).

    It works perfectly atĀ LOW STEPSĀ (4-8) for speed.

    Wink winkĀ šŸ˜‰


    Thanks so much for your support! ♥

    Description

    I fixed Duplo, but since it's based on IL 0.1, it will corrupt any merge; use it at low steps or simply use Total.


    Comments (17)

    Herrscher_AGGA
    Author
    Feb 1, 2026 · 2 reactions
    CivitAI

    I'll be posting three versions a day, but I'll focus on the basic/popular ones. If you want a LoRA version of a specific Checkpoint, just let me know.

    And one more thing: NoobAI will be the only Checkpoint with Duplo EPS and VPRED versions. I don't know which is better, so you'll see four LoRA versions for that model. Lite and Total will be based on Epsilon.

    aaUn17 · Feb 1, 2026 · 3 reactions
    CivitAI

    Looks great. I can't get it to appear in my lora list on stable diffusion A1111? Is it not compatible there or am I doing something wrong?

    Herrscher_AGGA
    Author
    Feb 1, 2026 · 1 reaction

    Hi, I also use A1111 and it does appear for me. If you type "<" in the prompt, it usually gives you the option to choose hidden files. That said, I use Google Colab, so I don't know if it will work for you.

    aaUn17 · Feb 1, 2026 · 2 reactions

    @Herrscher_AGGA Looks like it recognises the LoRA when I just type in <lora:DMD2_Brain_IllustriousV01_Lite:1>, but I can't seem to get it to show in the LoRA list even after refreshing. "<" didn't show the hidden files. Maybe I need to restart to see if it shows. But it makes a noticeable difference. Thank you for the LoRA!

    Herrscher_AGGA
    Author
    Feb 1, 2026

    @aaUn17 Thanks for using it. By the way, the Duplo version is almost a complete checkpoint, so you don't need to run it at full power; only Lite needs it. I also recommend using the --fp8 argument if you have limited VRAM.

    Sexiam · Feb 1, 2026 · 5 reactions
    CivitAI

    Thanks for all the work. Which tool do you use to figure out which layer is causing issues with a merge?

    Herrscher_AGGA
    Author
    Feb 1, 2026

    Hi, I use Colab. I used to use another script, but after testing it on 20 popular checkpoints like YiffyMIX, I realized that Layer 11 always gets corrupted. Here's the checker:

        import torch
        from safetensors.torch import load_file

        modelos = [
            "/content/drive/MyDrive/qupdt1/HerrscherAGGA2025_UTOPIA-XL_V5-ALPHA.fp16.safetensors",
        ]

        for m_path in modelos:
            print(f"Checking {m_path.split('/')[-1]}...")
            try:
                weights = load_file(m_path)
                # Flag any Layer 11 tensor that contains NaN or Inf values.
                bad = [k for k in weights.keys() if "layers.11" in k and (torch.isnan(weights[k]).any() or torch.isinf(weights[k]).any())]
                if bad:
                    print(f"  ❌ It has {len(bad)} corrupted tensors in Layer 11.")
                else:
                    print("  ✅ It's clean.")
            except Exception:
                print("  ⚠️ It could not be loaded.")

    ---------------------------------------------------------------------------------------------------

    Simple solution:

        import torch
        from safetensors.torch import load_file, save_file
        import gc

        # 1. PATHS
        # We use Illustrious/NoobAI as a donor because we have confirmed it is 100% clean.
        ruta_master_con_nans = "/content/HerrscherAGGA2025_Velvette-XL_V4_Noblesse_FOCUS.safetensors"
        ruta_donante_sano = "/content/noobaiXLNAIXL_epsilonPred11Version.safetensors"
        ruta_salida_parcheada = "/content/HerrscherAGGA2025_Velvette-XL_V4_Noblesse_FOCUS_PATCHED.safetensors"

        print("🩹 Initiating surgical patching of Layer 11...")

        # 2. Load models
        master = load_file(ruta_master_con_nans)
        sano = load_file(ruta_donante_sano)

        # 3. Identify and replace only the broken tensors in Layer 11
        parches = 0
        # We specifically look at the CLIP architecture (conditioner)
        keys_a_revisar = [k for k in master.keys() if "layers.11" in k]

        for key in keys_a_revisar:
            if torch.isnan(master[key]).any() or torch.isinf(master[key]).any():
                print(f"🔄 Repairing a corrupted tensor: {key}")
                if key in sano:
                    # Direct replacement with the healthy value from Illustrious / NoobAI
                    master[key] = sano[key].to(master[key].dtype)
                    parches += 1
                else:
                    # If it's not in the donor (unlikely), we clean it with zeros.
                    master[key] = torch.nan_to_num(master[key], nan=0.0)
                    parches += 1

        # 4. Final verification (safety cleaning of the entire model)
        for key in master.keys():
            if torch.isnan(master[key]).any() or torch.isinf(master[key]).any():
                master[key] = torch.nan_to_num(master[key], nan=0.0)

        # 5. Save
        print(f"\n✅ Surgery complete. {parches} specific patches were applied.")
        save_file(master, ruta_salida_parcheada)
        print(f"💾 Saved to: {ruta_salida_parcheada}")

        # Cleanup
        del master, sano
        gc.collect()

    Sexiam · Feb 1, 2026 · 1 reaction

    @Herrscher_AGGA Interesting. What do you think is causing that layer to get damaged during the merge process?

    Herrscher_AGGA
    Author
    Feb 1, 2026

    @Sexiam The problem with merging is that NaNs are infectious. In math, Normal Value + NaN = NaN. So, if you merge 10 models and just one of them has a fried Layer 11, your entire resulting merge will inherit that corruption. That's why checking and patching it with a clean donor is crucial.
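
    A quick way to see this propagation in practice (a minimal sketch, not part of the author's tooling; the tensors are made-up toy values):

        import torch

        # A clean layer from model A and a layer from model B with one fried (NaN) value.
        a = torch.ones(4)
        b = torch.tensor([1.0, float("nan"), 1.0, 1.0])

        # A typical weighted-sum merge: even a 1% contribution from the broken model
        # poisons the result, because x + NaN = NaN.
        merged = 0.99 * a + 0.01 * b
        print(merged)  # every slot stays ~1.0 except the poisoned one, which is nan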

    Herrscher_AGGA
    Author
    Feb 1, 2026

    @Sexiam Also, it's usually a math issue called arithmetic overflow during the calculation.

    In aggressive finetunes, Layer 11 tends to have extremely high parameter values because it acts as the bottleneck for the skipped Layer 12. It's basically a cup filled to the brim.

    When you merge two models (especially using methods like 'Add Difference'), you are adding those values together (A + B). Since Layer 11 is already near the max limit of the fp16 format, adding even a small value pushes it over the edge. The number becomes too big for the file format, and it instantly turns into a NaN (corrupted tensors in layer 11).

    That's why patching involves replacing it with a 'calmer' layer (like NoobAI's) that has lower values, keeping the math within safe limits.
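
    The overflow itself is easy to reproduce (a minimal sketch; the numbers are illustrative, not values taken from a real checkpoint):

        import torch

        # fp16 tops out at 65504. A weight that is already near that ceiling...
        near_limit = torch.tensor([60000.0], dtype=torch.float16)
        delta = torch.tensor([10000.0], dtype=torch.float16)

        # ...overflows to inf the moment a merge adds even a modest value on top,
        overflowed = near_limit + delta
        print(overflowed)               # tensor([inf], dtype=torch.float16)

        # and follow-up arithmetic on inf quickly produces NaN (e.g. inf - inf).
        print(overflowed - overflowed)  # tensor([nan], dtype=torch.float16)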

    Sexiam · Feb 1, 2026 · 1 reaction

    @Herrscher_AGGA Would outputting the models at fp32 help with that overflow issue?

    Herrscher_AGGA
    Author
    Feb 2, 2026 · 1 reaction

    @Sexiam Aha! Switching to fp32 absolutely solves the immediate math overflow, because the limit is virtually infinite compared to fp16.

    Since Layer 11 is exactly where values tend to explode, fp32 technically "fixes" the crash during the merge process because it can handle those massive numbers.

    However, you're just kicking the can down the road. The underlying issue is that Layer 11 is still outputting values way above the fp16 safe zone. So if you go the fp32 route, you must clamp the values before saving. Otherwise, the moment you convert back to fp16 for distribution, those huge values will hit the limit again and turn right back into NaNs.
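
    A minimal sketch of that clamping step, assuming you merged in fp32 and want to hand the result back to fp16 (illustrative, not the author's exact workflow):

        import torch

        FP16_MAX = torch.finfo(torch.float16).max  # 65504.0

        def to_fp16_safely(state_dict):
            """Clamp fp32 weights into the fp16 range before casting, so values that
            would overflow become ±65504 instead of inf/NaN."""
            out = {}
            for key, tensor in state_dict.items():
                clamped = tensor.float().clamp(-FP16_MAX, FP16_MAX)
                out[key] = torch.nan_to_num(clamped, nan=0.0).to(torch.float16)
            return out

        # Toy example with one exploding value:
        demo = {"w": torch.tensor([70000.0, -1.0])}
        print(to_fp16_safely(demo)["w"])  # tensor([65504., -1.], dtype=torch.float16)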

    aaUn17 · Feb 2, 2026 · 1 reaction

    @Herrscher_AGGA I wish I understood the inner workings of this stuff that well! >.<

    Herrscher_AGGA
    Author
    Feb 2, 2026 · 3 reactions

    @aaUn17 Hi, it's simple. A checkpoint is like a zip file containing separate parts for reading, painting, and decoding. The reading part is called CLIP (this includes CLIP L and OpenCLIP G).

    Think of it this way:

    CLIP: This is what reads your prompt ("1 girl, hat..."). It has several layers; imagine they are pages of an instruction manual. The famous Layer 11 is almost the last page. If that page has ink smudges (NaNs/Errors), the model gets confused right at the end and doesn't know what to draw.

    UNet: This is what actually paints the image based on what the brain (CLIP) told it.

    VAE: This converts the math into the final image you see.

    What I do with my script is simply tear that broken "page 11" out of the Brain and paste a new, clean page from another book (NoobAI) onto it. It's that simple! It's not black magic, it's just replacing a faulty part. 😉
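
    If you want to see those three parts for yourself, you can group a checkpoint's keys by their top-level prefix. A minimal sketch, assuming the usual single-file SDXL layout; the path is hypothetical:

        from collections import Counter
        from safetensors.torch import load_file

        weights = load_file("/content/checkpoint.safetensors")  # hypothetical path

        # Single-file SDXL checkpoints keep the three components under distinct prefixes:
        #   conditioner.*            -> CLIP text encoders (the "reader")
        #   model.diffusion_model.*  -> UNet (the "painter")
        #   first_stage_model.*      -> VAE (turns the math back into an image)
        counts = Counter(key.split(".")[0] for key in weights)
        print(counts)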

    Herrscher_AGGA
    Author
    Feb 1, 2026 · 2 reactions
    CivitAI

    āš ļø A1111 USERS: IF THE CARD/LORA DOES NOT APPEAR. Since this is a LoRa version with convolutions, similar to LoCON/LyCORIS, Automatic1111 might hide it from the interface list if it's not supported. There are solutions for this:

    Manual Trigger: Even though it's invisible, you can always use it by typing <lora:filename:1>.

    ----------------> OR <----------------

    Make sure you have the a1111-sd-webui-lycoris extension installed and update to the latest version of A1111 (v1.9 or higher has better native support).

    barbecue420 · Feb 2, 2026

    I get a bunch of errors like this on reforged when using any of these:

    WARNING:root:lora key not loaded: lora_te1_text_model_text_model.encoder.layers.0.self_attn.k_proj.alpha

    WARNING:root:lora key not loaded: lora_te1_text_model_text_model.encoder.layers.0.self_attn.k_proj.lora_down.weight

    WARNING:root:lora key not loaded: lora_te1_text_model_text_model.encoder.layers.0.self_attn.k_proj.lora_up.weight

    The image does change but since they all look exactly the same between the lite or duo versions I'm guessing it's not actually working.
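
    Those warnings mean the loader couldn't map the key names in the file (note the doubled text_model segment) to modules in the loaded model, so those tensors were silently skipped. A minimal sketch of how one might inspect and rewrite such keys while waiting for a fixed upload; it assumes the doubled segment is the only mismatch, which is not confirmed, and the filenames are illustrative:

        from safetensors.torch import load_file, save_file

        src = "DMD2_Brain_IllustriousV01_DUPLO.safetensors"          # hypothetical filename
        dst = "DMD2_Brain_IllustriousV01_DUPLO_renamed.safetensors"

        weights = load_file(src)

        # Collapse the doubled "text_model_text_model" segment that the loader rejects.
        # If the loader actually expects a different naming scheme, a different rewrite
        # would be needed instead.
        fixed = {key.replace("text_model_text_model", "text_model"): value
                 for key, value in weights.items()}

        print(f"Rewrote {sum('text_model_text_model' in k for k in weights)} text-encoder keys")
        save_file(fixed, dst)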

    Herrscher_AGGA
    Author
    Feb 3, 2026

    @barbecue420 Thanks for letting me know; give me a few minutes to fix them. I'm back, but I have to go out. I've already done most of them, but I'll upload them tomorrow.

    Sincerely, thank you; I hadn't noticed that part and ended up creating a DMD2 filter. You'll have them all tomorrow; Total and Lite are working as they should.

    LORA
    Illustrious

    Details

    Downloads
    203
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/1/2026
    Updated
    5/3/2026
    Deleted
    -

    Files

    HerrscherAGGA2026_DMD2_Brain_IllustriousV01_DUPLO.safetensors

    HerrscherAGGA2026_DMD2_Brain_IllustriousV01_DUPLO_FIXED.safetensors

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.