
    👋 If you like what I do and want to support the development, feel free to buy me a coffee:

    Ko-Fi


    Neural Repair & Portable Checkpoints (LoRA Type).

    Hello! I'm back with something much juicier than ever!

    Originally, I planned to release more Samplers (and I will), but I pivoted to solve a critical flaw I found in the community: many popular merged checkpoints have a corrupted Layer 11.

    • This translates to:

      • āŒ Text Encoder errors (NaNs).

      • āŒ Poor LoRA compatibility.

      • āŒ Massive information loss.

    • And here's the solution: [Anti-Nans + RAM Cleaner]. Uh-huh... the LoRA repair method failed, so I engineered a Runtime Fixer instead. Just paste the provided script into a new cell in your Colab/Notebook, run it before launching the WebUI, and you're good to go.

      • Herrscher Shield: Scans Layer 11 and eliminates NaNs in RAM instantly. No need to download fixed checkpoints.

      • AGGA Optimizer: Aggressively cleans RAM to prevent Colab crashes.
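    The idea behind the Herrscher Shield can be sketched in a few lines. This is a minimal illustration, not the actual script shipped with the post: it assumes a state dict of numpy arrays (a real checkpoint holds torch tensors), and the key prefix `text_model.encoder.layers.11` is an assumption based on CLIP's usual naming.

```python
import numpy as np

def patch_layer11_nans(state_dict, layer_key="text_model.encoder.layers.11"):
    """Scan only Layer 11 and sanitize NaN values in RAM, in place.

    Sketch of the 'Herrscher Shield' concept: repair the corrupted layer
    at runtime instead of downloading a fixed checkpoint.
    """
    patched = []
    for name, tensor in state_dict.items():
        if layer_key in name and not np.all(np.isfinite(tensor)):
            np.nan_to_num(tensor, copy=False, nan=0.0)  # NaN -> 0.0, in place
            patched.append(name)
    return patched

# Toy demo: only the Layer 11 tensor gets repaired.
sd = {
    "text_model.encoder.layers.11.mlp.fc1.weight": np.array([1.0, np.nan, 3.0]),
    "text_model.encoder.layers.10.mlp.fc1.weight": np.array([2.0, 4.0]),
}
fixed = patch_layer11_nans(sd)
```

    Patching in place (rather than copying tensors) is what keeps RAM usage flat, which is the same motivation behind the AGGA cleanup step.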


    Okay, now let's talk about these LoRAs (Total / Duplo / Ultra):

    These LoRAs function as "Structural Converters."

    Think of them as a high-end "Cosplay" for your checkpoints: they allow a lightweight model to adopt the exact visual DNA, intelligence, and stylistic precision of a massive 6GB checkpoint (like Pony or Illustrious).

    Instead of dealing with architectural instability or NaNs, I have distilled the core features of these giant models into optimized 500MB-1GB files. They let you inject the prompt-understanding and "soul" of a heavy base model into any other refined checkpoint without the overhead of downloading or loading 6GB files every time.

    • In conclusion: Now you have a tool for every need:

      • Do you just want a refined DMD? -> Total.

      • Do you want the information and style? -> Duplo.

      • Do you want it all? A 1:1 copy -> Ultra.


    🟢 TOTAL (Concept Injector):

    • What does it extract?: Text Encoders + attn2 (Cross-Attention).

      • Complete Version: Extracts Text Encoders + attn2 (Cross-Attention). It’s the "Brain" of the model.

      • Visual Only Version: Extracts only the Style (UNet). | DMD2 Pure.

    • What does it do?: Associates words with concepts.

      • Total knows that "Miku" means "Teal hair, long pigtails".

    • Result: Corrects what is drawn, but the "brushstroke" remains from your base model.

    🟡 DUPLO (Structure & Geometry):

    • What does it extract?: Text Encoders + attn2 + attn1 (Self-Attention).

    • What does it do?: Controls geometry and spatial composition. attn1 is where the "shape" of the style resides (eye size, body proportions, composition).

    • Result: The image gets the structure of the source model (e.g., Pony), but the rendering (skin, lighting) is a hybrid.

      • Best for fixing anatomy while keeping your checkpoint's texture.

    🔴 ULTRA (Full Replica):

    • What does it extract?: EVERYTHING (attn, ff, proj, te).

    • What does it do?: Copies the FeedForward (FF) layers too, which determine the Render Style (lighting, line weight, shading).

    • Result: A complete conversion. The base model visually disappears and becomes a perfect replica of the source.
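    The three tiers can be pictured as increasingly permissive key filters over a model's state dict. This is a hedged sketch: the substrings below follow common Stable Diffusion block naming (attn1/attn2/ff in the UNet, text_model in the text encoder), but the author's actual extraction tool and exact key names are not shown in the post.

```python
# Each tier extracts a superset of the previous tier's keys.
TIER_PATTERNS = {
    "total": ("attn2", "text_model"),                         # TE + cross-attn
    "duplo": ("attn2", "attn1", "text_model"),                # + self-attn
    "ultra": ("attn2", "attn1", "ff", "proj", "text_model"),  # everything
}

def select_keys(keys, tier):
    """Return the state-dict keys a given tier would extract."""
    patterns = TIER_PATTERNS[tier]
    return [k for k in keys if any(p in k for p in patterns)]

keys = [
    "unet.down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q.weight",
    "unet.down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_q.weight",
    "unet.down_blocks.0.attentions.0.transformer_blocks.0.ff.net.0.proj.weight",
    "text_model.encoder.layers.11.self_attn.q_proj.weight",
]
total = select_keys(keys, "total")  # cross-attention + text encoder only
duplo = select_keys(keys, "duplo")  # adds the attn1 (geometry) key
ultra = select_keys(keys, "ultra")  # grabs all four keys
```

    The nesting (Total ⊂ Duplo ⊂ Ultra) is why the tiers read as "brain only" -> "brain + skeleton" -> "full replica".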


    āš ļø IMPORTANT VERSIONS & WARNINGS

    🟡 Visual Only vs. Complete (Zip)

    • Visual (Online Gen Friendly): Use this for quick style transfer.

    • Complete (Zip): Includes the "Fixed" files that connect text properly. Use this for serious work.

    • Note: I fixed Duplo, but the IL & NoobAI base is sensitive. Treat it with care!


    āš ļø Visual Only Usage Note:Ā Don't be scared! Even though this is based on my DMD2 architecture:

    It works perfectly atĀ HIGH STEPSĀ (20-30+) without burning (great for detailing).

    It works perfectly atĀ LOW STEPSĀ (4-8) for speed.

    Wink winkĀ šŸ˜‰


    Thanks so much for your support! ♥

    Description

    Hi, I'm back with pixel art! This time I used a specific pixel art checkpoint as a base (Illustrious "PixelArt" from HaDeS), but I adapted it to my own workflow.

    The main challenge is that Velvette doesn't do pixel art natively (I think that's the simplest proof that this method actually works). It was tricky because my previous workflow kept the DMD2 base at 100%, which made the three LoRAs far too strong for specific concept/style mini-checkpoints. However, just when I was about to give up, I decided to reduce the DMD2 strength to 70% (just like I did with Voxel). This made the LoRA work great at 16-18 steps and allowed higher CFGs, but I wanted to achieve the same result with fewer steps without losing that intensity.

    So, I started playing around with the math of the Up and Down vectors. I simply multiplied them directly, since "cooking" (re-training) an extraction taken from a checkpoint only introduces noise (I already tried that, and it just doesn't work the same).
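    The post doesn't show the exact math, so here is one plausible reading of "multiplying the Up and Down vectors directly": since a LoRA's weight delta is delta_W = up @ down, scaling each factor by sqrt(strength) bakes a strength multiplier permanently into the file, with no re-training ("cooking") and therefore no added noise. The function name is mine, not the author's.

```python
import numpy as np

def bake_strength(lora_up, lora_down, strength):
    """Bake a strength multiplier into a LoRA's up/down weight pair.

    Scaling each factor by sqrt(strength) scales the product
    (up @ down) by exactly `strength`.
    """
    s = np.sqrt(strength)
    return lora_up * s, lora_down * s

rng = np.random.default_rng(0)
up = rng.standard_normal((8, 4))
down = rng.standard_normal((4, 8))

# Bake in the 70% DMD2 strength from the workflow described above.
new_up, new_down = bake_strength(up, down, 0.7)
```

    After baking, loading the LoRA at strength 1.0 behaves like loading the original at 0.7, which matches the goal of keeping the reduced-strength result without juggling multipliers at generation time.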

    In short: it was a really fun experiment, and now I have a new tool capable of slashing generation time from 16 steps down to just 6 steps for pixel art. It used to be complicated to pull off, but we are improving. Hope you enjoy it! xD