CivArchive
    ChadMix - NoobAI / IllustriousXL - v2.0
    NSFW

    V2.5 is based on a perpendicular merge of Noob V-Pred 0.5 and IterComp.

    I recommend Euler A at 5.5 CFG, Align Your Steps 11, 30 steps; or use Euler A CFG++ at 1.6-2 CFG if you have it.

    This model was trained with V-Prediction and ZTSNR; you may have to enable these options manually.

    I updated the safetensors and tested that it automatically applies v-pred & ztsnr on the latest Forge, ReForge, and ComfyUI. If you still have compatibility problems with the new file, first try updating your program, then try the options below:

    For ReForge: enable Advanced Model Sampling in Discrete mode, select v_prediction, and check Zero SNR.

    For Forge: enable Zero Terminal SNR in settings, then create a .yaml text file next to the model file, with the same name as the model file, containing:

    model:
      params:
        parameterization: "v"
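
    The .yaml step above can be scripted; here is a minimal sketch that writes the file next to the checkpoint. The folder path is an assumption (Forge's default checkpoint directory), so adjust it to wherever the model actually lives:

    ```python
    # Sketch of the Forge workaround: write a v-prediction .yaml whose basename
    # matches the checkpoint (chadmixNoobai_v20.safetensors -> .yaml).
    from pathlib import Path

    models_dir = Path("models/Stable-diffusion")  # assumed Forge checkpoint folder
    models_dir.mkdir(parents=True, exist_ok=True)

    config = 'model:\n  params:\n    parameterization: "v"\n'
    (models_dir / "chadmixNoobai_v20.yaml").write_text(config)
    ```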

    For ComfyUI: Use the ModelSamplingDiscrete node w/ v_prediction and zsnr.
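
    In API-format workflow JSON, that node would look roughly like the fragment below; the node ID and the ["4", 0] reference to an upstream checkpoint loader are placeholders, not part of the author's workflow:

    ```json
    {
      "11": {
        "class_type": "ModelSamplingDiscrete",
        "inputs": {
          "model": ["4", 0],
          "sampling": "v_prediction",
          "zsnr": true
        }
      }
    }
    ```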

    I'm personally using DPM++ 3M SDE CFG++ (available here for ComfyUI) along with Align Your Steps, 30 steps, and Kohya Deep Shrink (block 3, 1.5 downscale equal to my scale factor for gen resolution, 0.4 end_percent). However, the last step is Euler A, which cleans up the output nicely.
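
    The "scale equal to my scale factor for gen resolution" pairing can be sketched as a small helper. Everything here (the function name, the 64-pixel snapping, the 1024 base) is my illustration under assumptions, not the author's code:

    ```python
    import math

    # Hypothetical helper: pick a generation resolution whose area is scaled by
    # `scale` over the SDXL-class base, and return the matching Deep Shrink
    # downscale factor so the two stay in sync.
    BASE = 1024  # assumed native resolution class for Illustrious/Noob (SDXL-derived)

    def deep_shrink_params(scale, aspect=(1, 1)):
        """Return (width, height, downscale_factor); dims snapped to multiples of 64."""
        w_ratio, h_ratio = aspect
        # keep total pixel count near (BASE * scale)^2 while honoring the aspect ratio
        unit = BASE * scale / math.sqrt(w_ratio * h_ratio)
        snap = lambda v: int(round(v / 64) * 64)
        return snap(unit * w_ratio), snap(unit * h_ratio), scale

    print(deep_shrink_params(1.5))          # -> (1536, 1536, 1.5)
    print(deep_shrink_params(1.5, (4, 3)))  # -> (1792, 1344, 1.5)
    ```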

    Here's a copy of my full ComfyUI workflow. It's messy and does some weird things.

    This all started when I realized my meme Gigachad lora was indeed enhancing contrast and character details in many cases and stabilizing posing. With more testing, I found the effect inconsistent: it often cooked fine detail into noise, reduced full-frame coherency (especially in wide shots), and dragged traditional media toward photorealism. Thus I balanced the Gigachad dataset with a collection from Danbooru's oil painting medium toplist, a selection of regular anime pics, and even one of my own personal photographs. And I thought that 0.75 was ripe for an aesthetic tune.

    1.0 has had a total of 29 epochs of training: 14 at moderate LR (hand-picked out of a 20-epoch session), then an extra 15 at low LR. I picked NoobIEater 2.0 by Jyrrata as the base checkpoint. The IterComp merges are a promising concept; I wanted to preserve the Noob 0.75 text encoder, and it looked like Jyrrata had done good legwork testing the best merge out of those.

    The results were actually a lot better than I expected. None of the showcase images have been inpainted or upscaled, they were all generated directly at the resolution you see with the help of Kohya Deep Shrink. The model is playing nice with my 0.5 PPK lora, actually getting better results from the first seed than I ever did out of vanilla 0.5.

    Gigachad meme is part of the training set, with trigger phrases "the chad smile", "the serious gigachad", "sitting gigachad lean".

    Description

    1.0 noob base

    FAQ

    Checkpoint
    Illustrious

    Details

    Downloads
    161
    Platform
    CivitAI
    Platform Status
    Deleted
    Created
    4/24/2025
    Updated
    4/24/2025
    Deleted
    4/24/2025
    Trigger Words:
    the chad smile
    the serious gigachad
    sitting gigachad lean

    Files

    chadmixNoobai_v20.safetensors

    Mirrors

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.