
    The LHC (Large Heap o' Chuubas) aims to be the model for all of your VTuber needs. There are other minor goals, like improving aesthetics, backgrounds and ???????, but the main goal is to offer a LoRA-less option for generating VTubers.

    Alpha V0.3.1

    Due to some mistakes during the training of alpha v0.3, the model has diverged significantly from NoobAI. Nonetheless, it is a capable model with a good understanding of most of the 79 trained VTubers and a passable understanding of the rest. For an overview, refer to:

    https://huggingface.co/Jyrrata/LHC_XL/blob/main/characters/alpha03.txt

    and https://civitai.com/posts/9579061 for a visual guide to the basic character comprehension of the two v0.3 models. Many characters work with only their activation tag, though some require a few (or a lot of) additional tags to work.

    Alpha V0.3 and V0.3.1 were trained on the NoobAI-XL v-pred 0.6 version.

    If you want to use V0.3, it can be found here: https://huggingface.co/Jyrrata/LHC_XL/blob/main/alpha/v03/LHC_alphav03-vpred.safetensors

    The Hugging Face repo additionally contains an eps version and a version trained on rouwei-vpred from intermediate datasets. Refer to the character .txt files for an overview of the v0.2.5 knowledge.

    Alpha V0.2

    Same general approach as v0.1; however, the dataset has been expanded by 10 additional VTubers for a total of 28, and the final two epochs include an experimental dataset of 1200 images covering a wide base of concepts, intended to realign and improve the model aesthetically.

    Included vtubers this time are:

    • aradia ravencroft

    • bon \(vtuber\)

    • coni confetti

    • dizzy dokuro

    • dooby \(vtuber\)

    • haruka karibu

    • juniper actias

    • koganei niko

    • malpha ravencroft

    • mamarissa

    • michi mochievee

    • rindo chihaya

    • rin penrose

    • atlas anarchy

    • dr.nova\(e\)

    • eimi isami

    • isaki riona

    • jaiden animations

    • juna unagi

    • kikirara vivi

    • mizumiya su

    • trickywi

    • tsukinoki tirol

    • alias nono

    • biscotti \(vtuber\)

    • mono monet

    • rem kanashibari

    • yumi the witch

    In addition to adding new ones, the datasets for some of the old VTubers have been redone, especially trickywi, juna unagi and juniper actias. Juniper has also received two new tags, juniper actias \(new design\) and juniper actias \(old design\), which try to separate her designs into two distinct phases. This is experimental and might not be carried forward to future versions.

    A showcase of the base character tag understanding is here. Some VTubers don't work with only their character tag; instead, you will need additional descriptive tags.

    Alpha V0.1

    This model is currently still in alpha. The current state is not indicative of all future capabilities, but rather just a proof of concept.

    A basic test model, with nice results nonetheless. Trained on roughly 1000 images, mostly featuring 18 VTubers that the base NoobAI model did not know well. This model is based on the NoobAI-XL v-pred-0.5 model.

    As a v-pred model, it will not work in all WebUIs, only in those that have implemented v-prediction sampling. The necessary keys have been set in the model's state dict so that UIs like ComfyUI and reForge apply the required settings automatically. If your UI does not, activate v-pred sampling manually; enabling ZTSNR is recommended as well.

    The newly added/enhanced vtubers are (listed by their trained tags):

    • Aradia Ravencroft

    • Malpha Ravencroft

    • Mamarissa

    • Koganei Niko

    • Rindo Chihaya

    • Mizumiya Su

    • Isaki Riona

    • Kikirara Vivi

    • Coni Confetti

    • Dizzy Dokuro

    • Dooby (Vtuber)

    • Haruka Karibu

    • Juna Unagi

    • Juniper Actias

    • Michi Mochievee

    • Rin Penrose

    • Trickywi

    • Jaiden Animations

    Additionally included, in particular, were Nerissa Ravencroft and Vienna (Vtuber), as well as many images featuring two or more characters at once.

    For a showcase of the base character comprehension, check out this post.

    Recommended Settings:

    Sampler: Euler

    CFG: 4-5

    Steps: 25+

    Training Details:

    Trained as a full-dimension LoKr, following the methodology of the KohakuXL series, with the LyCORIS settings found here.

    Specific parameters:

    • Dataset: 1035 images

    • Batch size: 2

    • Gradient Accumulation: 4

    • Training steps: ~6400

    • Training Epochs: ~50

    • Unet LR: 3e-5 (lowered to 2e-5 for the last 12 epochs)

    • TE LR: 2e-5 (lowered to 1e-5 for the last 12 epochs)

    • Optimizer: AdamW 8-bit

    • Constant scheduler
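Assuming the run used kohya's sd-scripts with the LyCORIS kohya module (both credited below), the parameters listed above might translate into an invocation roughly like this; the base model filename is a placeholder, and the LoKr specifics (dim/factor) would come from the linked LyCORIS settings file rather than being shown here:

```shell
# Hypothetical sd-scripts command reconstructing the listed parameters.
# Initial LRs shown; per the notes, both were lowered for the last 12 epochs.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path="noobai-xl-vpred-0.5.safetensors" \
  --v_parameterization \
  --network_module=lycoris.kohya \
  --network_args "algo=lokr" \
  --train_batch_size=2 \
  --gradient_accumulation_steps=4 \
  --max_train_epochs=50 \
  --unet_lr=3e-5 \
  --text_encoder_lr=2e-5 \
  --optimizer_type=AdamW8bit \
  --lr_scheduler=constant
```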

    Special Thanks:

    kblueleaf (Kohaku Blueleaf): for the Lycoris library and the resources on finetuning via LoKr

    OnomaAI & Laxhar Dream Lab: for amazing base models

    kohya-ss: for sd-scripts
