RDBT PoC
Contains standalone implementations from the RDBT model, mainly as a proof of concept.
By default they were trained on the pretrained Anima model, and they work on any derivative model (except, probably, the RDBT model itself, due to the overlapping training target).
Stabilizer
This LoRA comes from reinforcement learning on the model itself. It reduces "noise" in the sampling process and improves output stability, accuracy, and quality.
There is no training data; this model learns no new concepts, so it won't change anything else (styles, characters, etc.).
Sharing merges that use this distilled model is not allowed.
===
Recommended settings:
strength: 0.5~1
CFG scale (lower than usual): 2~4
===
Note for users of "merged" base models: if you are experiencing oversaturation, the likely cause is stacking. I tested the recently uploaded "merged" models; most of them (7 of 8) have already merged in distilled models. You can't stack distilled LoRAs on top of each other the way you can with style LoRAs; they work in different ways. Lower the LoRA strength, or switch to a "trained" base model, which has a clean model state.
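A toy sketch of why stacking causes oversaturation (the numbers and the simple additive-merge model here are illustrative assumptions, not the real model weights): LoRA merging is roughly additive, W' = W + strength * dW, so applying a distilled LoRA on a base model that already merged the same distilled delta applies that delta twice.

```python
def apply_lora(weights, delta, strength):
    """Additively merge a LoRA delta into weights at the given strength."""
    return [w + strength * d for w, d in zip(weights, delta)]

base = [10.0, 20.0]       # hypothetical "trained" base weights (clean state)
distilled = [4.0, -2.0]   # hypothetical distilled-LoRA delta

# Clean base + LoRA at strength 1.0: delta applied exactly once.
clean = apply_lora(base, distilled, 1.0)
print(clean)  # [14.0, 18.0]

# A "merged" base model that already baked in the delta, plus the LoRA
# again at strength 1.0: delta applied twice -> oversaturation.
merged_base = apply_lora(base, distilled, 1.0)
stacked = apply_lora(merged_base, distilled, 1.0)
print(stacked)  # [18.0, 16.0]

# Lowering the LoRA strength on a merged base keeps the total effect
# closer to a single application.
reduced = apply_lora(merged_base, distilled, 0.5)
print(reduced)  # [16.0, 17.0]
```

This is why the note recommends lowering strength on merged bases: the distilled contribution adds up, unlike style LoRAs whose targets usually don't coincide with what is already merged into the base.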
FAQ
Q: How does this affect diversity? Also, are you supposed to use this only alongside RDBT base models / Anima 3 + the RDBT LoRA, or does it also work on raw Anima and merged or trained variants?
A: There is no step distillation / DMD2 here.
It works on all "trained", non-distilled models. Merged base models might have problems because they have already merged in distilled models.
And it probably shouldn't be used on RDBT models, because of the overlapping training targets.