The LoRA was trained on only two transformer blocks with a rank of 32, which keeps the file size small without a loss of quality.
Because the LoRA is applied to only two blocks, it is also less prone to bleeding effects. Many thanks to 42Lux for their support.
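For readers who want to reproduce a similar block-restricted setup, here is a minimal sketch using the peft library. The block indices (7 and 20) and module names are placeholders for illustration only, not the exact blocks or layers this LoRA was trained on.

```python
from peft import LoraConfig

# Hypothetical config: LoRA of rank 32 applied to the attention projections
# of just two transformer blocks. Block indices and module paths are
# placeholders, not the author's exact training setup.
lora_config = LoraConfig(
    r=32,                       # LoRA rank, as stated in the description
    lora_alpha=32,              # common choice: alpha equal to rank
    init_lora_weights="gaussian",
    target_modules=[
        "transformer_blocks.7.attn.to_q",
        "transformer_blocks.7.attn.to_k",
        "transformer_blocks.7.attn.to_v",
        "transformer_blocks.7.attn.to_out.0",
        "transformer_blocks.20.attn.to_q",
        "transformer_blocks.20.attn.to_k",
        "transformer_blocks.20.attn.to_v",
        "transformer_blocks.20.attn.to_out.0",
    ],
)

# transformer.add_adapter(lora_config)  # e.g. on a diffusers transformer model
```

Restricting target_modules to a couple of blocks is what keeps the resulting file so small, since only those layers get trainable adapter weights.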
Details
Downloads: 195
Platform: SeaArt
Platform Status: Available
Created: 9/4/2024
Updated: 9/4/2024
Deleted: -
Trigger Words: Twiggy