AstolfoCarmix-VPredXL: AstolfoKarmix - Karmix-XL + VPred Models
An experimental merge that mates two VPred models built on different base models.
Discord: "Good luck".
Specification
Merge log / recipe: e2e script, log. 11 models + 1 selected for the VAE.
Algorithm used: "Modded DELLA" (read the log from the bottom). Parameters are in the e2e script.
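The actual "Modded DELLA" settings live in the e2e script; purely as an illustration, a vanilla DELLA-style step drops low-magnitude delta elements with higher probability, rescales the survivors, and averages the pruned deltas into the base. A minimal NumPy sketch (function names and `keep_frac` are assumptions, not the real merge settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def della_prune(delta, keep_frac=0.3):
    """Magnitude-aware random pruning of a task vector (DELLA-style sketch).

    Elements with larger |delta| get a higher keep probability; survivors
    are rescaled by 1/p_keep so the delta is preserved in expectation.
    """
    ranks = np.argsort(np.argsort(np.abs(delta)))  # 0 = smallest magnitude
    p_keep = np.clip(keep_frac * 2.0 * (ranks + 1) / (delta.size + 1), 0.0, 1.0)
    mask = rng.random(delta.size) < p_keep
    return np.where(mask, delta / np.maximum(p_keep, 1e-12), 0.0)

def della_merge(base, finetunes, keep_frac=0.3):
    """Merge several finetunes into the base via pruned, averaged deltas."""
    deltas = [della_prune(ft - base, keep_frac) for ft in finetunes]
    return base + np.mean(deltas, axis=0)
```

In a real merge this runs per tensor over the whole state dict; the sketch shows one flat parameter vector.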
87.5%-100% NoobAI vpred and its finetunes / merges (measured in L2 norm).
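That percentage comes from an L2-norm comparison of weights. One common way to estimate such a ratio (a sketch, not the author's actual measurement script): if a merge is roughly `r*A + (1-r)*B`, then `r` can be recovered by projecting the merged weights onto the difference `A - B`.

```python
import numpy as np

def mix_ratio(merged, a, b):
    """Estimate r such that merged ~= r*a + (1-r)*b, via L2 projection."""
    d = (a - b).ravel()
    return float(np.dot((merged - b).ravel(), d) / np.dot(d, d))
```

Run per tensor and aggregate to get a per-model composition estimate.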
How to use
NoobAI vpred: You know the drill.
SDXL VPred in A1111: download this yaml and rename it to match the checkpoint's filename (keeping the .yaml extension).
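A1111 picks up a config automatically when it sits next to the checkpoint with the same base name. A self-contained sketch of that copy/rename step (the paths and yaml contents below are placeholders; point them at your real webui models folder and the downloaded config):

```python
import shutil
import tempfile
from pathlib import Path

# Placeholder layout; substitute your actual A1111 models directory.
models_dir = Path(tempfile.mkdtemp())
ckpt = models_dir / "AstolfoCarmix-VPredXL.safetensors"
ckpt.touch()                                 # stands in for the real checkpoint
yaml_src = models_dir / "downloaded.yaml"    # the v-prediction config you grabbed
yaml_src.write_text("model:\n  params:\n    parameterization: v\n")

# Copy the yaml next to the checkpoint, matching its base name.
shutil.copy(yaml_src, ckpt.with_suffix(".yaml"))
```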
Description
Based on AK-Evo 2EP: first 57 EP on the 6k Astolfo dataset, then 0.5 EP on the full 12.4M dataset, and finally merged with the base model at ratio 0.82.
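The final step reads as a plain weighted merge at 0.82. Assuming the ratio weights the trained model (an assumption; the e2e script has the real call), the operation is just linear interpolation of the state dicts:

```python
import numpy as np

def lerp_merge(trained, base, ratio=0.82):
    """Weighted merge: `ratio` toward the trained model, the rest from base.
    Which side the ratio weights is an assumption; check the merge log."""
    return ratio * trained + (1.0 - ratio) * base
```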
The idea is similar to the previous version (0.55), but this time the dataset alignment is confirmed.
Planned for only 0.2 EP, but it eventually ran another 0.5 EP. Next will be RF.
For ComfyUI users, use this workflow as a workaround. Otherwise, just head to my non-Evo 2.5 EP version (ratio 1.00) with full contents.