A small batch file to resize .safetensors LoRAs in bulk.
The resize logic and its dependencies are taken from https://github.com/bmaltais/kohya_ss; I only added the batch file that calls the Python script.
Usage:
2.01: I dun goofed. OK, here is v2.0 in bug-free working condition.
2.0: Extract the archive to a location of your choice and place all files you want resized in the big, med, or small subfolder. Execute res-lora.bat. The conversion uses the settings suggested by firemanbrakeneck: big = sv 0.94 / rank 32, med = sv 0.92 / rank 24, small = sv 0.9 / rank 16.
1.0: Extract the archive to a location of your choice and place all files you want resized in the res-lora folder. Execute res-lora.bat. The conversion uses the default settings of kohya_ss' LoRA resizer (fp16 precision, CUDA, method sv_fro with a value of 0.9), except for the rank, where I use 64, and places the resized files in the resized subfolder. All these settings can be changed in the batch call, together with the targeted file extension (in case you have LoRAs in another format).
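For reference, the batch file ultimately invokes kohya's resize script. Here is a minimal Python sketch of building that command line with the v1.0 defaults described above; the file paths are placeholders, and the flag names are the ones used by kohya_ss' networks/resize_lora.py as far as I know:

```python
# Sketch: build the resize command with the v1.0 defaults described above.
# Paths are placeholders; flag names follow kohya_ss' networks/resize_lora.py.

def build_resize_cmd(model_path, out_path,
                     rank=64, method="sv_fro", value=0.9,
                     precision="fp16", device="cuda"):
    """Return the argument list for kohya's resize_lora.py."""
    return [
        "python", "networks/resize_lora.py",
        "--model", model_path,          # input .safetensors lora
        "--save_to", out_path,          # where the resized file goes
        "--new_rank", str(rank),        # upper bound on the new rank
        "--dynamic_method", method,     # dynamic resize method, e.g. sv_fro
        "--dynamic_param", str(value),  # e.g. 0.9 retention for sv_fro
        "--save_precision", precision,  # fp16 output
        "--device", device,             # cuda or cpu
    ]

cmd = build_resize_cmd("res-lora/my_lora.safetensors",
                       "resized/my_lora.safetensors")
print(" ".join(cmd))
```

You could pass this list to subprocess.run() per file; that is essentially what the batch loop does.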
Known Issues:
Not all LoRAs can be converted; my best guess is that newer types are not supported yet.
If it cannot find your python.exe, make sure its path was added correctly to the Windows PATH system variable.
Description
Now in working condition!
Comments (9)
Are there ways of making test scripts that "fix" 128-dimensional LoRAs into smaller forms and see how the quality is affected (e.g. A1111 extensions or ComfyUI testing pipelines)?
You could adjust this script to export not just one smaller form but several, and then test them against each other and the original with an X/Y/Z plot.
Hey @eurotaku, so I was brainstorming in another comment (https://civitai.com/models/274396?modelVersionId=309303&dialog=commentThread&commentId=340378) and would appreciate your thoughts on the matter.
Basically, just as videos are offered in various formats on many media websites, heavy LoRAs could be automatically compressed by the server and the alternatives offered for download. Something like: if the LoRA falls between certain thresholds (say 50-200 MB, 200-500 MB, 500 MB+), try to compress it using one of the presets, high/med/low quality respectively (heavier LoRAs start with more drastic compression); if it reaches a target retention threshold, e.g. 90-95%, stop; otherwise try a higher preset. Upload any compressed files alongside the source model, possibly with the retention value for the user's information, though of course the quality in practice is what matters.
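The cascade described above could be sketched like this; the thresholds, preset names, and target value are just the ones from the comment, purely hypothetical, not anything Civitai actually implements:

```python
# Hypothetical sketch of the suggested server-side compression cascade.
# Ordered from most drastic compression (index 0) to gentlest.
PRESETS = ["high_compression", "med_compression", "low_compression"]

def starting_preset(size_mb):
    """Heavier loras start with more drastic compression."""
    if size_mb >= 500:
        return 0          # 500 MB+
    if size_mb >= 200:
        return 1          # 200-500 MB
    if size_mb >= 50:
        return 2          # 50-200 MB
    return None           # small enough, skip compression

def compress_cascade(size_mb, retention_of, target=0.90):
    """Try presets from drastic to gentle until one reaches the target
    retention; return (preset, retention) or None if none qualify."""
    start = starting_preset(size_mb)
    if start is None:
        return None
    for preset in PRESETS[start:]:
        retention = retention_of(preset)  # caller measures quality retention
        if retention >= target:
            return preset, retention
    return None  # nothing met the target; offer only the original
```

Here retention_of stands in for whatever quality measurement the server would run; that part is the hard bit in practice.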
In terms of the load, I think it's not too bad for 1.5; XL could be a problem, but there are relatively few LoRAs there. Perhaps make it an on-demand service on par with generation.
I know there are some standard venues for suggestions, but I'm more the one on one type.
Yeah, there are many ideas like this, e.g. automatic PickleTensor-to-safetensors conversion or automatic fp32/fp16 variants, but sadly they are very low on the priority list.
I'm sure "big = sv 0.94 / rank 32" means something instantly to some people; it certainly doesn't to me. What am I supposed to be looking at in the Extract LoRA tab to test... what is "sv", and which rank are you talking about? Network, conv?
I was also interested, so I asked ChatGPT (someone knowledgeable, please correct this if it's wrong): SV refers specifically to the singular values of a matrix. The rank of a matrix is the number of linearly independent rows or columns in the matrix; in other words, it tells you how many dimensions the data spans in its vector space. A matrix with full rank has linearly independent rows and columns, while a matrix with lower rank has some rows or columns that can be written as linear combinations of others.
For a Linux script: run bash res-lora.sh after issuing the following in the installation dir:
sed 's/\r//;s/\.exe//;s/\\/\//g;s/%%A IN (/f in /;s/) do (/; do/;s/%%~A/$f/g;s/)/done/;s/pause//;s/cd\./cd ./' res-lora.bat > res-lora.sh
Thanks, this works great, but it requires installing some missing modules.
Thanks for the guide! I also had to install einops in addition to the modules listed there.
