The LoRA extracted from the checkpoint at https://civarchive.com/models/772865?modelVersionId=864441
The extracted Schnell version of the LoRA differs significantly from the base model, and I'm unsure why, even at dim 640 (the model weighs 11.4 GB in fp32). The results are closer to dim 128 than to the base model.
Comments (6)
Nice, thanks. Request for the Schnell version too?
I'm trying to extract the LoRA, but for some reason it comes out overcooked and doesn't resemble the original at dim 128 or dim 256 (with no significant difference between the two).
agflux_schnell - Imgsli
@egordubrovskiy9112843 I see you posted a Schnell version, but from the comment it sounds like you used the pruned model. You always need to do a like-to-like extraction (i.e., the model as trained vs. the original model it was trained from). On top of that, using pruned/distilled models means the weight information may already be 'fragile' (think of the difference between "explain it like I'm 5" and college level).
How does your Dev LoRA work on Schnell, compared to the Schnell checkpoint? I have pretty good luck using Dev LoRAs with Schnell. (Those who are concerned about licensing are the only ones who might not want to do this; it's a legal limbo there.)
@scruffynerf I tried to extract the LoRA, but with the fp16 base checkpoint rather than fp8. The result is the same.
AGFlux - Imgsli
Nice, we need more of these... reducing an 11 GB checkpoint to 600 MB while keeping 95% of the look means a HUGE saving in disk space and loading time, plus the ability to combo things, etc.
There are 180+ Flux checkpoints, and 150 of them deserve this sort of treatment.
What tool did you use?
I used the kohya-ss script from the sd3 branch:
https://github.com/kohya-ss/sd-scripts/blob/sd3/networks/flux_extract_lora.py
networks\flux_extract_lora.py --save_precision float --model_org "BASE_MODEL" --model_tuned "FINETUNED" --dim 256 --device cuda --clamp_quantile 1.0 --save_to "LORA_OUT"
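Conceptually, this extraction is an SVD of the weight difference between the tuned and original model, truncated to the requested dim, with values optionally clamped at a quantile. A minimal numpy sketch of the idea (function names and shapes are illustrative, not the actual kohya implementation):

```python
import numpy as np

def extract_lora(w_org, w_tuned, dim, clamp_quantile=1.0):
    """Approximate a fine-tuned weight delta with a rank-`dim` LoRA pair."""
    diff = w_tuned - w_org
    u, s, vh = np.linalg.svd(diff, full_matrices=False)
    # Keep only the top `dim` singular directions
    u, s, vh = u[:, :dim], s[:dim], vh[:dim, :]
    # Split the singular values evenly between both factors
    up = u * np.sqrt(s)               # lora_up:   (out_features, dim)
    down = np.sqrt(s)[:, None] * vh   # lora_down: (dim, in_features)
    # Clamp extreme values at the given quantile (1.0 = no-op)
    for m in (up, down):
        limit = np.quantile(np.abs(m), clamp_quantile)
        np.clip(m, -limit, limit, out=m)
    return up, down

# Usage on a toy layer
w_org = np.random.randn(64, 48)
w_tuned = w_org + 0.01 * np.random.randn(64, 48)
up, down = extract_lora(w_org, w_tuned, dim=8)
approx = up @ down  # rank-8 approximation of the weight delta
```

This also hints at why low dims can look "overcooked": everything below the top `dim` singular directions of the delta is simply thrown away.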
Then the LoRA resize script: https://github.com/kohya-ss/sd-scripts/blob/sd3/networks/resize_lora.py
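Resizing can be thought of as re-running the SVD on the product of the existing LoRA pair and truncating to a smaller rank. A hypothetical sketch of that step (again illustrative, not the script's actual code):

```python
import numpy as np

def resize_lora(up, down, new_rank):
    """Shrink an existing LoRA pair to a smaller rank via SVD of its product."""
    u, s, vh = np.linalg.svd(up @ down, full_matrices=False)
    u, s, vh = u[:, :new_rank], s[:new_rank], vh[:new_rank, :]
    return u * np.sqrt(s), np.sqrt(s)[:, None] * vh

# Shrink a toy rank-16 pair down to rank 8
up16 = np.random.randn(64, 16)
down16 = np.random.randn(16, 48)
up8, down8 = resize_lora(up16, down16, new_rank=8)
```

Extracting at a generous dim first and then resizing down tends to be safer than extracting at a small dim directly, since the initial SVD sees the full-precision delta.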