The V2 update is a retrain on the artist's latest works, with better settings and a cleaned dataset. It can give some good backgrounds and should perform better overall.
Comments (6)
Thank you so much
Thank you, but it seems like the rank-32 versions don't work with Hugging Face diffusers. I use your 40hara and ke-ta LoRAs, which do work. Could you consider adding the other versions as well?
Sure, I'll add the full version a little later. I didn't know the resized one could cause such an issue; any idea why?
@bakariso I don't know much about it, this is the error message:
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
size mismatch for text_model.encoder.layers.11.self_attn.k_proj.lora_A.kantoku.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([32, 768]).
size mismatch for text_model.encoder.layers.11.self_attn.k_proj.lora_B.kantoku.weight: copying a param with shape torch.Size([768, 1]) from checkpoint, the shape in current model is torch.Size([768, 32]).
size mismatch for text_model.encoder.layers.11.self_attn.v_proj.lora_A.kantoku.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([32, 768]).
size mismatch for text_model.encoder.layers.11.self_attn.v_proj.lora_B.kantoku.weight: copying a param with shape torch.Size([768, 1]) from checkpoint, the shape in current model is torch.Size([768, 32]).
size mismatch for text_model.encoder.layers.11.self_attn.q_proj.lora_A.kantoku.weight: copying a param with shape torch.Size([1, 768]) from checkpoint, the shape in current model is torch.Size([32, 768]).
size mismatch for text_model.encoder.layers.11.self_attn.q_proj.lora_B.kantoku.weight: copying a param with shape torch.Size([768, 1]) from checkpoint, the shape in current model is torch.Size([768, 32]).
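The shapes in the traceback point to an adapter-rank mismatch: in a LoRA pair, `lora_A` has shape `(rank, in_features)` and `lora_B` has shape `(out_features, rank)`, so a checkpoint storing `lora_A` as `[1, 768]` was resized down to rank 1, while the loader built rank-32 modules for the `kantoku` adapter. A minimal sketch of reading each layer's rank from the checkpoint's tensor shapes before loading (the `infer_lora_ranks` helper is hypothetical; the key names and shapes are taken from the traceback above):

```python
def infer_lora_ranks(shapes):
    """Infer the LoRA rank of each layer from tensor shapes.

    In a LoRA pair, lora_A has shape (rank, in_features) and
    lora_B has shape (out_features, rank), so the rank is the
    first dimension of every lora_A weight.
    """
    ranks = {}
    for name, shape in shapes.items():
        if ".lora_A." in name and name.endswith(".weight"):
            layer = name.split(".lora_A.")[0]
            ranks[layer] = shape[0]
    return ranks

# Shapes copied from the error above: the resized checkpoint stores
# rank-1 matrices, which cannot be copied into rank-32 modules.
checkpoint_shapes = {
    "text_model.encoder.layers.11.self_attn.k_proj.lora_A.kantoku.weight": (1, 768),
    "text_model.encoder.layers.11.self_attn.k_proj.lora_B.kantoku.weight": (768, 1),
}
print(infer_lora_ranks(checkpoint_shapes))
# {'text_model.encoder.layers.11.self_attn.k_proj': 1}
```

If the ranks reported here don't match what the loader expects, the checkpoint was resized, which matches the suggestion below to use the unresized version.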
@mycivitaiaccount Try the unresized version.
@bakariso It works, thank you.