V2:
I retrained it on a bigger dataset with slightly different settings and a lower rank. It degrades the picture less when a higher LoRA weight is used, and it works better with shorter prompts.
V1:
Trained on 90 pictures for 100 epochs. Works best at 0.5-0.7 strength; above that the output becomes fuzzy. It seems to prefer lengthy, detailed prompts.
Comments (4)
Doesn't work with flux-dev. On Comfy it just spits errors:
lora key not loaded: lora_transformer_transformer_blocks_9_norm1_context_linear.alpha
lora key not loaded: lora_transformer_transformer_blocks_9_norm1_context_linear.lora_down.weight
lora key not loaded: lora_transformer_transformer_blocks_9_norm1_context_linear.lora_up.weight
lora key not loaded: lora_transformer_transformer_blocks_9_norm1_linear.alpha
lora key not loaded: lora_transformer_transformer_blocks_9_norm1_linear.lora_down.weight
lora key not loaded: lora_transformer_transformer_blocks_9_norm1_linear.lora_up.weight
lora key not loaded: lora_transformer_x_embedder.alpha
lora key not loaded: lora_transformer_x_embedder.lora_down.weight
lora key not loaded: lora_transformer_x_embedder.lora_up.weight
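If Comfy reports keys like the ones above as not loaded, one way to diagnose is to list which keys the downloaded LoRA file actually contains and compare them against the errors. Below is a minimal stdlib-only sketch: the header parsing follows the published safetensors layout (8-byte little-endian header length, then a JSON header), and the file it inspects is a tiny fabricated stand-in, not a real LoRA, so the writer function exists only to make the example self-contained.

```python
import json
import os
import struct
import tempfile

def write_fake_safetensors(path, names):
    """Write a minimal stand-in .safetensors file: one F32 scalar per key."""
    header = {}
    offset = 0
    for name in names:
        header[name] = {"dtype": "F32", "shape": [1],
                        "data_offsets": [offset, offset + 4]}
        offset += 4
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)))  # 8-byte LE header length
        f.write(blob)                          # JSON header
        f.write(b"\x00" * offset)              # dummy tensor data

def list_safetensors_keys(path):
    """Return tensor names from a .safetensors header without loading weights."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(n).decode("utf-8"))
    return sorted(k for k in header if k != "__metadata__")

# Fabricate a stand-in containing two of the key names from the error log.
path = os.path.join(tempfile.mkdtemp(), "lora.safetensors")
write_fake_safetensors(path, [
    "lora_transformer_x_embedder.lora_down.weight",
    "lora_transformer_transformer_blocks_9_norm1_linear.alpha",
])
for key in list_safetensors_keys(path):
    print(key)
```

Pointing `list_safetensors_keys` at the real downloaded file would show whether the `norm1_context` and `x_embedder` keys are present in the file itself (meaning the loader is too old to map them) or missing entirely (meaning the download is broken).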
That's weird, because I generated everything here using Comfy. What Comfy commit are you using? Mine is 5cbaa9e07c97296b536f240688f5a19300ecf30d. Are you using the gguf or the safetensors checkpoint? I'm using the gguf one.
I downloaded the LoRA from Civitai to verify that it's not broken, and it works with both the gguf and safetensors flux-dev checkpoints. I think you have to update your Comfy, at least past this commit: https://github.com/comfyanonymous/ComfyUI/commit/d043997d30d91ab057f770d3396c2e288e37b38a. The LoRA was trained using OneTrainer, and support for such LoRAs was introduced to Comfy in that commit.
Works on my comfy. Make sure your nodes are updated.