Styles are trained on synthetic data and do not copy the styles of artists!
Photo 2 is trained on real data (dataset from Unsplash)
[M] Merged LoRAs
Recommended LoRA Strength (Weight): 0.8-1
Previews were generated without ADetailer & Hires. fix
Description
Cold Oil + Gothic Art Sharp = Cold Oil Gothic
Gothic Art Sharp is the same as Gothic Art, only sharper; in fact it is too sharp, which is why it was never published
FAQ
Comments (15)
gothic ones are my favorite!!
Thank you! ❤️ My favorite styles are Smooth Anime 1-2, Concept Art Twilight, Oil Gothic/Cold Oil Gothic, Summer Days, Rainbow, and Cold Night
I'm inspired by your lora styles and would love to make my own. Do you have any guides or suggested resources to create style loras like this?
Here are the tips I can give you.

Dataset
Collect a good quality dataset with no JPEG artifacts or blurring. Image resolution should be 1024 or 1152 pixels on the long side for SDXL (or at least 960 or 768 pixels); for SD v1.5, 512, 768, or 960 pixels on the long side.
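The long-side check above can be sketched as a small helper (the function name and the way the thresholds are wired are my own, not part of any tool):

```python
def long_side_ok(width: int, height: int, min_long_side: int = 1024) -> bool:
    """True if the image's long side meets the target resolution (SDXL default)."""
    return max(width, height) >= min_long_side

print(long_side_ok(1152, 640))  # True: a 1152px long side is fine for SDXL
print(long_side_ok(800, 600))   # False: below even the 960px fallback
```

You could run this over a dataset folder (e.g. with Pillow's `Image.open(path).size`) to flag images that need upscaling or replacing before training.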
Captions
Captions should accurately describe what is in the image.
I use WD14 Captioning (SmilingWolf/wd-v1-4-moat-tagger-v2).
WD14 Captioning: 1girl, holding, holding cup, coffee cup, cafe, sitting, full body
BLIP: a girl is sitting in a cafe with a cup of coffee in her hands
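Kohya_ss typically reads captions from a plain-text sidecar file per image, with the same basename and a .txt extension. A minimal sketch of writing such files (the folder path and the caption dictionary are illustrative; in practice the tags come from WD14 or BLIP):

```python
from pathlib import Path

# Illustrative tags; in practice they come from WD14 Captioning or BLIP.
captions = {
    "cafe_girl.png": "1girl, holding, holding cup, coffee cup, cafe, sitting, full body",
}

dataset_dir = Path("dataset")  # hypothetical folder; use your own training folder
dataset_dir.mkdir(parents=True, exist_ok=True)
for image_name, caption in captions.items():
    # One caption file per image: same basename, .txt extension.
    (dataset_dir / image_name).with_suffix(".txt").write_text(caption, encoding="utf-8")
```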
Training
I train locally on my computer with Kohya_ss.
I have a folder called Lora Training Data; inside it there are three other folders: image, log, model.
image: contains another folder, 20_foldername, into which you put the images for training.
log: just log data.
model: where the model ends up after training.
20_foldername: the number before the folder name is how many times each image is repeated per epoch.
20 Images x 20 Repeats x 10 Epochs = 4000 Training steps
Batch size 1: 20 x 20 x 10 = 4000
Batch size 2: 20 x 20 x 10 ÷ 2 = 2000
Batch size is how many images are trained on at a time.
An epoch is one full pass over the training images.
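The step arithmetic above can be written as a small helper (the function name is my own):

```python
def training_steps(num_images: int, repeats: int, epochs: int, batch_size: int = 1) -> int:
    """Total training steps: images x repeats x epochs, divided by batch size."""
    return num_images * repeats * epochs // batch_size

print(training_steps(20, 20, 10, batch_size=1))  # 4000
print(training_steps(20, 20, 10, batch_size=2))  # 2000
```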
The network size mostly affects the file size, not so much what is learned.
The smaller the size, the less the LoRA will remember, but that does not mean a 50MB file is worse than a 100-200MB file.
200MB: Dim 32, Alpha 16
100MB: Dim 16, Alpha 8
50MB: Dim 8, Alpha 4
I use these network sizes:
Network Rank (Dimension): 32
Network Alpha: 16
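Going by the sizes listed above, file size scales roughly linearly with the rank. A rough estimate (the 6.25 MB-per-rank constant is derived from the Dim 32 = 200MB data point above; the actual size also depends on the base model and precision):

```python
def approx_lora_size_mb(network_dim: int, mb_per_rank: float = 200 / 32) -> float:
    """Rough LoRA file size from network rank, assuming linear scaling."""
    return network_dim * mb_per_rank

print(approx_lora_size_mb(32))  # 200.0
print(approx_lora_size_mb(16))  # 100.0
print(approx_lora_size_mb(8))   # 50.0
```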
I can also provide a link to the KohyaSS configuration file.
@prgfrg23 Excellent guide. Have you ever thought about using a larger image dataset and producing a checkpoint instead of a LoRA?
@Lady_Valeria Training Photo 2 for 3000 steps took me about 3 hours, so I don't think I'll even try to train a checkpoint. I don't even know how long it would take: a week? Two weeks? I have an RTX 3060 12GB.
LoRAs are easier and faster to train than checkpoints, and they weigh far less: even at 200-400MB they weigh less than any checkpoint. One SDXL model weighs almost as much as 30 LoRAs at 200MB each, and if you bring them down to 50-100MB you can fit even more. (LoRAs that weigh 800MB or 1.5GB are beyond my comprehension.)

My SD models folder weighs almost 2TB, of which 1TB is checkpoints.
@prgfrg23 did you train on the turbo DPO merge?
@julianarestrepo No, just V6 (Start With This One)
@prgfrg23 Can you please provide a link to the KohyaSS configuration file?
@miys Link to KohyaSS Configuration file
https://drive.google.com/file/d/1fSHqs77gby4rodKwZmbhKnbVx0SyrVOm/view?usp=drive_link
It's mostly just one of the SDXL presets with some minor modifications. To be honest, I'm not even sure the settings are correct, but from my page I think you can see that the results are very good.
@prgfrg23 Thank you
My eye literally twitches when I see my LoRAs used with other SDXL models. These LoRAs are trained on Pony Diffusion V6 XL and only work well with that model.
Even with AutismMix SDXL and Js2Prony these styles do not work as they should; those models already have a style of their own, so my LoRA styles cannot override it, they can only blend with it.
More questions go to those people who use LoRAs and Embeddings from SD v1.5 (v1.5 and XL are not compatible; they are different architectures). Yes, there is X-Adapter, but as far as I know it is not yet in A1111 or SD WebUI Forge.
SDXL does not understand LoRAs trained on Pony, and Pony does not understand LoRAs trained on SDXL. (Pony is also an XL model, but it was trained differently and is practically not compatible with base SDXL.) Almost all other models are similar to base SDXL; Pony is not.
My LoRAs are labeled as SDXL because they didn't work in Civitai's image generator when they were labeled as Pony.
I don't even know whether to change the Base Model back to Pony.
I understand that people don't have to understand all this; I just don't understand why, when people see styles made for Pony Diffusion V6 XL, they for some reason don't use them with the Pony Diffusion V6 XL model.
I seem to get the best results with Photo 2 using DPM++ 3M SDE.
It seems to me that DPM++ 2M SDE Karras is a little better on small details. Yes, faces are worse, and they are on DPM++ 3M SDE too, but still very good for no ADetailer or Inpainting. It is better not to look at the faces too closely.
Details
Files
Cold Oil Gothic Style SDXL_LoRA_Pony Diffusion V6 XL.safetensors
Mirrors
Cold Oil Gothic Style SDXL_LoRA_Pony Diffusion V6 XL.safetensors
hf_model_48_20.safetensors