This is a place for experimenting with SD1.5 LoRAs.
The main goal is overall enhancement rather than focusing on a single concept.
■Dora_nsfw_remember is intended to complement my test merge model, but since it was trained on NovelAI v1, it should also work fine with NovelAI v1's derivative models.
my test model: https://civarchive.com/models/1246353/sd15modellab
■nai_v2_highres is a high-resolution stabilizing DoRA for novelai_v2.
Please download the official checkpoint from the URL below.
I’ve also made a safetensors version just in case.
https://huggingface.co/NovelAI/nai-anime-v2
https://civarchive.com/models/1772131
■nai_v2_semi-real is a semi-realistic style DoRA for novelai_v2.
■I use OneTrainer for training and ComfyUI for inference.
■I will share my training settings and inference workflow as much as possible.
■If the prompt is short, the background may become simple or the style may lean toward realism.
By using the uploaded tipo_workflow, you can automatically generate longer prompts, so please give it a try!
■Sometimes oversaturation occurs due to overfitting or the model’s compatibility. Adjusting the DoRA weight, prompt weight strength, or the CFG scale can help improve this.
■There’s also a high-resolution inference workflow using kohya_deep_shrink.
It expands composition and removes the need for high_res_fix.
1152px offers a good balance of quality, stability, and speed, while 1536px is more dynamic and detailed.
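As a side note on picking sizes: SD1.5 operates on 8x-downscaled latents, so both sides should be multiples of 64. Here is a minimal, hypothetical helper (my own illustration, not part of the shared workflow) that derives portrait sizes for the two deep-shrink targets above:

```python
def deep_shrink_size(long_side: int, aspect: float = 2 / 3, multiple: int = 64) -> tuple[int, int]:
    """Return (width, height) for a portrait generation target.

    Both sides are rounded to the nearest multiple of 64 so the
    latent dimensions stay clean for the SD1.5 U-Net.
    """
    short = round(long_side * aspect / multiple) * multiple
    long = round(long_side / multiple) * multiple
    return short, long

# The two deep-shrink targets mentioned above:
print(deep_shrink_size(1152))  # (768, 1152)  -- balanced quality/speed
print(deep_shrink_size(1536))  # (1024, 1536) -- more dynamic and detailed
```

The `aspect` default of 2/3 is just an assumption for portrait images; change it to match your composition.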
By the way, this DoRA was created to give SD1.5 a level of concept understanding comparable to my PixArt-Sigma anime fine-tune.
SD1.5 with the DoRA applied will likely be the most compatible refiner; it's ideal for i2i tasks.
My PixArt-Sigma fine-tune:
https://civarchive.com/models/505948/pixart-sigma-1024px512px-animetune
■Please feel free to ask if you have any questions!
Questions in Japanese are also welcome, so feel free to reach out!
Description
DoRA (259.07 MB): Aesthetic DoRA (U-Net only)
DoRA (1.62 GB): OneTrainer backup data.
Training Data (304.58 MB): Inference workflow and TE-only LoRAs.
This is a DoRA for novelai_v2.
It was trained on an aesthetic dataset of about 20,000 images.
Training was done only on the U-Net, using multiple resolutions ranging from 1024px to 1536px.
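For context, multi-resolution training of this kind typically relies on aspect-ratio bucketing: each image is resized into a bucket whose sides are multiples of 64 and whose area is close to one of the target resolutions. A minimal sketch of the idea (an illustration under my own assumptions, not OneTrainer's actual implementation; all names here are hypothetical):

```python
def make_buckets(base_sizes=(1024, 1152, 1280, 1408, 1536), step=64, max_ratio=2.0):
    # Generate (width, height) buckets near each base area whose sides
    # are multiples of `step` and whose aspect ratio is not too extreme.
    buckets = set()
    for base in base_sizes:
        area = base * base
        w = step
        while w <= base * max_ratio:
            h = round(area / w / step) * step
            if h >= step and max(w, h) / min(w, h) <= max_ratio:
                buckets.add((w, h))
            w += step
    return sorted(buckets)

def assign_bucket(width, height, buckets):
    # Pick the bucket whose aspect ratio is closest to the image's.
    ratio = width / height
    return min(buckets, key=lambda wh: abs(wh[0] / wh[1] - ratio))
```

With buckets like these, a portrait image such as 832x1216 lands in a portrait-shaped bucket, so crops stay minimal across all training resolutions.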
Difference from my previous model:
The main difference from my nai_v2_highres_v06 is that this time, no AI-generated images are included in the training data. Other than that, it uses the exact same aesthetic dataset.
While including AI images makes it easier to achieve high quality, it can sometimes result in that distinct "AI style" you often see in many popular SDXL models. Because this DoRA avoids that, it produces a much more natural illustration style, as if it were drawn by a human artist. Additionally, this version was trained for more epochs.
It’s not a matter of which DoRA is strictly "better"; it's just a matter of preference, so please use whichever you like. You could even mix both! Personally, I prefer this new one because it yields more natural and diverse results.
Inference & Settings:
For inference, 832x1216 is the most stable, but generating up to 1024x1536 works without any issues.
For i2i (image-to-image), the image won't break down even at large resolutions of 2048px or higher.
I've also shared my workflow, so please feel free to use it as a reference.
Bonus (TE-only LoRAs):
As a bonus, I’ve included LoRAs trained only on the Text Encoder (TE).
Since training CLIP can be very unstable, I’ve provided multiple TE LoRAs ranging from 10,000 to 30,000 steps. Which one to use is entirely up to your preference.
I recommend using them in combination with the U-Net-only DoRA.
Here is a general guide for the TE LoRAs:
10,000 steps: A good choice when you just want to add a very slight change.
17,000 - 18,000 steps: The perfect balance. You can use these safely at a weight of 1.0.
30,000 steps: Overtrained. CLIP is much more sensitive than the U-Net, so it breaks easily. However, it actually works quite well if you lower the weight to around 0.5.
(The steps in between fall somewhere in the middle of these characteristics.)
Using any of these TE LoRAs at a weight of 0.5 to 0.7 will add a nice little touch of spice to your generations.
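The step/weight guide above can be summarized in code. This is a tiny, hypothetical helper of my own (not a shipped tool) that linearly interpolates a suggested maximum weight between the guide points:

```python
def suggested_te_weight(steps: int) -> float:
    """Rough interpolation of the TE LoRA guide above (hypothetical).

    10k-18k steps: safe at full weight; 30k: overtrained, drop to ~0.5.
    """
    guide = [(10_000, 1.0), (18_000, 1.0), (30_000, 0.5)]
    if steps <= guide[0][0]:
        return guide[0][1]
    for (s0, w0), (s1, w1) in zip(guide, guide[1:]):
        if steps <= s1:
            # Linear interpolation between adjacent guide points.
            t = (steps - s0) / (s1 - s0)
            return round(w0 + t * (w1 - w0), 2)
    return guide[-1][1]

print(suggested_te_weight(18_000))  # 1.0
print(suggested_te_weight(30_000))  # 0.5
print(suggested_te_weight(24_000))  # 0.75
```

The in-between checkpoints fall on this line, matching the note that they sit somewhere in the middle of these characteristics.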