This is a place for experimenting with SD1.5 LoRA.
The main goal is overall enhancement rather than focusing on a single concept.
■Dora_nsfw_remember is intended to complement my test merge model, but since it was trained on NovelAI v1, it should work fine with NovelAI v1 derivative models as well.
My test model: https://civarchive.com/models/1246353/sd15modellab
■nai_v2_highres is a high-resolution stabilizing DoRA for novelai_v2.
Please download the official checkpoint from the URL below.
I’ve also made a safetensors version just in case.
https://huggingface.co/NovelAI/nai-anime-v2
https://civarchive.com/models/1772131
■nai_v2_semi-real is a semi-realistic style DoRA for novelai_v2.
■I use OneTrainer for training and ComfyUI for inference.
■I will share my training settings and inference workflow as much as possible.
■If the prompt is short, the background may become simple or the style may lean toward realism.
By using the uploaded tipo_workflow, you can automatically generate longer prompts, so please give it a try!
■Saturation sometimes occurs due to overfitting or model compatibility. Adjusting the DoRA strength or prompt weights, or reviewing the CFG scale, can help improve this.
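Lowering the strength simply scales the low-rank delta that gets merged into the base weights, which is why it tames an overfitted or oversaturated style. A minimal numpy sketch of the idea (names and shapes are illustrative, not the actual OneTrainer/ComfyUI internals; real DoRA additionally decomposes weights into magnitude and direction, so this only shows the LoRA-style intuition):

```python
import numpy as np

def apply_lora(base, down, up, strength=1.0):
    """Merge a low-rank delta into a base weight matrix.

    base: (out, in) original weight
    down: (rank, in) down-projection
    up:   (out, rank) up-projection
    strength: scales how far the delta shifts the base weights
    """
    return base + strength * (up @ down)

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8))
down = rng.normal(size=(2, 8))
up = rng.normal(size=(8, 2))

full = apply_lora(base, down, up, 1.0)
half = apply_lora(base, down, up, 0.5)

# At strength 0.5 the applied delta is exactly halved.
assert np.allclose(half - base, 0.5 * (full - base))
```

So strength 0.75 or 0.5 is not a different model, just a smaller push away from the base checkpoint.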
■There’s also a high-resolution inference workflow using kohya_deep_shrink.
It expands the composition and removes the need for high_res_fix.
1152px offers a good balance of quality, stability, and speed, while 1536px is more dynamic and detailed.
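As I understand it, kohya_deep_shrink works by downscaling the UNet's internal features during the early sampling steps, so the composition is laid out as if at the training resolution before the later steps refine at full size. A conceptual sketch of that schedule (the function and parameter names are hypothetical, not ComfyUI's actual node API):

```python
def deep_shrink_factor(progress, end_percent=0.35, downscale=2.0):
    """Return the downscale factor to apply at a given point in sampling.

    progress: fraction of sampling completed, in [0, 1]
    end_percent: features stay downscaled until this fraction is reached
    downscale: factor applied during those early steps
    """
    return downscale if progress < end_percent else 1.0

# Early steps run at reduced internal resolution (stable composition),
# later steps run at full resolution (fine detail).
schedule = [deep_shrink_factor(p / 10) for p in range(10)]
# → [2.0, 2.0, 2.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

This is why 1152px stays stable while 1536px trades some stability for more dynamic detail: the shrunk early steps anchor the layout either way.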
By the way, this DoRA was created to give SD1.5 a level of concept understanding comparable to my PixArt-Sigma anime fine-tune.
SD1.5 with the DoRA applied will likely be the most compatible refiner; it's ideal for i2i tasks.
My PixArt-Sigma fine-tune:
https://civarchive.com/models/505948/pixart-sigma-1024px512px-animetune
Description
■I've trained with twice as many steps as before. Judging from the sample images, the results look significantly improved.
■It works fine at strength 1, but if the style is too strong or unstable, try lowering it to 0.75 or 0.5 for balance.
■I’ll also share the OneTrainer training backup data; use it as a reference for your own training settings. It’s also possible to resume training from this DoRA checkpoint.
That said, I think training at 1536 resolution was excessive; 1280 px should be sufficient to keep high-res generation flexible.
Also, if VRAM use isn’t much different, plain AdamW is preferable to the 8-bit AdamW optimizer.