This is my experiment in creating an SD1.5 merged style model for i2i. I am satisfied with its current state, but I will update it if I feel anything is needed, or I might share ways to enhance this model rather than updating it directly.
■Since the model now has both anime and real versions, the detailed explanations have been moved to each model’s tab.
Both are merged models created by selecting multiple high-quality models with minimal artifacts.
■With the three models—asian, real, and anime—now available, it could be fun to adjust their mix to find your ideal style.
●asian 0.5 + real 0.5 might yield a more mixed, half-and-half look.
●asian 0.5 + anime 0.5 might produce a cute, 2.5D-style appearance.
Feel free to experiment with different ratios.
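If you prefer to do the mixing yourself outside a UI, a plain weighted average of the checkpoint weights is enough. This is only a minimal sketch, assuming both models are local .safetensors files (the file names are placeholders):

```python
# Minimal weighted-sum merge sketch (file names are placeholders).
import torch
from safetensors.torch import load_file, save_file

ratio = 0.5  # weight of the first model; 1 - ratio goes to the second
a = load_file("asian.safetensors")
b = load_file("real.safetensors")

merged = {}
for key, tensor in a.items():
    if key in b and b[key].shape == tensor.shape and tensor.is_floating_point():
        merged[key] = (ratio * tensor.float() + (1.0 - ratio) * b[key].float()).half()
    else:
        # Keep tensors that exist in only one model (or are not floating point).
        merged[key] = tensor

save_file(merged, "asian_real_050_050.safetensors")
```

The checkpoint merger tab in WebUI or ComfyUI's model merge nodes do essentially the same thing with a slider.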
■Since this is just a merge, it shares a common SD1.5 limitation where NSFW tags may not be fully understood or followed.
I have decided to manage the concept-enhancing LoRA separately.
https://civarchive.com/models/1253884/sd15loralab
Of course, it can be used on its own, but it is designed for i2i processing of outputs from the model below.
https://civarchive.com/models/505948/pixart-sigma-1024px512px-animetune
■Depending on the situation, this extension may also improve colors and contrast.
https://github.com/Haoming02/sd-webui-diffusion-cg
https://github.com/Haoming02/comfyui-diffusion-cg
■Using external tools for level adjustment is also a good option.
Reducing gamma slightly while enhancing whites can improve contrast even further.
Using these should help achieve color rendering closer to that of SDXL.
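If you would rather script that levels pass than use an image editor, here is a minimal sketch of the idea with Pillow and NumPy; the file names and exact values are placeholders to tune by eye:

```python
# Minimal levels sketch: pull the white point in and lower gamma slightly.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("out.png").convert("RGB")).astype(np.float32) / 255.0

white_point = 0.92  # values at or above this become pure white
gamma = 0.95        # editor-style gamma; below 1.0 darkens the midtones a bit

img = np.clip(img / white_point, 0.0, 1.0) ** (1.0 / gamma)
Image.fromarray(np.uint8(img * 255.0 + 0.5)).save("out_levels.png")
```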
■Surprisingly, generating at 768px or 1024px sometimes works fine. If you want more stability, merging with Sotemix could help. But since most LoRAs are trained at 512px, high resolutions can break the output. So it's safer to use highres.fix or kohya_deep_shrink when using LoRAs.
Personally, I prefer i2i upscaling over highres.fix, as it tends to produce fewer artifacts.
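For reference, this is roughly what that i2i upscale pass looks like outside a UI. It is only a sketch using diffusers, assuming a local SD1.5 merge file and a 512px base image (names and settings are placeholders):

```python
# Minimal two-pass sketch: resize a 512px output, then denoise it lightly at 1024px.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "realmix.safetensors", torch_dtype=torch.float16
).to("cuda")

base = Image.open("base_512.png").convert("RGB").resize((1024, 1024), Image.LANCZOS)
upscaled = pipe(
    prompt="1girl, realistic, detailed skin",
    image=base,
    strength=0.4,        # low denoise keeps the composition, adds detail
    guidance_scale=5.0,
    num_inference_steps=30,
).images[0]
upscaled.save("upscaled_1024.png")
```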
Description
I thought it might be interesting to generate real-style images once in a while, so I created a merged model for that.
I've also prepared an inference workflow—feel free to use it as a reference.
I selected and merged about 15 high-quality real-style models with minimal artifacts.
To be honest, most real-style models tend to generate similar images, so there may not be much that sets this one apart from others.
■If the concepts are not recognized well, you might try using my Dora. It could improve recognition.
I tried it, and it didn't stray far from the realistic style. Using it at a strength of around 0.2-0.5 provides a good balance between style and expressive flexibility. If you want to use it with a high weight, lowering the CFG until the artifacts disappear might help. At around CFG 3, it seems usable even with a weight of 1.
Adding the "realistic" tag may enhance the real-style output.
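As a rough illustration of those numbers, this is how a weight of around 0.3 and a lowered CFG might be applied in diffusers. The file names are placeholders, and whether a kohya-style DoRA loads cleanly depends on your diffusers/peft version, so treat this only as a sketch:

```python
# Minimal sketch: apply the concept LoRA/DoRA at a moderate weight with low CFG.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "realmix.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("concept_dora.safetensors")  # placeholder file name

image = pipe(
    "1girl, realistic, looking at viewer",
    cross_attention_kwargs={"scale": 0.3},  # ~0.2-0.5 keeps the realistic style
    guidance_scale=3.0,                     # lower CFG if artifacts appear at high weight
    num_inference_steps=28,
).images[0]
image.save("dora_test.png")
```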
■Anime models are better at handling unrealistic concepts and things like eye color, so merging them to enjoy a 2.5D style could be a good idea. That approach would likely increase compatibility with Dora as well.
■To enhance detail, I recommend i2i upscaling. Using a deliberately noisy sampler during this process can make skin textures appear more realistic. In ComfyUI, there are many ways to intentionally increase noise, and the "Res Multistep Ancestral + beta" sampler is also a good choice.
While I’m not sure if it's the same approach, in WebUI-based tools, this extension allows you to add that sampler by default—so it might be worth trying.
https://github.com/MisterChief95/sd-forge-extra-samplers
"Detail Daemon" also intentionally increases noise, so it may be able to achieve a similar effect.
■This workflow uses "tipo" to automatically generate prompts and includes a 1024px i2i upscale process.
If you enter keyword tags, it will automatically add related tags, reducing the burden of creating prompts.