Consider this a promise kept: here are the LoRAs I trained and used in my early days.
Trained on schoolmax 2.5d. I later compared that model against a 0.5 chillout + 0.5 AOM2 "perfect world" merge and the difference was basically negligible,
so you can treat this as trained on chill; schoolmax 2.5d just has somewhat more pleasant colors.
This model's effect on the face is fairly neutral at low weight (though raising the weight still changes the face a lot, so it is not underfitted), because I did not train on any specific face. Instead, I picked images I found attractive from my own generations and trained on those a second time.
Sample counts range from 70 to 1300, but the 1300-image run (K-AIgirl-face) was not very successful. The low-resolution, small-sample K-AIgirl-face2 actually turned out better, while K-AIgirl-face3 has a serious color-cast problem and overfits badly.
Recommended usage:
0.2 K-AIgirl-face-re/K-AIgirl-face2
+
0.2 koreanDollLikeness_v15
Used as a base together with chilloutmix, this combination greatly improves the generated character's face shape and fixes backlit-face problems, and it also plays well with other LoRAs.
You can even merge these two LoRAs into the base checkpoint, since the speed penalty of stacking many LoRAs at inference time is considerable.
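Merging a LoRA into the checkpoint means folding its low-rank delta directly into each affected weight matrix, so inference pays no extra cost. Below is a minimal numpy sketch of that per-layer operation (W' = W + strength · (alpha / rank) · up · down). The function name, matrix shapes, and the toy rank-2 example are illustrative assumptions, not the author's actual merge script; real merges iterate over every LoRA-targeted layer in the safetensors file.

```python
import numpy as np

def merge_lora_layer(base_weight, lora_down, lora_up, alpha, rank, strength):
    """Fold one LoRA delta into a base weight matrix:
    W' = W + strength * (alpha / rank) * (up @ down).
    `strength` plays the same role as the 0.2 weight used at inference time."""
    scale = strength * (alpha / rank)
    return base_weight + scale * (lora_up @ lora_down)

# Toy example: an 8x8 layer with a rank-2 LoRA applied at strength 0.2.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
down = rng.normal(size=(2, 8))   # lora_down: (rank, in_features)
up = rng.normal(size=(8, 2))     # lora_up:   (out_features, rank)

merged = merge_lora_layer(W, down, up, alpha=2.0, rank=2, strength=0.2)

# The folded-in change is exactly the low-rank LoRA delta.
print(np.linalg.matrix_rank(merged - W))
```

Merging two LoRAs (as recommended above) just means applying this fold twice, once per LoRA, each with its own strength. The trade-off: the merged checkpoint is fast, but the strengths are baked in and can no longer be tuned per prompt.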
Comments (10)
I've followed you for a long time; you finally released the model. Thanks!
Could you also upload hands32?
An onlooker's summary:
1. Did he take other people's stuff? Yes.
2. Is it illegal? Both sides post NSFW images, so let's not argue that; besides, this stuff has no copyright anyway.
3. Is it ethical? Depends where you stand.
4. Was there actually permission? He once said "use it freely"; later he said it was no longer allowed. Sorry, didn't see that.
5. Did he earn his first bucket of gold from this? The pictures speak for themselves.
Let me add one more point: K's images look good, but images generated with that LoRA alone are actually not that great.
What incident? Where's the drama? Is there a thread?
Do you have a group? I'd like to learn face-LoRA training from you; I've tried for a long time and full-body shots never come out right.
Could you share the merged checkpoint used for the first image? Please!

