游戏AI研究所 (Game AI Research Institute)
About this version
XL test version: this model uses a ComfyUI workflow. Since it is a test build, four styles are currently open for everyone to use. Going forward, triggers for 60 styles will be opened gradually, each style with its own trigger word for your reference.
TAG:
gmic_\(3dicon\)
gmic_\(2dguofeng\)
gmic_\(2dXIANTIAO\)
gmic icon_\(xieshi\)
The images above were generated from these tags alone, without any other optimization methods. We welcome everyone to collaborate with us and make progress together.
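As a minimal sketch of how the trigger words above are typically used, the snippet below prepends a style trigger to a subject description to form a text-to-image prompt. The trigger strings are taken from the list above; the helper function, style keys, and subject text are hypothetical illustrations, not part of the released workflow.

```python
# Hypothetical helper: build a prompt by prefixing one of the model's
# style trigger words (listed above) to a subject description.

STYLE_TRIGGERS = {
    "3d_icon": r"gmic_\(3dicon\)",
    "2d_guofeng": r"gmic_\(2dguofeng\)",
    "2d_xiantiao": r"gmic_\(2dXIANTIAO\)",
    "realistic_icon": r"gmic icon_\(xieshi\)",
}

def build_prompt(style: str, subject: str) -> str:
    """Return a prompt with the chosen style's trigger word first."""
    trigger = STYLE_TRIGGERS[style]  # raises KeyError for unknown styles
    return f"{trigger}, {subject}"

print(build_prompt("3d_icon", "a glowing health potion, game icon"))
```

The backslash-escaped parentheses are kept verbatim, since many Stable Diffusion front ends treat unescaped parentheses as attention weighting.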
This model is intended only for exchanging and experimenting with AI output for game projects.
Model author: 游戏图标研究所 (Game Icon Research Institute)
Model discussion groups (free early access to unreleased models):
Discord: https://discord.gg/njBMYJ7mRF (recommended)
QQ channel: https://pd.qq.com/s/6ekpt1xei?businessType=9 (recommended)
The Game Icon Research Institute's models have entered a strategic partnership with Civitai; similar websites are prohibited from copying them. They are for personal study and communication only. The right of interpretation belongs to Civitai and the Icon Research Institute.
Version 2.0 is suitable for creating icons in a 2D style, while Version 3.0 is suitable for creating icons in a 3D style.
A newer version is not necessarily better.
Description
1. Fixed the strange image structures produced by version 2.0
2. Fixed the blurry output of version 2.0
3. Fixed monotonous output
4. Avoided failures to achieve the expected effect when used with plug-ins
Comments (14)
Thanks a lot! LOL
Looking forward to future updates.
Could you share the prompt for the green spear-like icon with vines growing out of it?
You can discuss it in the QQ group. That one was a test; I haven't run skill icons yet. They will be launched in version 3.0.
It's Coooooooool!!!!
Is this for items only, not skills?
yes
I also made a similar large model for work, though I didn't upload it, and I want to share my experience. I found that even after style training, the model still performs poorly on specific items that were absent from the training material. ControlNet can generate them well, but I would rather have the model itself know those items. Do you have any ideas? I can think of two approaches: first, more training material; second, combining more content-focused models. What do you suggest?
I am also trying to launch version 3.0 as pure text-to-image. Perhaps because of the learning rate, text-to-image generation has not yet reached the effect I want. I will readjust and retrain.