3 versions:
v1, v2, v3: LoRA for SD 1.5
NoobAI-XL-v1: SDXL LoRA
All versions were trained on approximately 350 MapleStory illustrations and fanart.
Additional tags: small body, big head (to achieve the anatomy/style of a MapleStory character)
Description
Version 3: BIG CHANGES. This is not a LoRA but a LoCon (I don't know the true power of LoCon, but I got better results now, so I guess it helped a lot), a.k.a. LyCORIS (they must be big fans of the Lycoris anime lol). It was trained with dim 32, alpha 16, and 768 resolution, again with the help of Adafactor. I also reduced the dataset by removing unnecessary pictures that were either low quality or didn't have the big head and small body. Although it is still trained on the MapleStory dataset, the style is like an upgrade of the original style and colors (but this model is quite overfit, so you have to use hires fix). What I mean is that this LoCon now has more vibrant colors, sharper lines, and much better eye quality.
Because the dataset is now more focused on MapleStory illustrations, it can draw closer to the MapleStory character style than versions 2 and 1. Some people may like or dislike that, but they can still use versions 1 and 2, which have more original colors.
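For context on the dim 32 / alpha 16 setting: in LoRA-style adapters (LoCon included), the learned low-rank delta is conventionally scaled by alpha/dim, so dim 32 with alpha 16 applies the update at half strength. A minimal numpy sketch of that scaling (illustrative only, not the actual LyCORIS code; the layer shapes here are made up):

```python
import numpy as np

def lora_delta(A, B, dim, alpha):
    """LoRA-style low-rank weight update: scale (B @ A) by alpha/dim."""
    return (alpha / dim) * (B @ A)

rng = np.random.default_rng(0)
dim = 32                          # network rank ("dim 32")
alpha = 16                        # network alpha ("alpha 16")
A = rng.normal(size=(dim, 768))   # down-projection (rank x in_features)
B = rng.normal(size=(320, dim))   # up-projection (out_features x rank)

delta = lora_delta(A, B, dim, alpha)
print(delta.shape)  # (320, 768)
# alpha/dim = 16/32 = 0.5, so the learned delta is applied at half strength
```

Lowering alpha relative to dim is a common way to tame how strongly the adapter modifies the base model.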
I figured out that the problem with versions 1 and 2 is that the shorter the prompt, the less accurately the LoRA can draw the style. That's why if you put only chibi, 1girl in the prompt, it gives you the perfect style, but if you copy a long prompt, the style gets distorted immediately. Version 3 handles this better.
Example: with 8k, 4k, 1girl, chibi, solo, city, full body, you get this.
But if you are too specific about the character, you may get this (a long, big body now).
The negative prompt and (highres:1.1), best quality, (masterpiece:1.3) (my old prompt) have a significant impact on the style. I don't know which is best for you, but I am currently using 4k, 8k, and the EasyNegative embedding, which give me the best results.
Different models give different results.
Hands are still fucked up, that's it. I'm very sorry about the four fingers.
If you want more "big head, small body", just use the trick (chibi:1.x).
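The (chibi:1.x) trick uses AUTOMATIC1111-style emphasis syntax, where (token:w) multiplies the attention weight of that token by w. A rough sketch of how such tags could be parsed into (token, weight) pairs (a hypothetical helper, not the actual webui code):

```python
import re

# Matches A1111-style emphasis tags like "(chibi:1.4)"
EMPHASIS = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt: str) -> list[tuple[str, float]]:
    """Return (token, weight) pairs; bare tokens default to weight 1.0."""
    pairs = []
    last = 0
    for m in EMPHASIS.finditer(prompt):
        # anything before this tag is plain comma-separated text at weight 1.0
        for tok in prompt[last:m.start()].split(","):
            if tok.strip():
                pairs.append((tok.strip(), 1.0))
        pairs.append((m.group(1).strip(), float(m.group(2))))
        last = m.end()
    for tok in prompt[last:].split(","):
        if tok.strip():
            pairs.append((tok.strip(), 1.0))
    return pairs

print(parse_weights("1girl, (chibi:1.4), solo"))
# [('1girl', 1.0), ('chibi', 1.4), ('solo', 1.0)]
```

So (chibi:1.4) simply pushes the model harder toward the chibi concept than plain chibi would.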
This may be the last LoCon I make, unless LoHa from LyCORIS has an even more significant impact on the quality of the style.
The only reason I made this LoRA/LoCon was to draw my character, but it seems I failed. Perhaps other people can make better use of it than I can.
Version 2
I trained a new LoRA on the same old dataset with a new learning rate and text encoder LR, with the help of Adafactor. The result is a surprise; the big changes are:
1. More colorful
2. Recognizes stuff better
3. Lines are a bit thinner now, I guess
4. Many variants for one character (time to gacha)
In my opinion, this one gives higher-quality results, but sometimes the version 1 style will be closer to the game.
I just did an experiment with (chibi:2) and the result is still good; I feel like the style is even closer to the usual illustration characters in some cases lmao.
Version 1
A LoRA trained on hundreds of MapleStory illustrations, NPC artwork, and fanart, plus some other fanart that can give better NSFW pics in case you need them.
Recommended options:
Put chibi in the prompt to ensure the style.
The LoRA works well at 1.0, and at 0.7-0.8 when combined with other LoRAs, like the two VTuber pics below.
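In the AUTOMATIC1111 webui, the built-in syntax for applying a LoRA at a given strength is <lora:name:weight> in the prompt. A tiny hypothetical helper showing how this style LoRA at 0.8 could be combined with another LoRA (the "someVtuber" name is made up for illustration):

```python
def with_loras(prompt: str, loras: dict[str, float]) -> str:
    """Append <lora:name:weight> tags (A1111 built-in LoRA syntax) to a prompt."""
    tags = "".join(f"<lora:{name}:{weight}>" for name, weight in loras.items())
    return f"{prompt} {tags}"

# this style LoRA at 0.8 alongside a hypothetical character LoRA at 0.7
print(with_loras("1girl, chibi, solo", {"maplestory": 0.8, "someVtuber": 0.7}))
# 1girl, chibi, solo <lora:maplestory:0.8><lora:someVtuber:0.7>
```

Dropping the style LoRA to 0.7-0.8 like this leaves room for the second LoRA to influence the result.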
I mainly use DPM++ 2M Karras to generate pics.
Don't put nsfw in the prompt if you want SFW, because I want it to be good at NSFW too.
Sometimes it may not look much like the style; just click the generate button again lol. OK, now I'll get back to grinding to 275. Btw, my main is Shade, but it's okay if no one knows this dude.
FAQ
Comments (26)
Please tell me what VAE was used; I cannot reproduce the examples.
kl-f8-anime2
Fun fact:
The author of LyCORIS hasn't watched Lycoris Recoil.
He just likes the equinox flower (aka Lycoris radiata).
lol
It seems to work as intended most of the time, but I can't replicate any of the sample images even with the same model. Does anyone have the same problem?
I'll let you know that "painting by bad-artist-anime" is also an embedding, along with EasyNegative.
@leoparker nvm, the reason was that I wasn't using the LoCon as an Additional Network. Thanks tho!
@lol234 Can you help? :( I can't seem to get it to work. I tried setting all the values: prompts, negative prompts, sampling method, sampling steps, seed, CFG scale. Not sure what this model means: 7th_anime_v3_C. Also not sure how to add the LoCon as an additional network.
I tried to generate in this style, but the style didn't apply, or I get this error message:
"modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."
Do I need to prepare something more?
You are probably using a broken model; you can find more about it here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6923
Hi, what other parameters were there? What are the LR and the loss function? What size was the dataset? I would be happy if you would answer about the other training parameters.
You can find all this information under Training Info in the Additional Networks tab.
Could you give me the link to the checkpoint, please?
Also, for 7th anime, you can google it too.
Hi, thanks for your great effort. I hope to train and develop a MapleStory-style LoRA model like yours. Would you mind if I ask how to get a MapleStory character dataset?
You can get illustrations from orangemushroom(.)net, and use HaRepacker from https://github.com/lastbattle/Harepacker-resurrected to extract all the illustrations from the .wz files. I also got some from Pixiv.
Also from maplestory(.)fandom(.)com/wiki/MapleStory too.
Thank you so much! Hope you have a great day. :>
Thanks. Are you interested in the in-game player style? It could be used in some games.
I once considered it, but I realized that the pixel style in games is quite challenging because the graphics are too small. This makes it very difficult for both the tagger and the model to recognize the details of the characters, including the bosses. They may find it easy to recognize objects like houses, trees, and backgrounds, but they struggle to identify things like clothing and weapons. Drawing outfits, shoes, and hats with just a few pixels is a real challenge; if the pixels are not in the perfect position, the image ends up looking like a mess. So if you train it, yes, it may be able to draw a random character whose clothing looks good to you, but it will likely be hard to switch to specific clothes. Maybe if the game let me zoom in on the character, like the feature in Home, it would be easier to do.
So is this a LoRA or LyCORIS? I mean V3.
LyCORIS; it's a LoCon.
@leoparker Hold on, I never noticed it was a LoCon. I didn't install LyCORIS and only used it as a LoRA... and it works!
Is that normal? Can you normally use a LoCon as a LoRA without any extension? That's weird :O
@AdriBoc idk lol, maybe it supports LoCon now?
Where should I save the files I downloaded here? Is there anything else I should download besides the WebUI and Stable Diffusion?
Details
Files
maplestory.safetensors
Mirrors
maplestoryStyle_v30.safetensors
maplestory.safetensors














