Hassaku aims to be an anime model with a bright and distinct anime style.
Join my Discord for everything related to anime models and art.
If you'd like to support my work, you can do so on SubscribeStar; every bit helps and is truly appreciated!
____________________________________________________________
Supporters:
Thanks to my supporters tamashiicolle, pttcot, SETI and Kodokuna
____________________________________________________________
Using the model:
The model was trained primarily on images with minimal disruptive elements, such as floating text, logos, speech bubbles, or signatures. If any of these elements appear in an image, please include "signature" in your negative prompt.
Danbooru metadata and franchise tags were not used in training, so please avoid them in prompts. This includes metadata tags such as "highres" and franchise tags such as "re:zero kara hajimeru isekai seikatsu".
A CFG scale between 3 and 7 is recommended. For best results, use CFG 6 with the Euler a sampler.
Refer to the example images to understand how the model should be used.
The model is designed to be simple and straightforward to prompt. Use the following tag order:
Number of characters
Character Tags
All remaining tags
Example: 1girl, rem \(re:zero\), standing, masterpiece, upper body
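The tag order above can be sketched in a few lines of Python. Note that `build_prompt` is a hypothetical helper for illustration, not part of any official tooling:

```python
# Sketch of the recommended tag order: character count first, then
# character tags, then all remaining tags. `build_prompt` is a
# hypothetical helper, not an official utility.

def build_prompt(count_tag, character_tags, other_tags):
    """Join tags in the order recommended on this page."""
    return ", ".join([count_tag, *character_tags, *other_tags])

prompt = build_prompt(
    "1girl",
    ["rem \\(re:zero\\)"],  # parentheses escaped, as in the example above
    ["standing", "masterpiece", "upper body"],
)
print(prompt)
# 1girl, rem \(re:zero\), standing, masterpiece, upper body
```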
Here are some resolution options for SDXL:
1536 x 640
1344 x 768
1216 x 832
1152 x 896
1024 x 1024
896 x 1152
832 x 1216 (most recommended)
768 x 1344
640 x 1536
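The sizes listed above are the standard SDXL aspect-ratio buckets: every dimension is a multiple of 64, and the pixel count stays close to SDXL's native 1024 x 1024. A quick sanity check:

```python
# Verify that each listed resolution is a valid SDXL bucket:
# dimensions divisible by 64 and a pixel count near 1024*1024.
resolutions = [
    (1536, 640), (1344, 768), (1216, 832), (1152, 896), (1024, 1024),
    (896, 1152), (832, 1216), (768, 1344), (640, 1536),
]

native = 1024 * 1024
for w, h in resolutions:
    assert w % 64 == 0 and h % 64 == 0
    # All listed buckets stay within ~7% of the native pixel count.
    assert abs(w * h - native) / native < 0.07

print("all buckets valid")
```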
Lora use:
NoobAI LoRAs generally work better than Illustrious LoRAs.
Versions 2 and 3 are less compatible with most LoRAs trained on Illustrious or NoobAI, because they were trained further away from both base models. Style A remains the most compatible option.
______________________________________________________
Version and License info:
Versions below V1 are merges based on ANIMAGINE XL 3.0.
V1 uses Illustrious-XL & WAI-NSFW-illustrious-SDXL with additional training.
V2 is trained on its own and does not include any extra merge. Its base was V1; it was trained to include newer or missing characters and was also used for further training tests.
V3 is a merge of Illustrious-XL and V2.2 to fix some issues, and was also trained to include newer or missing characters.
All models use the Fair AI Public License 1.0-SD.
Description
- Trained on up-to-date data (data cut last month). However, characters such as Ye Shunguang (ZZZ) still require many support tags, and only her head is depicted correctly.
- Slightly improved prompt adherence.
- Brighter faces, less in shadows.
- More stable overall output, though somewhat more restrictive.
- Fewer unintended multiple-view compositions.
- Characters newly trained in version 3.2 are now better trained in.
Worse:
- More sensitive to poorly structured prompts or a low number of prompt tags.
- Eye colors may occasionally differ from what is specified in the prompt (can be avoided by lowering CFG to 6).
- Style can be more inconsistent on some concepts, especially for characters.
Notes:
- Style can be inconsistent, especially for characters. The style merge is kept only strong enough to avoid breaking artist and character tags, and so that additional LoRAs don't break the model too much.
- Use "Euler a" as the sampler; it is the sampler the model was primarily tested with.
- If only one character is desired, use “1girl” and add “solo” if necessary.
- To avoid NSFW art, put "nude" into the negative prompt.
The model was additionally trained for longer on a more unbiased dataset.
It is now trained in bf16 with Prodigy plus simplified AdEMAMix, Factored + stochastic rounding, OrthoGrad to prevent overfitting, and Kourkotas beta active.
Comments
Amazing model! I wanted to make a merge with this checkpoint; I was wondering if that's alright?
Sure, it's alright!
So when you make a new or updated model, does all the character "data" get carried over or do you lose some in the process? not sure how model training works. Either way I love Hassaku XL, keep up the good work!
Yes, it gets carried over. But some characters get a bit lost when their tags are similar. For example, when I train in "seed (zenless zone zero)", some of her traits bleed into "trigger (zenless zone zero)", because both tags contain "(zenless zone zero)". That makes "trigger (zenless zone zero)" less precise; only training longer on multiple characters makes the model more distinct between "trigger" and "seed". I noticed this because trigger actually works a bit worse compared to 3.2. So we got seed into the model, but trigger works worse. Overall, though, all characters are carried over (it would be strange if they weren't).
Is it the same if you use underscores instead of spaces, at least for character names?
@Nyaa314 Please don't use underscores; the model was not trained with underscores.
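Since the model expects space-separated tags with escaped parentheses (as in the examples on this page) rather than raw Danbooru tags, a small normalization step covers both rules. `normalize_tag` is a hypothetical helper, not an official utility:

```python
# Convert a raw Danbooru-style tag (underscores, unescaped parentheses)
# into the space-separated, escaped form this model expects.
# `normalize_tag` is a hypothetical helper, not an official utility.

def normalize_tag(tag: str) -> str:
    tag = tag.replace("_", " ")
    return tag.replace("(", r"\(").replace(")", r"\)")

print(normalize_tag("ukinami_yuzuha_(zenless_zone_zero)"))
# ukinami yuzuha \(zenless zone zero\)
```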
Tried the update, and I guess some artists still aren't in here, because I tried "mustblove" and "geulimykun" and got base results. I'm assuming I just have to use LoRAs for them?
I mostly focus on adding new characters. mustblove has only 96 images on Danbooru, so a LoRA is definitely needed. As for geulimykun_(skbyunea413), there are only two images of him in my "do not forget" dataset; again, a LoRA is needed.
I can determine which characters are new by checking Danbooru, but updating artists is much more complicated: you have to test which artists work with my model and which don't. Because of this, which artists get trained in is essentially random.
Could you update the Tensor post?
I feel like I'm missing something.
All other Illustrious-based models I used work great but this one produces almost entirely malformed results (blurry, disfigured, oversharpened, "melting", etc.)
And it happened on multiple versions I tried
Since it's very popular and highly rated I doubt it's an issue with the model itself, but clearly there has to be something I have to be doing wrong with my generations.
Any ideas? Is there something I should be doing significantly differently from standard IL models?
what promt do you use? What sampler?
I also noticed I tend to get extremely varied results; sometimes the model isn't very consistent. CFG 3 tends to be the absolute worst, so I stick to CFG 5, but other than that, with the default workflow and the recommended settings from this page, results are quite inconsistent.
@Ikena Happens for various prompts. The only thing in common is that I tend to do rather simple short prompts (basic quality stuff, 1girl, character, pose, maybe some extra details).
As for the sampler, it seems to happen both on my go-to DPM++ 2M and on the (iirc recommended) Euler a.
@MIC132 Short prompts are a bit of a weakness, as I stated in the update, and DPM++ 2M produces rather blurry results. It seems I ran into the same issues I had in V2, only a bit weaker. I'll make a new update for this in 3-4 days.
I am having the same problem; all other Illustrious base models work fine for me.
I just tried this in my A1111 and it works flawlessly
@KaiserWilhelm Are you doing long prompts?
I'm trying to figure out if that might be the core issue, since I almost always do short prompts (I almost never go over 10 tags).
I also use A1111 so the software clearly isn't the issue.
Always been a fan of Hassaku, but I'm getting a serious case of AI-face with V3
After some testing, I agree. I'll look into whether I can make an update where the eyes/faces are better.
@Ikena Is the issue inherent to Illustrious-XL? After some testing, it seems like V2.2 doesn't have the issue
@seulvanion Even 3.2 did not have this issue as much as 3.3
Hi, for any future version, could you train the model to generate more view angles (like more foreshortening angles, more focus angles, or more extreme angles from above or from behind) and enhance the existing ones? I've been using this model since v1.3 and it's my go-to for AI anime generation; it just keeps getting better with every update. I'm excited to see what V4 or V5 will be like. Very good model, keep improving!
All image composition tags (https://danbooru.donmai.us/wiki_pages/tag_group%3Aimage_composition) should be trained in. I can't really change a composition tag; it's already hard-baked into the model, except when a composition tag is weakly trained and needs improving. Any standard tag like "from behind" will hardly change. I more or less only try to make those tags more stable with style merges.
My base-trained model behaves like pure Illustrious or NoobAI. So it's always a challenge to make it work like every other Illustrious model, like WAI, which doesn't get trained but gets incest-merged to be more stable. I can't do that, because it would lose the added knowledge.
Testing 3.3, and despite training supposedly being up to last month, Yuzuha from ZZZ barely works.
Meanwhile, Eri from Blue Archive works pretty well, despite being a newer character with less art. What gives?
Do you use the right tag, "ukinami yuzuha"? E.g. 1girl, ukinami yuzuha, red hair, green eyes, skirt, cardigan. But true, alice thymefield works somehow much better, even though yuzuha was trained in alongside her. Eri doesn't have a complicated design, so it seems she got in without any problems (I don't test every new character trained in; it's probably over 1000 new characters from the initial Illustrious model release to now).
@Ikena I'm having a similar issue, but with yumemizuki mizuki from Genshin Impact. Despite being a character from a while ago, she doesn't seem to work at all, even compared to very new characters like Lauma and Columbina. I've tried "yumemizuki mizuki" and also adding "\(genshin impact\)", as I've found adding the game tag helps a lot for most characters, even if it's not technically in their danbooru tag.
@Cloverful I managed to get Yuzuha working with this prompt: (ukinami yuzuha \(zenless zone zero\), green eyes, red hair, twintails:1.2)
idk why adding the game tag like that helps, but it does, a lot. I also put "(yixuan \(zenless zone zero\):1.2)" in negative prompt
I didn't test her outfit, and it will probably need some tweaking for consistency, but its close.
@Ikena ...Huh, adding in "cardigan" as a tag did the trick. Without it included, Yuzuha isn't looking right at all.
So yeah, she's in the database. And I'll admit I have no idea how model training works, but it might be worth looking into why Yuzuha needs that specific tag to come out consistently.
What about ADetailer? With WAI-illustrious-SDXL I had no problems, but with this v1.3-Style A the face mutated.
I think I fixed it by deleting unnecessary tags that weren't related to the face in the detailer, but I didn't need to do that with WAI.
In v3.3, if you put too many LoRAs on, the image would become blurry or disintegrate, so you had to be selective.
Pretty much issue with any model if you put too many (3 or more) LoRAs / too long prompts ^^;
If for some reason you need the LoRAs, try lowering the CFG to 3-4.5; that yielded better results for me when using multiple LoRAs. Of course, it adheres less to the prompt and is more creative.
Tweaking out a little: am I the only one whose Hassaku+Detailed Eyes+USNR+Pony+Genesis setup suddenly started churning out very basic anime style instead of the semi-realism that set of LoRAs gets? The style completely shifted earlier today and I have no idea why.
Did you ever get an answer to this?
@thelastlimit No.
Please add the Scarlets' from Nikke. 🙏😭