Hassaku aims to be an anime model with a bright and distinct anime style.
Join my Discord for everything related to anime models and art.
If you'd like to support my work, you can do so on SubscribeStar. Every bit helps and is truly appreciated!
____________________________________________________________
Supporters:
Thanks to my supporters tamashiicolle, pttcot, SETI and Kodokuna
____________________________________________________________
Using the model:
The model was trained primarily on images with minimal disruptive elements, such as floating text, logos, speech bubbles, or signatures. If any of these elements appear in an image, please include "signature" in your negative prompt.
Danbooru metadata and franchise tags were not used in training, so please avoid them in prompts. This includes metadata tags such as "highres" and franchise tags like "re:zero kara hajimeru isekai seikatsu".
A CFG scale between 3 and 7 is recommended. For best results, use CFG 6 with the Euler a sampler.
Refer to the example images to understand how the model should be used.
The model is designed to be simple and straightforward to prompt. Use the following tag order:
Number of characters
Character Tags
All remaining tags
Example: 1girl, rem \(re:zero\), standing, masterpiece, upper body
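As an illustration, the tag order above can be assembled programmatically. This is only a sketch; `build_prompt` is a hypothetical helper, not part of the model or any official tooling:

```python
def build_prompt(character_count: str, character_tags: list[str], other_tags: list[str]) -> str:
    """Join tags in the recommended order:
    character count first, then character tags, then all remaining tags."""
    return ", ".join([character_count, *character_tags, *other_tags])

# Reproduces the example prompt from the description.
prompt = build_prompt("1girl", [r"rem \(re:zero\)"], ["standing", "masterpiece", "upper body"])
print(prompt)  # 1girl, rem \(re:zero\), standing, masterpiece, upper body
```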
Here are some resolution options for SDXL:
1536 x 640
1344 x 768
1216 x 832
1152 x 896
1024 x 1024
896 x 1152
832 x 1216 (most recommended)
768 x 1344
640 x 1536
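As a sketch of how the list above might be used, here is a hypothetical helper that picks the trained resolution closest to a desired aspect ratio. The resolution list mirrors the table; the function itself is an assumption, not part of any official tooling:

```python
# Trained SDXL resolutions from the list above, as (width, height).
TRAINED_RESOLUTIONS = [
    (1536, 640), (1344, 768), (1216, 832), (1152, 896), (1024, 1024),
    (896, 1152), (832, 1216), (768, 1344), (640, 1536),
]

def closest_resolution(target_ratio: float) -> tuple[int, int]:
    """Return the trained resolution whose width/height ratio
    is closest to the requested aspect ratio."""
    return min(TRAINED_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target_ratio))

print(closest_resolution(2 / 3))  # portrait 2:3 -> (832, 1216), the recommended option
```

Sticking to these exact buckets matters: as noted later in the comments, stretching one side beyond the trained range (e.g. 1536 x 1024) tends to produce duplicate limbs and faces.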
Lora use:
NoobAI LoRAs mostly work better than Illustrious LoRAs.
Versions 2 and 3 are less compatible with most LoRAs trained on Illustrious or NoobAI, because they have been trained further away from both bases. Style A remains the most compatible version for them.
______________________________________________________
Version and License info:
Versions below V1 merge ANIMAGINE XL 3.0.
V1 uses Illustrious-XL & WAI-NSFW-illustrious-SDXL with additional training.
V2 is trained on its own and does not include any extra merge. Its base was V1; it was trained to include newer or missing characters and was also used for further training tests.
V3 is a merge of Illustrious-XL and V2.2 to fix some issues, and was also trained to include newer or missing characters.
All models use the Fair AI Public License 1.0-SD.
Comments (54)
Veni. Vidi. Veni.
Veni. Veni. Veni. Veni. Veni. Veni. Veni. Veni. Veni.
very good to see v3.1 model is coming
Wait I love this one so much....
nice
Since Patreon banned me (losing me all my funding) because of NSFW-capable AI models, I would be really happy if someone wants to support me on my new SubscribeStar account 👍
so Patreon does not allow this kind of page?
@Nephilim i think it’s time for people to be aware of certain things. patreon doesn’t really want nsfw content because visa and mastercard don’t want it. look it up online, you’ll see how much they want to censor. they want to censor anime, manga, horror games, pornographic games, and more. why do you think suddenly my favorite (legal) manga site is shutting down? because visa and mastercard want to close it. they have enormous power and can do whatever they want. ordinary people should stop using visa and mastercard.
we shouldn’t promote censorship, we should fight against it.
@Nephilim yeah pretty much what @iokk said
@Ikena well, got it. I'm really sad that this is happening; I decided to monetize something I like to do, and now I can't, damn
The ban sucks, but this line has always been in Patreon’s TOS:
"Accounts that primarily allow access to generators, tools, or software that use machine learning or AI technology to produce 18+ nudity or explicit imagery are not permitted on, and may not be funded through, Patreon."
Just use the MIR system :D
"Metadata and franchise tags are excluded"
From what I can see from the posts, pretty much everybody uses quality tags. Does this mean they're just using them out of habit and they don't actually do anything for the generation? What about embeddings? Also, I can see the example images use negative tags like "bad quality". Do those work?
Quality tags are not metadata or franchise tags; I train them in and they should be used. "bad quality" too, but it doesn't have as much impact as "worse quality".
Cool. gonna try this for sure
How can I use Hassaku XL (Illustrious) v2.2 again? please
It needs enough bids to be on the generator.
I'm a bit late to the party. I was wondering if the latest V31 version has updated character name recognition, for example, adding characters from new anime series.
The data cut-off was early September, with data added during the training process. Some characters like alice_(genshin_impact) will not work even if already included. Only characters with at least 100 images on Danbooru were added to the training, and most characters still need supporting tags.
@Ikena I see. I tried using the character name 'yanami_anna' (which is correctly recognized by some other models), but V31 couldn't identify her. Anyway, I'll try some other options.
@Black130516 yanami_anna seems an odd one. I add data by looking at how old characters' tags are, covering the last 2 years (the timeframe missing from base Illustrious). Her tag is already 4 years old, and at that time she only had two images, https://danbooru.donmai.us/posts/4818509?q=yanami_anna and https://danbooru.donmai.us/posts/4632491?q=yanami_anna. She only got popular last year, so she is not in my training data. I will add her now. I do this to avoid training on data already included in the base Illustrious model (it saves training costs).
@Ikena No big deal. I can always use a LoRA. It's just my weird little hobby—I love testing if new models have added support for those new, niche tags. Haha!
Cool
3.1 is so magnificent...
Can I merge this model and post it on Civitai, with credits to you?
Sure you can
Where can I get the "sdxl_vaeFix.safetensors" used in the sample image?
I can't find it anywhere.
The output results are not as good as the sample, probably because the VAE is different.
@Ikena Thank you.
Does this mean you're using "sdxl_vae_fp16_fix.safetensors → sdxl.vae.safetensors"?
Sorry, I couldn't tell because the names are different.
I thought the internal names might be the same, so I generated output myself to check, but the VAE doesn't seem to appear in PNGinfo, so I'm not sure why...
@bafangsui156 What do you use? My model has it baked in; normally UIs like auto1111 use it automatically.
@Ikena When I first commented, I was using Stable Diffusion Forge with the VAE set to automatic.
While waiting for your reply, I realized that the lack of a "scheduler" was the problem, so I installed Forge Classic. However, even when outputting with the VAE on automatic, the results were unstable.
After that, I installed the "sdxl.vae.safetensors" you recommended and generated about 1,000 images, but the results were still inconsistent.
So, looking back at the samples now,
I noticed that compared to alice thymefield, trigger \(zenless zone zero\)'s head outline is significantly thicker. The highlights are also completely different.
Cipher \(honkai: star rail\) and the first sample are so different that they look like different models.
Comparing them like this, the art style of v3.1 doesn't seem very stable.
In conclusion, I think the art style becomes unstable when you add character names.
I wanted to output cipher \(honkai: star rail\) in a style similar to the first sample, but it didn't work out, so I'm disappointed.
I will try generating about 1,000 images with the VAE set to automatic.
@bafangsui156 Cipher \(honkai: star rail\) uses an artist tag (artist khyle.), but the low style interference with characters is intentional, so as not to change the style and aesthetics of more unique-looking characters (and also to not make the model as "stiff" as the others).
3.1 looks great, it finally works with close up and portrait. also details are better.
I was waiting for a fix version, but after a lot of testing it's really hard to get good results for 2girls because of duplicates or extra faces. The same goes for solo subjects at high resolution: dupes appear and the solo prompt is ignored. "solo" isn't in nearly every image without a reason, which makes it kind of hard if you want a duo in an image and not three characters or a duplicated one. The same goes for anatomy: for basic image generation it's still fine, but as soon as you try some more action it breaks, sadly. Pros: no skin shine anymore, and hand + finger generation is of course better. That's it. I hope the next version can do better without duplicates when asked for more characters, and without extra hands/legs. Which resolution? 1536x1024, but it's the same if I use a lower resolution; it can happen there too. What I would love is if the next version keeps the pros I wrote and loses the negatives (stable anatomy, no dupes or extra faces appearing out of a body), with no franchise tags of course and just a basic prompt set. Don't give up! These are just my test results after some hours, because I love testing stuff.
I actually opened the model up more, to allow more freedom and more variety in poses. Admittedly, my model has a somewhat higher chance of messy limbs with multiple people as a tradeoff. But please don't use the outer extreme of 1536 on one side and extend the other side over the max; the trained resolution for that is 1536 x 640. Maybe try 1216 x 832. Thanks for the review! 👍
@Ikena Thanks for the fast answer. I know; my first test results were at 1280x1024 and the same thing happens there: 1 to 3 dupes can appear without a solo prompt, and even with a solo prompt it can be ignored. That's why I said lower resolutions have the same issue as higher ones. Stable tests are a good point at higher resolutions plus full body; you can see there whether the anatomy is nice or not :-) and I really test a lot.
does it support artist tag?
Why are some artist styles from Danbooru not working? Is it because there are fewer than 100 samples?
Yes. The more images of a certain artist, the better the results when using that tag.
very nice love it!
very good
I noticed that version 3.0 has started generating noticeably worse results with the tags I used a week ago... Does this mean there have been changes or am I missing something?
Nothing changed in 3.0; it has been the same model for 3 months.
If you're using the Civitai generator: they changed their generator settings, so you have to scroll down to change it back to the old way.
Based on feedback that came in for 3.1, it seems the style merge is too weak now. The style is too inconsistent and not strong enough to prevent total collapse of images or messy results if the prompts are a bit off the norm. A new update is in the works, and I will upload it without early access.
Note: this only applies if you use the model pure, without any extra LoRAs.
You can see what i do or talk to me on my discord:
https://discord.com/channels/1072156418402156624/1425222621955625250
Any feedback is welcome!
Very good
In my opinion, version 2.2 is the best
v3.1 is a big improvement over v3.0, but I’m having trouble controlling it. It might just be user error, and I’m open to any tips because I really like the results when it works.
This model is useless now after previously making the best gens for me...
Great model by itself but even better when merged with 1.3 style a imo.
awesome
👍