This checkpoint was trained on 3.3M images of normal- to hyper-sized anime characters. It focuses mainly on breasts/ass/belly/thighs, but now handles more general tag topics as well. The dataset is roughly 50/50 anime and furry images as of v8. See the changelog article below for more version details and future plans.
Note: This will be my final SD1.x model. I wanted to see what the hyperfusion dataset was really capable of on SD1.5, so I let it train on 2x3090s for 10 months to squeeze every bit of concept knowledge out of it. This is the best concept model I've trained so far, but it still has the usual SD1.x jankiness. I probably kept the text encoder LR too high for too long (0.5x -> 0.3x).
Big shoutout to stuffer.ai for letting me host my model on their site to gather feedback. It was critical for resolving issues with the model early on, and a great way to see what needed improvement over time.
V9 is a v_pred model, so you will need to use the YAML file in A1111, or the v_pred node in Comfy, along with cfg_rescale=0.6-0.8 in both. A1111 will also need the CFG_Rescale extension installed.
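If you run this outside A1111/Comfy, here is a minimal hedged sketch of the equivalent setup in Hugging Face diffusers. The checkpoint filename is a placeholder, and this is my own translation of the YAML/v_pred-node requirement, not the author's workflow:

```python
# Minimal sketch: configure an SD1.x v-prediction checkpoint in diffusers.
# The filename below is a placeholder, not an official download name.
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_single_file("hyperfusion_v9_vpred.safetensors")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",   # what the A1111 YAML / Comfy vpred node switch on
    rescale_betas_zero_snr=True,      # pairs with the --zero_terminal_snr training
    timestep_spacing="trailing",      # recommended alongside zero terminal SNR
)
pipe.to("cuda")
```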
I posted an old example ComfyUI workflow here: https://civarchive.com/images/64978187
Other links:
The OG hyperfusion LoRAs can be found here https://civarchive.com/models/16928
There is also a backup HuggingFace link for these models
I uploaded the 1.4 million custom tags used in hyperfusion here, for integrating into your own datasets
Changelog Article Link
Recommendations for the v9_vpred finetune (a code sketch applying these settings follows the list):
sampler: Anything that is not a Karras sampler. Don't use Karras! Training with --zero_terminal_snr makes those samplers problematic. You will also need the uniform scheduler in A1111, or at least "simple"/"normal" in Comfy.
negative: I tested each of these tags separately to make sure they had a positive effect:
worst quality, low rating, signature, artist name, artist logo, logo, unfinished, jpeg artifacts, artwork \(traditional\), sketch, horror, mutant, flat color, simple shading
positive: "best quality, high rating" for the base style I trained into this model, more details in Training Data docs
cfg: 7-9
cfg_rescale: 0.6-0.8. cfg_rescale is required for this v_pred model; lower values tend to give less body horror, but darker images.
resolution: 768-1024 (closer to 896 for less body horror)
clip skip: 2
zero_terminal_snr: Enabled
styling: You will want to choose a style first; the default style is pretty meh. Try the new artist tags included in v8+; all tags can be found in tags.csv by searching for "(artist)". See the example images for art styles.
Lora/TI: LoRAs trained on other models will not work with this model; even LoRAs trained on other v_pred models are not guaranteed to work here.
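Pulling these settings together, a hedged diffusers example (same caveats as the sketch above; the prompt is illustrative, clip_skip needs a recent diffusers version, and parenthesis escaping is an A1111/Comfy convention, so plain parentheses are used here):

```python
# Sketch applying the v9 recommendations: cfg 7-9, cfg_rescale 0.6-0.8,
# ~896px resolution, clip skip 2, and the tested negative tags.
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_single_file("hyperfusion_v9_vpred.safetensors")  # placeholder
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction",
    rescale_betas_zero_snr=True, timestep_spacing="trailing",
)
pipe.to("cuda")

image = pipe(
    prompt="best quality, high rating, 1girl",  # add artist/style tags for styling
    negative_prompt=(
        "worst quality, low rating, signature, artist name, artist logo, logo, "
        "unfinished, jpeg artifacts, artwork (traditional), sketch, horror, "
        "mutant, flat color, simple shading"
    ),
    guidance_scale=8.0,      # cfg: 7-9
    guidance_rescale=0.7,    # cfg_rescale: 0.6-0.8
    height=896, width=896,   # resolution: 768-1024, ~896 for less body horror
    clip_skip=2,
).images[0]
image.save("hyperfusion_sample.png")
```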
Recommendations for v8 LoRA:
sampler: Anything that is not a Karras sampler. Don't use Karras! Training with --zero_terminal_snr makes those samplers problematic.
Lora/TI: If you are using LoRAs/TIs trained on NovelAI-based models, they might do more harm than good. Try without them first.
negative: low rating, lowres, text, signature, watermark, username, blurry, transparent background, ugly, sketch, unfinished, artwork \(traditional\), multiple views, flat color, simple shading, rough sketch
cfg: 8 (it needs less than the hyperfusion LoRAs)
resolution: 768-1024 (closer to 768 for less body horror)
clip skip: 2
styling: Try the new artist tags included in v8, all tags can be found in the tags.csv by searching for "(artist)"
Tag Info (you definitely want to read the tag docs; see the Training Data section)
Because hyperfusion is a conglomeration of multiple tagging schemes, I've included a tag guide in the Training Data download section. It describes how the tags work (similar to Danbooru tags), which tags the model knows best, and all my custom labeled tags.
For the most part, you can use the majority of tags from Danbooru, Gelbooru, r-34, and e621 related to breasts/ass/belly/thighs/nipples/vore/body_shape.
The best method I have found for tag exploration is to go to one of the booru sites above, copy the tags from any image you like, and use them as a base; there are simply too many tags trained into this model to test them all. (A small tag-conversion sketch follows.)
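Booru sites separate tags with underscores and use bare parentheses, while A1111/Comfy prompts typically want comma-separated tags with spaces and escaped parentheses. A small hedged helper for that conversion (my own convention, not part of the hyperfusion tooling):

```python
# Convert copied booru tags ("1girl huge_breasts tag_(qualifier)") into a
# comma-separated prompt string with A1111-style escaped parentheses.
def booru_tags_to_prompt(raw: str) -> str:
    tags = [t for t in raw.split() if t]
    cleaned = []
    for tag in tags:
        tag = tag.replace("_", " ")
        tag = tag.replace("(", r"\(").replace(")", r"\)")
        cleaned.append(tag)
    return ", ".join(cleaned)

print(booru_tags_to_prompt("1girl huge_breasts ganguro_(style)"))
# -> 1girl, huge breasts, ganguro \(style\)
```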
Tips
Because of the size and variety of this dataset, tags tend to behave differently than in most NovelAI-based models. Keep in mind that prompts from other models might need to be tweaked.
If you are not getting the results you expect from a tag, find other similar tags and include those as well. This model tends to spread its knowledge of a tag across related tags, so including more of them increases your chances of getting what you want.
Using the negative "3d" does a good job of making the image more anime like if it starts veering too much into a rendered model look.
Ass-related tags have a strong preference for back shots. Try a low-strength ControlNet pose to correct this, or add one or more of "ass focus, from behind, looking back" to the negatives. The new "ass visible from front" tag can help too.
...more tips in tag docs
Extra
This model took me months of failures and plenty of lessons learned (hence v7)! I would eventually like to train a few more image classifiers to improve certain tags, but that's all a future dream for now.
As usual, I have no intention of monetizing any of my models. Enjoy the thickness!
-Tagging-
The key to tagging a large dataset is to automate it all. I started with the wd-tagger (or a similar Danbooru tagger) to append some common tags on top of the original tags. Eventually I added an e621 tagger too, but I generally tag with a limited set of tags rather than the entire tag list (some tags are not accurate enough). Then I trained a handful of image classifiers (breast size, breast shape, innie/outie navel, directionality, motion lines, and about 20 others) and let those tag for me. They not only improve on existing tags, but add completely new concepts to the dataset. Finally, I converted similar tags into one single tag, as described in the tag docs (I've stopped doing this; with 3M images it really doesn't matter as much). A sketch of that merge step follows.
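As an illustration of the merge step, a hedged sketch; the caption layout (one comma-separated .txt per image) is the common kohya convention, and the synonym map here is made up:

```python
# Sketch: append classifier/tagger output to per-image caption files and
# collapse synonym tags into one canonical tag. Paths and SYNONYMS are
# placeholders, not the actual hyperfusion mappings.
from pathlib import Path

SYNONYMS = {"gigantic_breasts": "huge_breasts", "huge_butt": "huge_ass"}  # hypothetical

def merge_tags(caption_file: Path, new_tags: list[str]) -> None:
    tags = [t.strip() for t in caption_file.read_text().split(",") if t.strip()]
    tags += new_tags
    tags = [SYNONYMS.get(t, t) for t in tags]   # collapse similar tags
    seen: set[str] = set()
    merged = [t for t in tags if not (t in seen or seen.add(t))]  # dedupe, keep order
    caption_file.write_text(", ".join(merged))

merge_tags(Path("dataset/0001.txt"), ["huge_breasts", "motion_lines"])
```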
Basically, any time I find it's hard to prompt for a specific thing, I throw together a new classifier. So far the only ones that don't work well are those that try to classify small details in the image, like signatures.
Starting in v9, I will be including ~10% captions alongside the tags. These captions are generated with CogVLM.
I used this to train my image classifiers:
https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification
Ideally, I should train a multi-label-per-image classifier like the Danbooru tagger, but for now these single-class-per-image classifiers work well enough (a sketch of applying one at tagging time follows).
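A hedged sketch of applying one of those single-label classifiers with the transformers image-classification pipeline; the model directory and the 0.8 confidence threshold are my assumptions:

```python
# Run a fine-tuned single-class-per-image classifier over a folder and emit
# a tag when it is confident. "./breast_size_classifier" is a hypothetical
# output directory from the example script linked above.
from pathlib import Path
from transformers import pipeline

clf = pipeline("image-classification", model="./breast_size_classifier")

for img in sorted(Path("dataset").glob("*.png")):
    top = clf(str(img))[0]  # highest-scoring label, e.g. {"label": "huge_breasts", "score": 0.93}
    if top["score"] >= 0.8:
        print(f"{img.name} -> {top['label']}")
```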
-Software/Hardware-
The training was all done on a 3090 running Ubuntu. The software used is kohya's sd-scripts trainer, since it currently has the most options to choose from.
Description
Increased image count to 1.4 million.
Included artist tags for better styling choices, search for "(artist)" in the tags csv.
This version was trained on SD 1.5, so there is no NovelAI influence in this checkpoint unlike previous versions.
More image classifiers trained, and existing classifiers improved (list of classified tags under Training Data section)
Training Notes (a hedged sd-scripts reconstruction follows the list):
~1401k images
LR 3e-6
TE_LR 2e-6
batch 8
GA 32
default Adam optimizer
scheduler: linear
base model SD1.5
No custom VAE, and none needed for inference unless you prefer one
flip aug
clip skip 2
375 token length
bucketing at 768 max 1024
bucket resolution steps 32 for more buckets
tag drop chance 0.1
tag shuffling
--min_snr_gamma 4
--ip_noise_gamma 0.02 (lower than v7)
--zero_terminal_snr
custom code to drop out 75% of tags 5% of the time, to hopefully improve results with short prompts
about 70 days training time (pray for my GPU)
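For reference, a hedged reconstruction of roughly what these notes look like as a kohya-ss/sd-scripts run, wrapped in a small Python launcher. The flags shown exist in sd-scripts, but the script choice, paths, and dataset/metadata arguments are assumptions, and the custom 75%/5% tag dropout has no stock flag:

```python
# Approximate sd-scripts invocation rebuilt from the notes above.
# Paths are placeholders; dataset/metadata args are omitted.
import subprocess

cmd = [
    "accelerate", "launch", "fine_tune.py",
    "--pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5",
    "--train_data_dir=/data/hyperfusion",  # placeholder
    "--learning_rate=3e-6",
    "--train_text_encoder",                # TE_LR 2e-6 noted above; a separate TE LR flag may vary by version
    "--train_batch_size=8",
    "--gradient_accumulation_steps=32",
    "--lr_scheduler=linear",
    "--clip_skip=2",
    "--max_token_length=225",              # sd-scripts caps at 225; the notes say 375
    "--enable_bucket",
    "--max_bucket_reso=1024",
    "--bucket_reso_steps=32",
    "--flip_aug",
    "--shuffle_caption",
    "--caption_tag_dropout_rate=0.1",
    "--min_snr_gamma=4",
    "--ip_noise_gamma=0.02",
    "--zero_terminal_snr",
]
subprocess.run(cmd, check=True)
```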