IllusXL Negative Embeddings
Use these with caution: YOUR MILEAGE MAY VARY.
Embeddings DO change how your prompt works: they operate on token vectors.
THESE ARE EMBEDDINGS, NOT LoRAs. They MAY OR MAY NOT work on NoobAI/NAIXL.
These are built on different versions of "NEGATIVE" nerfs and quirks, and may also include files that relate to other models.
SafeNeg: Should keep things "SAFER", but it's Illustrious, and even blanking your prompt gives weird results... shudders. It should prevent some of the obvious stuff, and it gives some quality upgrades too.
Illust_Neg - ALL-ROUNDER negative with a use case for most things. Think of it as an "EASY NEGATIVE".
WestRlsm_Illls - Meant to make things a bit more realistic. Use this in your negative if you need a little more diversity and realism.
DialDownNegILLS - Usable on its own or in combination with the others. It should dial down a chunk of the "REALISTIC" weirdness of most Anime/Comic models. On semi-realistic and painterly ones it can flatten the style a little more.
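A minimal sketch of how these could be dropped into a negative prompt. This assumes Automatic1111/Forge-style usage (embedding referenced by filename, `(token:weight)` attention syntax); the exact filenames are assumed to match the downloaded .safetensors files:

```text
# Assumed usage: place the .safetensors files in your UI's embeddings folder,
# then reference them by filename in the NEGATIVE prompt, e.g.:
Negative prompt: Illust_Neg, (DialDownNegILLS:0.8), worst quality, lowres
```

The weight on DialDownNegILLS is only illustrative; lowering it is one way to soften the style-flattening effect mentioned above.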
I'm bored of splitting my models and am going back to mushing them into one.
IF SOMEHOW you are missing one check this: https://ko-fi.com/s/d29850838b
I'm available for commissions. Yes, this is Duskfallcrew, but also Earth as well. Combined Powers make more stupid!
Discord: https://discord.gg/HhBSvM9gBY
Twitch: https://twitch.tv/duskfallcrew
Bluesky: https://bsky.app/profile/duskfallcrew.bsky.social
Sponsor models here: https://ko-fi.com/duskfallcrew/commissions
Find backups here: https://huggingface.co/EarthnDusk/
Reclaiming our TensorArt Space: https://tensor.art/u/611011406535381539
LoraTrainer: https://github.com/Ktiseos-Nyx/Lora_Easy_Training_Jupyter
Metadata Reader: https://github.com/Ktiseos-Nyx/Dataset-Tools
Find us on Arc Enc Ciel: https://arcenciel.io/users/77
PixAI: https://pixai.art/@ktiseosnyx/artworks/models
(Copyright 2025, 0FTH3N1GHT Productions, Earth & Dusk Media & Ktiseos Nyx)
Description
When you're doing FF-styled or even Furry stuff with tribal or facial marks, this will yank tattoos off and clean up your tribal/facial marks. Sometimes facial marks give EXTRA tattoos, depending on the model.
Comments
Could you create a dedicated version specifically designed to remove those baffling dialogue boxes, stray text, and artist watermarks?
There are YOLOv8 models for that. The more you tell Illustrious NOT to have that, the more it does it anyway. You can't train an embedding, nor use one, to remove that much text; embeddings only work SO far before you corrupt the prompt.
@ktiseos_nyx Aye, it can be quite annoying. At the end of the day, Illustrious is trained on a dataset containing a lot of drawings that include this, so it is deeply baked into the foundation of it. One thing about prompting that a lot of people don't quite get is that mentioning something, even in the negatives, will often make it more likely to appear in the image, just in a way modified by the prompt.
So something like "speech bubble" in the negative will mostly get rid of the standard speech bubble. But now the concept of text on the image is in play, and you will suddenly have text boxes or floating text pop up with a much higher likelihood, because these things are connected conceptually.
That makes training an embedding to get rid of it exceedingly difficult as well, because the prompts for the images might very well trigger these conceptual connections and create unexpected results. And that can happen with a lot of different things. That is why I tend to be very lean with my negatives, and try to be smart in how I write a prompt to avoid certain keyword combinations that might trigger an unwanted concept.
On the other hand? If one has a good understanding of how concepts in the model relate to each other, some very interesting stuff can be pulled out of the model. Even concepts that are not represented by any distinct keywords, but emerge from specific combinations and arrangements of the prompt. This allows you to do things like circumventing overly powerful concepts by having them emerge in a weaker and more malleable form: prompting for related concepts that lead the model toward the idea without actually prompting for it specifically.
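The concept-leakage point above can be sketched with two hypothetical prompt fragments (Automatic1111-style syntax; the exact tokens are illustrative, not tested recipes):

```text
# Heavy negative (may backfire): naming several text-related concepts can
# pull related ones (text boxes, floating text) into play conceptually.
Negative prompt: speech bubble, dialogue box, text, watermark, signature

# Lean negative (the approach described above): keep negatives minimal and
# steer the positive prompt away from comic-panel concepts instead.
Prompt: full illustration, single scene, clean background
Negative prompt: speech bubble
```

The second variant relies on the positive prompt to lead the model away from comic layouts rather than stacking negatives that keep the "text" concept active.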





