This embedding will tell you what is REALLY DISGUSTING🤢🤮
So please put it in the negative prompt😜
⚠This model is not trained for SDXL and may produce undesired results when used with SDXL.
If you use SDXL, these are recommended 👇
Other DeepNegative versions:
pony version: https://civarchive.com/models/831971
SDXL version: https://civarchive.com/models/407448
TOP Q&A
How do I use a TI (textual inversion) model?
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion
What is a negative prompt?
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Negative-prompt
[Special Reminder] If your webui reports the following errors:
- CUDA: CUDA error: device-side assert triggered
- Assertion -sizes[i] <= index && index < sizes[i] && "index out of bounds" failed
- XXX object has no attribute 'text_cond'
Please try using a model version other than 75T.
> The reason is that many scripts do not properly handle overly long negative prompts (more than 75 tokens), so choosing a smaller-token version can avoid the problem.
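The 75-token limit can be sketched as follows. This is an illustrative model of how A1111-style UIs handle long prompts, not the actual webui source: the prompt's tokens are split into chunks of at most 75, and each chunk is wrapped with CLIP's begin/end special tokens into a fixed 77-slot tensor. A script that skips the chunking step feeds an oversized tensor straight to CLIP, producing the size-mismatch errors above.

```python
# Illustrative sketch (not the actual webui code): how a long negative prompt
# is split into 75-token chunks before being wrapped into 77-slot CLIP tensors.

def chunk_tokens(token_ids, chunk_size=75):
    """Split a flat list of token ids into chunks of at most 75 tokens."""
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]

# A negative prompt that tokenizes to 80 tokens (e.g. the 75T embedding plus
# a few extra negative words):
tokens = list(range(80))
chunks = chunk_tokens(tokens)
print([len(c) for c in chunks])   # [75, 5] -> two 77-slot tensors after BOS/EOS
```

A script that passes all 80 tokens as a single tensor is what triggers the "size of tensor a (…) must match the size of tensor b (77)" errors.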
[Update:230120] What does it do?
This embedding learns what disgusting compositions and color patterns look like, including faulty human anatomy, offensive color schemes, upside-down spatial structures, and more. Placing it in the negative prompt goes a long way toward avoiding these things.
-
What are 2T, 4T, 16T, 32T?
Number of vectors per token
[Update:230120] What is 64T 75T?
64T: Trained for over 30,000 steps on mixed datasets.
75T: The maximum embedding size; trained for 10,000 steps on a special dataset (generated by many different SD models with special reverse processing).
Which one should you choose?
75T: The most "easy to use" embedding, trained on an accurate dataset created in a special way, with almost no side effects. It contains enough information to cover various usage scenarios. However, it may struggle to take effect on some well-trained models, and the change it makes may be subtle rather than drastic.
64T: Works with all models, but has side effects, so some tuning is required to find the best weight. Recommended: [(NG_DeepNegative_V1_64T:0.9):0.1]
32T: Useful, but can be a bit too much.
16T: Reduces the chance of drawing bad anatomy, but may draw ugly faces. Suitable for raising the architecture level.
4T: Reduces the chance of drawing bad anatomy, but has a slight effect on light and shadow.
2T: "Easy to use" like 75T, but with only a slight effect.
Suggestion
Because this embedding learns how to create disgusting concepts, it cannot improve picture quality on its own, so it is best used together with negative prompts such as (worst quality, low quality, logo, text, watermark, username).
Of course, it is completely fine to use it with other similar negative embeddings.
More examples and tests
draw building: https://imgur.com/5aX9yrP
hand fix: https://imgur.com/rDlsrgS
portrait (with PureErosFace): https://imgur.com/1Lqq595 https://imgur.com/V5kXBXz
fusion body fix:
How does it work?
I tried to make SD learn what is really disgusting using the DeepDream algorithm; the dataset is imagenet-mini (1,000 images chosen randomly from the full dataset).
DeepDream output is REALLY disgusting 🤮, and the process of training this model genuinely caused me physical discomfort 😂
Comments (117)
This is great. Can you recommend more textual inversions or settings to improve limbs, fingers, etc.?
If you want to draw the human body accurately, I recommend you use img2img. Any prompt engineering is cheap
AI: prompt is cheap, show me your photo 😏
I keep getting mutants. Waiting for an update. Using NG_DeepNegative_V1_75T
What happened? Do you have a picture you can upload? Or which prompts and model did you use?
I'd like to ask - what does [(embedding_name:0.9):0.2] mean? Is it to use 90% of the strength of the embedding at just 20% of this 90%?
nonono, "()" and "[]" have different functions
this is wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#attentionemphasis
It looks complicated but actually consists of three parts
A = embedding_name => very simple: use this embedding
B = (A:0.9) => use A, but reduce A's (attention) weight to 90%
C = [B:0.2] = [:B:0.2] = [from empty slot : to B prompt : 0.2] => add B to the prompt after 20% of the total steps
So, if your total steps is 45, then [(embed:0.9):0.2] means: use "embed" at 90% weight after step 9 (45 * 0.2 = 9).
The advantage of using "prompt editing" is that the embedding does not interfere with the early composition steps and only fixes errors later.
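The step arithmetic above can be written as a tiny helper (illustrative only; webui's exact rounding may differ slightly):

```python
def activation_step(total_steps: int, fraction: float) -> int:
    """Step after which a [prompt:fraction] edit becomes active."""
    return int(total_steps * fraction)

# [(embed:0.9):0.2] with 45 total steps: "embed" at 90% weight after step 9
print(activation_step(45, 0.2))   # 9
```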
@FapMagi Oh, thank you! The whole new spectrum of possibilities opens! :)
Nice, thank you !
can I combine this with EasyNegative?
yep, you can
First of all, I didn't test it, but I think there should be no problem using them together; not only EasyNegative, but also Bad Prompt v2 and any other negative embedding can be combined.
Note, however, that the more and longer the negative prompts, the less room your model has to "dream".
Rather than learning a negative prompt by training for a textual inversion of unwanted things, perhaps it would be useful to amass a collection of good looking images with CLIP Interrogator captions for each, and then optimize the textual inversion for a negative prompt that makes the captions better align to their images? Since the "negative" prompt is more of a base from which the conditional output is extrapolated away from, and doing this might result in a better "base".
adding this to my negative prompt changes all my male subjects into female
😂
Are you sure? I did not design this embedding's training dataset specifically around females.
Can you provide more information? I am very curious (does it mean the AI (model) decided that men are a disgusting subject, hahaha 😂)
Which model/prompt?
@FapMagi well I guess us men are indeed disgusting in one way or another lmao
seems men are ugly
Hi, where do I put this and how do I use it? I've never had one of these before.
Why does using this result in washed out colors with AbyssOrange?
Because of the VAE used. Select the MSE 840k VAE to fix it.
What do you mean by putting it in negative prompts? Do you mean the negative text box? So what is the download file for? You need to put that file somewhere, right?
In your stable diffusion folder there should be a subfolder called "embeddings". The download file should be placed in there, and then you can use this embedding by using the trigger word in the negative prompt text box.
@piscisi thank you <3
@no_ For this specific embedding the trigger word is "ng_deepnegative_v1_xxt", where "xx" depends on the version you downloaded. For example, for the 75T version the trigger word is "ng_deepnegative_v1_75t". You can see that on the right of the webpage in the information table.
@piscisi So this is totally different from the models we normally download, like Dreamlike or Deliberate? Those go to models and these go to the embeddings folder?
Btw, sorry I am asking here, but what's a checkpoint? 😬 I am a newbie so...
@aimakeart849 Yes, textual inversion embeddings (*.pt) and checkpoints/models/weights (*.ckpt or *.safetensors) are totally different things.
SD models go to stable-diffusion-webui\models\stable-diffusion\ but embeddings go to stable-diffusion-webui\embeddings\
You use a textual inversion by putting its file name into the prompt (or the negative prompt, as in this case). For example, "ng_deepnegative_v1_75t.pt" is invoked by adding the text "ng_deepnegative_v1_75t" (without the .pt extension) in the negative prompt field. The file name can be changed, and you can still use the embedding by typing the exact new name in the text box (the original download name doesn't matter). Keeping the original name just makes tracking easier (e.g., remembering what you have already downloaded).
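The filename-to-trigger-word rule described above can be sketched in a couple of lines (an illustrative helper, not part of any UI):

```python
from pathlib import Path

def trigger_word(embedding_file: str) -> str:
    """The trigger word is the embedding's file name without its extension."""
    return Path(embedding_file).stem

print(trigger_word("embeddings/ng_deepnegative_v1_75t.pt"))  # ng_deepnegative_v1_75t
print(trigger_word("embeddings/my_renamed_copy.pt"))         # my_renamed_copy
```

Rename the file and the trigger word changes with it, which is why keeping the original name makes tracking easier.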
@nnq2603 Got it, thanks 😊
Can you tell me what negative tags are included in this Deep Negative?
ng_deepnegative_v1_75t
What's the difference between pickletensors and safetensors? Is there any different on how to use them?
Pickle - can be pickled, aka can carry a virus
Safe - you're safe
One is full of trojans, one ain't
@anaccountfordrawing Do you know if there's a way to convert .pt to safetensors? I can convert ckpt to safetensors, but idk how to do .pt
Sorry if I missed this but what's the difference between this and easy negative?
I have difficulties in installing these models and do inference on my computer, does anyone has any advices?
@maxwellcarey7777 This is a Textual Inversion, not a model. Click the question mark in the Type box in Details to understand how to use it.
please upload safetensor file
Can one use this with EasyNegative and Boring_E621?
Or is having too many textual inversions a bad thing?
I've been doing it for a while now and it works well.
It can make it harder to do weird inpaintings. For example, if you need to remove something from the background, I usually don't use any negatives, or it will try to draw something where there's supposed to be a blank wall.
The more "negatives" made by who-knows-who, the more you'll get generations you don't understand. Don't use any of those so-called negatives. Trust your prompting skills. And make your own.
Hi, I use 'Artroom AI' as the SD interface. I don't have an Embeddings folder to put this file in. What do I do?
Textual Inversions
Download the textual inversion
Place the textual inversion inside the embeddings directory of your AUTOMATIC1111 Web UI instance
Use the trained keyword in a prompt (listed on the textual inversion's page)
Make awesome images!
Where can I find the textual inversion page?
Hey how exactly do I use this?
Put them in the "Embeddings" folder and activate them by using the filename in your prompt.
Example: I have the Photohelper.pt file in my Embeddings folder. My prompt might be something like "A photo of a samurai, PhotoHelper."
Hope this helps.
This is a really good embedding, but could you also make a safetensor variant?
Are there any other safetensors embeddings besides EasyNegative? That is the only one I have been able to locate with the st filetype. If you know of any others, please let me know, or give me a clue so I can find them :) Thanks
How are these trained? Do you use images, tags, both? Would you be willing to share your data set, or a portion of it? I want to train something similar for a specific character Lora I am working on
The idea of using an alternative image generation system (deepdream) to create negative embeddings is extremely funny to me. It feels like drama, even though it isn't.
DeepDream is extremely primitive by now, but you can't argue with results. If it works, it works.
I'm using Deforum via a Google Colab notebook, and I don't know if it's possible to use textual inversions with that. Can anyone tell me if and how it's possible? BTW I'm using this method because I have an AMD GPU.
Just save everything on Google Drive, then find the embeddings folder and paste it there.
It's available on Stable Horde as well and does everything for you.
what folder do you put it in ?
A folder named 'embeddings'.
@MaoYouShiJie thanks.
hi, i use 'Artroom AI' as the SD's interface, i dont have a Embeddings folder, what do i do?
@jeniliadcruz514238 make a folder
With an RTX 3070, I got this message after adding ng_deepnegative_v1_75t to embeddings in Stable Diffusion:
RuntimeError: The size of tensor a (137) must match the size of tensor b (77) at non-singleton dimension 1
For ng_deepnegative_v1_32t, I got this message when used in the negative prompt:
RuntimeError: The size of tensor a (94) must match the size of tensor b (77) at non-singleton dimension 1
I notice ng_deepnegative_v1_75t is exactly 75 tokens, as is, for instance, the negative embedding kkw-new-neg-v1.4. If I combine these without a comma, I end up with 150 tokens, but if I add a comma it jumps to 225. Is a comma not required between textual embeddings?
To answer my own question: it appears both end with BREAK, which causes the jump. So my next question was whether a comma is needed after a BREAK, and I found https://www.reddit.com/r/StableDiffusion/comments/14xnfx4/in_summary_stable_diffusion_doesnt_really_care/ which seems to indicate Stable Diffusion doesn't really care about commas. Though I doubt it, since commas do matter in the token count.
@pico0obyte442 Thanks for posting this follow-up! That actually answered my curiosity about why this always tends to push the token count to a multiple of 75. In my limited experience, commas matter, but not as much as we would like. They help steer images toward keeping a concept together, but if you type "blue shirt, yellow walls" you're definitely going to get some yellow shirts and blue walls. You probably already know all this; just adding some notes for the next person who comes along with questions. I haven't used BREAKs yet. I'm going to have to give them a try now that I understand what they do a little better.
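One plausible reading of the 150 → 225 jump (an assumption consistent with this thread, not verified against the webui source): the counter reports the padded size, i.e. the token count rounded up to a multiple of 75, and BREAK forces a chunk boundary, so one stray comma token after the second BREAK opens a whole third chunk:

```python
import math

def displayed_count(content_tokens: int) -> int:
    """Token count as shown by the counter: rounded up to a multiple of 75."""
    if content_tokens == 0:
        return 0
    return math.ceil(content_tokens / 75) * 75

print(displayed_count(150))   # 150 -> two exactly-full chunks (75 + BREAK + 75)
print(displayed_count(151))   # 225 -> one extra comma token forces a third chunk
```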
The Regional Prompter stable diffusion extension allows limiting colors to regions. For ComfyUI there are similar custom nodes; I use a regional mask, condition on that, and combine those. In ComfyUI the combining is tricky though: there is concat and combine, and I think you need the latter. Some overlap is allowed, but you have to work your way to covering the entire image, or you get gray images. Coordinates for the mask are from the top left.
@pico0obyte442 It does care about commas, both in the obvious quantitative token difference and in the qualitative difference when they (or other means of separating terms, LoRAs, embeddings, and whatever else in your prompts) are or are not used. Not a "scientific" way of determining a difference, sure, but science accepts the dubious ontology of intuition in formulating its notion of the "hypothesis", often steering results in a skewed direction; so I challenge everyone to consider the bigger picture any time additional context is available, as science is rather fundamentally flawed even if it sometimes gets things right (only to discover it was wrong a generation later, except Einstein! They would rather invent invisible matter and energy than consider he may have been wrong... totally not a religion, right!).
The reason here is obvious enough when we come back down from outer space a bit and remember that we are prompting the AI in natural language. As someone once wisely opined, there is a distinct difference between these two descriptions of the behavior of panda bears:
1) Eats shoots and leaves - this indicates that the panda bear eats the shoots and leaves of plants (inference suggests the plant is bamboo).
2) Eats, shoots and leaves - this indicates the panda arrives at a restaurant, eats his dinner, stands up as he fires a gun into the air, and then departs, skipping the bill.
Now, the AI is a bit more versatile than your average grammar Nazi, so it also accepts "Eats | Shoots | and Leaves" or any variation you can dream of, replacing the pipe symbol "|" with any other symbol or other means of indicating a separation between the two concepts that can be readily imagined in natural language. That is because of how we interface with it, by our own design, since the thing is human technology after all.
@ganapatisaraswati803 Thanks for the discussion. Although the comma has its functions, I find it rather costly. I seem to get better results omitting commas and placing words in an order that can be interpreted in multiple ways that all contribute to the overall picture. Also, using simple words that count as one token helps, and I think each counts uniquely. And then placing certain negative associations far apart.
This last point, I think, has to do with the CFG scale, which acts as a kind of censoring, in layers. I recently found that this can be circumvented, and now I can create a prompt with high CFG values. It requires several denoising steps and is quite tedious, but I'll post an image as a proof of concept (Edit: not a bad prompt, I decided).
I once got a rancid image I didn't ask for at low CFG, which prompted me to investigate this.
so... the 64T prompt is ng_deepnegative_v1_75t?
It's not showing up for me in auto1111, so... I'm not sure
@Jax1492 checkout this https://civitai.com/models/407448/deepnegativexl
If you use an SDXL model, you need to use an embedding specially trained for SDXL. Embeddings trained for 1.5 are hidden when you use SDXL models.
@lenML Thanks! It is showing up now.
so... what does this do?
I think it removes the bad things you don't want showing up, like when you ask for a front-facing image and the arms are drawn backwards.
@cadman yep. It's a general purpose negative embedding. Add this to your embeddings and put it in the negative prompt. Just gotta play with it. Try some prompts with and without it. It's a tiny addition that can make a big difference.
cool
I have a little question: when I use 75T, it insists on making the characters hold a stick - a broom, a wand, sometimes a duster. Is there a way to prevent that? One day the character ended up dual-wielding brooms xD. I've been trying to add every item as a negative, and even "no holding item", but it didn't work. Any suggestions? Please :3
LOL is that where that was coming from!? I saw that on some of my images but just thought maybe it was something odd with the checkpoint or one of the LORAs I was using. It was pretty easy to steer away from that with a little prompting so I just chalked it up to AI being AI.
@Cymathers Yeah, I was surprised too, ngl. I managed to make it almost disappear using negatives like "brooms, weapons, polearms" xD. What did you use in your prompt to fix it? :o (if it's possible to know, of course :3)
LMAO it's happening to me too! why D:
Love this one. I've cross tested against a lot of other popular negative embeddings and this one is the most consistent and versatile for me.
I use Fooocus, and I think the only way to use an embedding is to put its keyword in the prompt and the file in the proper folder, correct? Do I need to select it from the LoRA menu?
Using this now on Fooocus. The way to do it is to put the negative embeddings into the \Fooocus\models\embeddings folder. You activate the negative embedding by including its keyword in the negative prompt on the 'Setting' menu at the bottom. In this case 'ng_deepnegative_v1_75t'. I keep a set of negative prompts in a google doc to copy and paste in depending on what I'm generating. This one goes in every negative script for me.
I downloaded it a long time ago, but now all my textual inversions are gone, even though the files are still in the embeddings folder, along with some LoRA files. Does anyone know how to fix this?
Try hitting refresh in that section of the webui.
I suspect the latest one is for SDXL, while the others are for 1.5. A1111 appears to use the model you have selected and filters out the embeddings you cannot use with that model.
I don't see it in Textual inversion ! Can someone help me ? I refresh and restart and nothing changes
If you are using a1111/webui, try putting "ng_deepnegative_v1_75t" directly into the negative prompt. If there are no surprises, it will work. There is a bug in webui where sometimes the *.pt model cannot be loaded...
If you are using an SDXL model, try this:
https://civitai.com/models/407448/deepnegativexl?modelVersionId=454217
So when I add key word in negative prompts and press generate, it shows me an error: "RuntimeError: expected scalar type Half but found Float". What did I do wrong?
Does it work with ComfyUI?
Of course, man
@Gnep0 Hello, how do I use it in Comfy? Which folder is it placed in, and how do I access it?
@zdwork662 I'd like to know too
@zdwork662 Put it in \ComfyUI-v1.x(your version)\models\embeddings. Enter "embedding:Deep Negative" in the negative prompt input box when you use it.
Please update it with a Pony version
TY, great work =)
thanks
pony?
pony version:
a need
best for semi-realistic
I'm trying to use https://huggingface.co/spaces/safetensors/convert
to get this embedding and a few others converted to safetensors.
And since I'm a rank amateur, I'm yo-yoing between various levels of frustrated and irate trying to figure it out.
Edit: asking with a cooler head - is there a safetensors version, or better yet, will there be one?
Okay, I uploaded the safetensors version (only 75t)
@lenML Thank you, sincerely, for taking the time to upload this.
Does it work for WAI-NSFW-illustrious?
coooool, new base model? I'll try it soon
No, it does not. It used to, but now it just can't generate any images.
Not working
Apparently marked as an unstable resource on the generator. Anyone know if this'll get fixed?
Is there any problem with using this in Illu 1.1 with 1536x1536? (Sorry im new)
what does it do?
thanks! used it for my nsfw discord bot and the results are great lol!! https://discord.gg/aPjzE2JK
Adding this embedding now causes the onsite generator to red out and not generate images anymore. Anyone know the specifics of what causes this? Or is it just me?
This is so broken. I was able to use "Deep Negative" on Illustrious, like a Furrytoon mix, but now I cannot. Why?
Does this thing work? What does it do? Can it crash ComfyUI? I was considering downloading it for a moment...
Details
Files
ng_deepnegative_v1_64t.pt



