    Deep Negative V1.x - V1 16T
    NSFW

    This embedding teaches the model what is REALLY DISGUSTING🤢🤮

    So please put it in your negative prompt😜

    ⚠This model is not trained for SDXL and may bring undesired results when used in SDXL.

    If you use SDXL, this one is recommended instead 👇

    another deep-negative:

    TOP Q&A

    • How do I use a TI (Textual Inversion) model?

    https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion
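    In the AUTOMATIC1111 webui, a textual-inversion embedding is triggered simply by writing its filename in a prompt; since this is a negative embedding, its trigger word goes in the negative prompt box alongside any ordinary quality tags. A minimal sketch of assembling that string (the helper name and tag list are just illustrative):

```python
def build_negative_prompt(embedding_token, extra_tags):
    """Join an embedding trigger word with ordinary negative tags."""
    return ", ".join([embedding_token] + list(extra_tags))

neg = build_negative_prompt(
    "NG_DeepNegative_V1_16T",
    ["worst quality", "low quality", "watermark"],
)
# neg == "NG_DeepNegative_V1_16T, worst quality, low quality, watermark"
# Paste this into the webui's negative prompt field as-is.
```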

    • What is a negative prompt?

    https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Negative-prompt

    [Special Reminder] If your webui reports the following errors:

    - CUDA: CUDA error: device-side assert triggered

    - Assertion -sizes[i] <= index && index < sizes[i] && "index out of bounds" failed

    - XXX object has no attribute 'text_cond'

    Please try using a model version other than 75T.

    > The reason is that many scripts do not handle overly long negative prompts (more than 75 tokens) properly, so choosing a smaller-token version avoids the problem.
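    For context: CLIP's context window is 77 tokens, of which 75 are usable (the first and last are reserved for BOS/EOS), and the webui works around the limit by splitting longer prompts into 75-token chunks, which some extension scripts don't expect. A simplified sketch of that chunking, assuming an already-tokenized prompt (real CLIP tokenization is BPE-based):

```python
CHUNK_SIZE = 75  # usable tokens per CLIP context window (77 minus BOS/EOS)

def chunk_tokens(token_ids, chunk_size=CHUNK_SIZE):
    """Split a token list into webui-style 75-token chunks.

    A 75T embedding fills one chunk by itself, so adding anything else
    forces a second chunk -- the case many scripts mishandle.
    """
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]

# 75 tokens -> one chunk; 100 tokens -> chunks of 75 and 25.
```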

    [Update:230120] What does it do?

    This embedding has learned what disgusting compositions and color patterns look like, including faulty human anatomy, offensive color schemes, upside-down spatial structures, and more. Placing it in the negative prompt goes a long way toward avoiding these things.

    -

    What do 2T, 4T, 16T, and 32T mean?

    The number of vectors per token in the embedding.
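    Each vector occupies one of the 75 usable token slots in a CLIP chunk, so a 16T embedding uses 16 slots and a 75T embedding fills the whole chunk by itself. A small sketch of that budget arithmetic (the limit assumes the standard 77-token CLIP window minus BOS/EOS):

```python
TOKEN_LIMIT = 75  # usable token slots in one CLIP chunk

def remaining_budget(embedding_vectors, other_tokens=0, limit=TOKEN_LIMIT):
    """Token slots left in the chunk after the embedding and extra tags."""
    return limit - embedding_vectors - other_tokens

# remaining_budget(16) -> 59 slots left for other negative tags
# remaining_budget(75) -> 0: the 75T version fills the chunk alone
```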

    [Update:230120] What is 64T 75T?

    64T: trained for over 30,000 steps on mixed datasets.

    75T: the maximum embedding size, trained for 10,000 steps on a special dataset (generated by many different SD models with special reverse processing).

    Which one should you choose?

    • 75T: The most "easy to use" embedding, trained on an accurate dataset created in a special way, with almost no side effects. It contains enough information to cover various usage scenarios. However, it may struggle to affect some well-trained models, and the change it makes can be subtle rather than drastic.

    • 64T: Works with all models, but has side effects, so some tuning is required to find the best weight. Recommended: [( NG_DeepNegative_V1_64T :0.9) :0.1]

    • 32T: Useful, but can be too strong.

    • 16T: Reduces the chance of drawing bad anatomy, but may draw ugly faces. Suitable for improving architecture.

    • 4T: Reduces the chance of drawing bad anatomy, with a slight effect on light and shadow.

    • 2T: "Easy to use" like 75T, but with only a slight effect.
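    The [( NG_DeepNegative_V1_64T :0.9) :0.1] syntax recommended for 64T combines attention weighting (the 0.9) with prompt editing (the 0.1): the embedding is left out for the first 10% of the sampling steps and then applied at weight 0.9. A sketch of how the switch step works, assuming AUTOMATIC1111's prompt-editing rule (fractions of total steps, truncated to an integer):

```python
def switch_step(when, total_steps):
    """Step at which the bracketed part of [prompt:when] becomes active.

    Values below 1 are fractions of the total steps; values >= 1 are
    absolute step numbers (AUTOMATIC1111 prompt-editing convention).
    """
    return int(when * total_steps) if when < 1 else int(when)

# At 24 sampling steps, [... :0.1] kicks in after int(0.1 * 24) = 2 steps.
```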

    Suggestion

    Because this embedding learns how to create disgusting concepts rather than how to improve picture quality directly, it is best used together with negative prompts such as (worst quality, low quality, logo, text, watermark, username).

    Of course, it is completely fine to use with other similar negative embeddings.

    More examples and tests

    How does it work?

    I tried to make SD learn what is really disgusting using the DeepDream algorithm. The dataset is imagenet-mini (1,000 images chosen randomly from the full dataset).

    DeepDream output is REALLLLLLLLLLLLLLLLLLLLLY disgusting 🤮 and the process of training this model genuinely caused me physical discomfort 😂

    Backup

    https://huggingface.co/lenML/DeepNegative/tree/main

    Description

    Put it in your negative prompts.


    Comments (15)

    opengls · Jan 17, 2023

    Any chance of making one for 2.1? 2.x-based models are the ones that suffer the most from distortions and need negative prompt engineering the most.

    lenML (Author) · Jan 17, 2023 · 3 reactions

    I will, but not now~👌

    I haven't found the right way to train it. Truth be told, the current training is actually aimless... (I'm surprised it works at all🤣)

    The v1 version is just an experimental output; it needs more testing and more research.

    Jahoku · Jan 17, 2023 · 1 reaction

    Testing V1 2T seems to cause small issues with eyes; maybe you could blur or smudge the eyes in the training data so it doesn't create such a strong negative toward normal-ish results.

    minimalist · Jan 17, 2023

    And I thought yesterday that my models were doing miracles with their eyes)) All slanted and crooked, only sidelong looks possible)) Then I remembered I had put in this negative prompt and forgotten about it))

    lenML (Author) · Jan 18, 2023

    @minimalist Which SD model are you using? I am testing it.

    minimalist · Jan 18, 2023

    @FapMagi Elldreth's Lucid Mix

    adry · Jan 17, 2023

    I don't understand the weight combinations in your sample pictures.

    lenML (Author) · Jan 18, 2023

    Are you asking about the meaning of the different coordinates? I did forget to explain that...

    It is prompt editing and attention weighting:

    https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#attentionemphasis

    TL;DR: top-left is the weakest effect, bottom-right is the strongest.

    wewewew · Jan 20, 2023 · 2 reactions

    If you're talking about "[( NG_DeepNegative_V1_64T :0.9) :0.1]", the square brackets signify that this part of the prompt is only added after 10% of the steps are done, in other words it has no effect for the first 2 steps if you have 24 steps or so. If you have 2 ":" in the brackets, then it switches from the first to the second part at the step indicated at the end. It is kind of confusing since it looks exactly like attention weights.

    IDEIYIONI · Feb 9, 2023

    @wewewew Thanks for the response. Would you mind explaining how to achieve such a matrix? I guess one would use the S/R function in the x/y/z plot script, but I can't figure out an easy way to do so.

    wewewew · Feb 9, 2023 · 1 reaction

    @IDEIYIONI I don't know what you mean, this isn't related to prompt matrix or scripts. It's a default feature that triggers normally. If you set the steps at 20 for example, a prompt is sent to the AI each of those 20 times, and say you put "picture of a [car:horse:0.2] in a field" as your prompt, the first 4 steps the AI will receive "picture of a car in a field", and then car will be switched to horse for the rest. You can enable the option to display in-progress images every step, and you might see a vague car switch into a horse, or you might not because it's pretty hard for the AI to modify the content that much unless you set it to change at step 2 or 3. Similarly you could put [car|horse] which switches it every step.
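    The two behaviours described above can be sketched in a few lines (a simplified model, assuming 1-indexed steps and the truncating fraction-of-steps rule):

```python
def edited_prompt(step, total_steps, before, after, when):
    """Resolve [before:after:when] at a given 1-indexed sampling step."""
    boundary = int(when * total_steps) if when < 1 else int(when)
    return before if step <= boundary else after

def alternated_prompt(step, options):
    """Resolve [a|b|...] by cycling through the options every step."""
    return options[(step - 1) % len(options)]

# [car:horse:0.2] at 20 steps: steps 1-4 give "car", steps 5-20 give "horse".
# [car|horse]: step 1 -> "car", step 2 -> "horse", step 3 -> "car", ...
```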

    adobi · Feb 14, 2023

    @wewewew I was just talking about this same confusion with a friend two days ago, because he was using [prompt:n] as a de-emphasis weight and almost convinced me it worked that way before we went over the docs.

    VraethrDalkr · Jan 17, 2023 · 1 reaction

    Trust me! Using NG_DeepNegative_V1_4T as the only (positive) prompt returns cute critters with the aEros model!

    lenML (Author) · Jan 18, 2023

    WTF!!!🐵

    I thought you were joking until I actually tried it just now. While the output is still disgusting, it's actually a whole creature😨

    It looks like the aliens I've seen in my dreams! (this is a joke)

    The aEros model is very, very different from the others.

    ask · Feb 5, 2023 · 2 reactions

    Post one :D