AbyssOrangeMix2 (AOM2) HuggingFace Model Card
Credit for this model goes to the original author(s) and the maintainer on HuggingFace, WarriorMama777. Also, check out their profile on Civitai!
Compatible ControlNets for this model are available at Difference ControlNets.
For less hardcore NSFW, see the NSFW version. For SFW, see the SFW version. For more of an anime style, check out BloodOrangeMix - Hardcore. For more of an illustrated/painted style, see EerieOrangeMix2 - Hardcore.
Additional Request
Please clearly indicate where modifications have been made if this model is used as a basis for any further work (e.g. training, merging/mixing, or extraction), and if it is used for merging/mixing, please state what steps (i.e. the recipe) you took to do so.
See https://huggingface.co/WarriorMama777/OrangeMixs#abyssorangemix2-aom2 for a detailed explanation of the model, recipes, and tips on how to best use it.
Description
This is an inpainting version of the model and is only meant for use with img2img and inpainting. It should NOT be used with txt2img; any attempt to do so will likely fail with an error.
The Orangemix VAE is baked in.
Additional notes: the dark-skinned preview image uses the SD MSE VAE (https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main) and the Obsidian Skin LoCon (LyCORIS), available at https://civitai.com/models/18584/obsidian-skin.
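For those working outside a web UI: a rough diffusers sketch of loading an inpainting checkpoint like this one (untested here; the file names, paths, and prompts below are placeholder assumptions):

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Sketch only: load the inpainting checkpoint from a single .safetensors file.
# diffusers detects the 9-channel inpainting UNet from the file itself.
pipe = StableDiffusionInpaintPipeline.from_single_file(
    "abyssorangemix2_Hard-inpainting.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("input.png").convert("RGB")  # image to modify
mask_image = Image.open("mask.png").convert("L")     # white = repaint, black = keep

result = pipe(
    prompt="absurdres, 1girl, blue hair",
    negative_prompt="worst quality, low quality, lowres, monochrome, greyscale",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
result.save("out.png")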
Comments
Hello! I downloaded the pruned version. Does the VAE file need to be in the models/Stable-diffusion folder or the VAE folder?
Hello @FefaAI ,
The VAE should go into \models\VAE; then you can select it via the UI using Settings -> Stable Diffusion -> SD VAE.
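If you script with the diffusers library instead of the web UI, the equivalent of selecting a VAE is loading it and attaching it to the pipeline. A rough sketch (file names are placeholders, and the .pt VAE may need conversion depending on your diffusers version):

import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Sketch: attach an external VAE (e.g. the orangemix one) to a pipeline,
# overriding whatever VAE came with the checkpoint.
vae = AutoencoderKL.from_single_file("orangemix.vae.pt", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_single_file(
    "AbyssOrangeMix2_hard.safetensors",  # placeholder path
    torch_dtype=torch.float16,
)
pipe.vae = vae
pipe.to("cuda")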
@Havoc Thank you! One more question... to have VAE selection work as "Automatic" in the SD web UI, do I need to rename the VAE to the same name as the checkpoint merge?
Hello @FefaAI ,
That is one way, yes. In the case of the pruned versions here, I have baked the VAE into them, so setting it to Automatic should use the orangemix VAE in the model automatically. If the colors look faded, that would indicate the orangemix VAE likely isn't in use, and you may need to tweak the setting to select that VAE specifically, or use the rename-to-match-the-model approach.
does this use danbooru tags?
Hello @whywhynot ,
Yes, this model leverages danbooru tags.
@Havoc And what happens if I enable a different VAE even though one is baked in?
Hello @whywhynot ,
It will use the VAE you select; it will not use both or combine them.
@Havoc I'm noticing that unless I use a different VAE, the colors look very washed out and bland
@whywhynot the pruned models have the VAE baked in; the unpruned ones do not. It will only use the baked-in VAE if set to Automatic or None (None just doesn't try to find a VAE matching the model's name). I will look at uploading unpruned ones with the VAE embedded if there is a desire for them. If you are facing issues even with the pruned ones, then manually selecting the VAE should work fine as well.
Also, do be aware that the other main VAEs, i.e. SD's MSE or WD 1.4 (or their dozens of renamed instances), tend to greatly oversaturate colors compared to the orangemix VAE. If you prefer that, then you would need to manually use one of those; otherwise, using (monochrome:1.1) and (greyscale) in the negative prompt will improve the colors as well.
@Havoc OK, thanks for the tips
When I use the model, every image I make comes out desaturated
@laughterthoughts did you download a pruned version? Those have the VAE baked in; if you are using the unpruned version, you need to download the VAE and select it manually. In addition, use (monochrome:1.1) and (grayscale) in your negative prompts for better color.
@Havoc i downloaded the 5gb version
@laughterthoughts ah, yeah, that one requires you to manually select the VAE. The pruned versions have the same quality level and include the VAE, so you can leave it on automatic for those.
@Havoc Where can I download the VAE
@laughterthoughts if you click the dropdown next to download latest, it is available there. Here is a direct link - https://civitai.com/api/download/models/5038?type=VAE&format=Other
@Havoc do I just install it by putting it in the VAE folder inside models?
@laughterthoughts place it into \models\VAE and select it in the UI via settings -> stable diffusion -> SD VAE.
Can't seem to remove blushing / red cheeks no matter how extreme I make the negative prompts. Any ideas?
Hey @kurokuro ,
Negating blush seems to work for me; you might need to emphasize it more, such as (blush:1.1) or even (blush:1.2). It won't remove it 100%, as the model is likely overfitted on that trait, but you could easily remove it entirely via post-processing if you really want it all gone. Certain skin tones might also not show it as pronounced. In addition, if using a VAE like WD 1.4 or SD MSE, those will oversaturate everything, making the blush more visible.
@Havoc Hey, thanks for the reply. I've tried up to blush:1.8, plus additional negatives for pink cheeks, red cheeks, blushing, and so on. I'm using the pruned model with the integrated orange VAE here. Can you think of anything else I can try? This ends up being an important attribute for the stuff I'm working on; I super dig this model but it makes it a bit hard to use as a result.
@Havoc As an extra footnote, my daily driver is the bloodorange mix from WarriorMama777's HF card, and I haven't noticed the same issue happening there (blush is easy to adjust).
@kurokuro I'll try digging into it more when I get a chance to see if there is something we are missing that might address it. The models and mix recipes involved are different from BloodOrange, so its behavior being different makes sense.
@Havoc Dope, thank you. Super appreciate it.
@kurokuro apologies about the delay, but WarriorMama777 did make some recommendations on tags to try in the negative prompt to remove blush. These were for AOM3, but I'd imagine they should work here as well.
(blush, embarrassed, nose blush, light blush, full-face blush:1.4),
@Havoc Interesting! Thanks for following up here, I'll give those a shot.
this model is gorgeous, but I'm curious how to get my photos to have higher saturation and less of an oil-paint type of look.
Hello @taiyakiss ,
Saturation
First, make sure you are using the correct VAE, either the orangemix one manually, or by using one of the pruned models which has it baked in (so setting SD VAE to automatic will use it).
Next, I'd recommend negative prompting (monochrome:1.1), (greyscale) to improve colors.
Finally, if that still isn't enough, I recommend using the SD MSE or WD 1.4 VAE, which will saturate the colors (overly so, in my opinion).
SD MSE - https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors
WD 1.4 - https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt
Style
Positive prompting realistic can greatly enhance the realism and 3D appearance of images. photorealistic and 3d are also tags to try. The amount of emphasis depends on the prompt and settings, so experiment with higher or lower amounts, e.g. (realistic:0.8) or (realistic:1.2).
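As an aside, (tag:weight) emphasis is just prompt markup that the UI parses before encoding; diffusers, for instance, does not parse it natively. A toy parser illustrating the syntax (not Automatic1111's actual implementation):

import re

# Toy illustration of A1111-style "(tag:weight)" emphasis syntax.
def parse_emphasis(prompt):
    weights = {tag.strip(): float(w)
               for tag, w in re.findall(r"\(([^():]+):([\d.]+)\)", prompt)}
    # Tags without parentheses get the default weight of 1.0.
    remainder = re.sub(r"\([^()]*\)", "", prompt)
    for tag in remainder.split(","):
        if tag.strip():
            weights.setdefault(tag.strip(), 1.0)
    return weights

print(parse_emphasis("absurdres, (realistic:0.8), 1girl"))
# -> {'realistic': 0.8, 'absurdres': 1.0, '1girl': 1.0}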
Hope this helps!
Please tell me: I downloaded the VAE, but with automatic detection the images come out dark and I must select it manually. Can this problem be solved?
@NiConfucius
These instructions assume you are using Automatic1111; I am unable to guide for other UIs.
1. Download https://civitai.com/api/download/models/5038?type=VAE&format=Other
2. Place into ..\stable-diffusion-webui\models\vae
3. Select in the UI via Settings -> Stable Diffusion -> SD VAE
Hi! I tried to reproduce one of the pictures you made for this model (the blue-haired girl) by copying the generation data, but I ended up with a slightly different result, with a lot more fingers on her hand. Any idea where it might come from?
I feel like the problem comes from the upscaling process.
Hello @Drai ,
So there might be a couple of factors at play here. This assumes you are using the baked-in VAE or the orangemix VAE separately (as fine details can be affected by the VAE), and that you are using the Automatic1111 UI.
1. I am using xformers (and perhaps you are too), which introduces a small amount of non-determinism, i.e. pictures can vary a small amount given the same seed, prompt, and other generation parameters.
2. I am using a setting in the UI that might slightly change results, you can find it here in the UI:
Settings -> Stable Diffusion -> "Enable quantization in K samplers for sharper and cleaner results. This may change existing seeds. Requires restart to apply."
Outside of those, what you can try is using variation seeds to introduce small amounts of change on a given primary seed. In the Automatic1111 UI, this is enabled via the Extra checkbox; then you just allow the variation seed to be random (-1) and set it to have a small amount of strength (0 being no change, 1 being completely changed, so starting at 0.05 or less might be effective).
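For the curious, the idea behind variation seeds is that the starting noise from the base seed gets blended with noise from a second seed by the strength you set. A simplified sketch of the concept (Automatic1111 actually uses spherical interpolation, not the plain lerp shown here):

import torch

# Sketch: blend base-seed noise with variation-seed noise.
# strength 0 = base seed only, strength 1 = variation seed only.
def blended_noise(base_seed, variation_seed, strength, shape=(1, 4, 64, 64)):
    base = torch.randn(shape, generator=torch.Generator().manual_seed(base_seed))
    vari = torch.randn(shape, generator=torch.Generator().manual_seed(variation_seed))
    return (1 - strength) * base + strength * vari

noise = blended_noise(174470057, 12345, strength=0.05)  # a small nudge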
All upscaling was done via hires fix using latent (nearest) or latent (nearest-exact). None of my examples leverage any img2img, inpainting, or post processing.
@Havoc Thanks a lot for answering so fast! I will look into it; it's a gold mine for a beginner like me!
Is the NSFW model better for making "SFW" images than hardcore? Has anyone had experience with this?
Hello @DenisKeni ,
Yes, NSFW would typically be better for SFW, as hardcore will favor more sexual images.
Quality may be better with NSFW for SFW images as well, as hardcore introduces a model which, while better for hardcore and hentai, may not be as good for cases where you do not want that.
Hello, I installed this model alongside the orangemix.vae.pt and tried to replicate the prompts provided in the gallery; however, I am getting mixed results. I'm not sure how to embed photos in comments, but my text output is as follows:
absurdres, 1girl, blue hair, sports bra, fellatio, (penis:0.9), Abyss
Negative prompt: (worst quality:1.2), (low quality:1.2), (lowres:1.1), (monochrome:1.1), (greyscale), multiple views, comic, sketch, animal ears, pointy ears,
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 174470057, Size: 512x512, Model hash: e714ee20aa, Model: abyssorangemix2_Hard
https://ibb.co/ynrnXdZ <-- image link
Is there something I am missing to bring my model up to speed? I only have the orangemix and abyssorangemix2 files in my stable-diffusion folder.
Hello @bententhhousand ,
To replicate, you actually want to click the copy button and paste all of it into the positive prompt in Automatic1111 (replacing any content already there), then click the little blue button with a white arrow that is to the right of the positive prompt input, or under the Generate button.
I have also pasted the full generation parameters below, hope this helps!
absurdres, 1girl, blue hair, sports bra, fellatio, (penis:0.9)
Negative prompt: (worst quality:1.2), (low quality:1.2), (lowres:1.1), (monochrome:1.1), (greyscale), multiple views, comic, sketch, animal ears, pointy ears,
ENSD: 31337, Size: 512x512, Seed: 174470057, Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Clip skip: 2, Model hash: cadf2c6654, Hires upscale: 2, Hires upscaler: Latent (nearest), Denoising strength: 0.6
The upscaling is likely the critical piece as to why the results are so different, but I also did not use Abyss in my positive.
I also generated some interesting pics from this model! Awesome!
I'm not getting quite the right results when trying to reproduce the images with the attached info. Not sure why; any help troubleshooting would be appreciated.
Hey @Evergale ,
The best way to replicate an image, assuming you are using the Automatic1111 UI, is to click "Copy Generation Data" and paste it into the positive prompt (overwriting anything currently there) in the Automatic1111 UI, THEN click the blue button with an arrow to the right of the positive prompt (or under the Generate button); this will automatically map all the generation parameters for you.
@Havoc Is there a way to test for a specific problem in the setup process? As I understand it, if I am using the same model, vae, and generation data, it should produce the exact same image, correct?
Hello, I'm having an absolute blast with this model; I'm truly thankful that you put it together. I just have a quick question:
when I'm doing img2img, the images get washed-out colors, especially at lower denoising strength values, so when I want to make very slight modifications, the colors get completely washed out. This also happens with inpainting. I should note that while the image is processing I can see the colors are vibrant, but when it finishes they get washed out.
any advice would be appreciated (this also happens with AOM3)
Hey @msiaigens ,
So, make sure you are carrying over the negative prompts from txt2img when you do img2img, as far as the quality, resolution, and color related ones go. The other thing to try is the setting in the UI called "Apply color correction to img2img results to match original colors." I'm not sure exactly which menu it is under, but you should be able to find it somewhere under the Settings tab in the UI.
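That setting essentially histogram-matches the img2img output back to the original's colors. A rough sketch of the idea in Python (assuming scikit-image is installed; this is not Automatic1111's exact code):

import numpy as np
from PIL import Image
from skimage.exposure import match_histograms

# Sketch: match the img2img result's color distribution to the source image.
source = np.asarray(Image.open("original.png").convert("RGB"))
result = np.asarray(Image.open("img2img_output.png").convert("RGB"))
corrected = match_histograms(result, source, channel_axis=-1)
Image.fromarray(np.clip(corrected, 0, 255).astype(np.uint8)).save("corrected.png")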
I am having problems reproducing your images, but only the ones that are not 512x512; this is really weird. I used the copy-and-paste and the blue arrow in Automatic1111 as you stated multiple times. Do you have a clue what I should check?
for me it doesn't work even for the 512 images
@Juan_Hernandez @dilectiogames
Make sure you are using the correct Clip skip and Eta noise seed delta, as sometimes those seem like they aren't mapped properly when applying the generation parameters. These examples expect Clip skip (aka "Stop at last layers of CLIP model") set to 2 and Eta noise seed delta (ENSD) set to 31337; this is typical of most anime models. These settings can be found under the Settings tab in the UI. At this time I am unsure exactly where, as I have been using them via the quick settings for so long that I have forgotten; I will reply again once I find where they are. Other than that, make sure you are using the right model, that all of the generation parameters are the same, and that no extensions are interfering and changing anything.
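For reference, Clip skip 2 means the prompt is encoded from the text encoder's second-to-last layer. If you use diffusers, recent versions expose this as a clip_skip argument (ENSD is an Automatic1111-specific seed offset with no direct diffusers equivalent); a sketch with a placeholder checkpoint path:

import torch
from diffusers import StableDiffusionPipeline

# Sketch: Clip skip 2 via diffusers' clip_skip argument (recent versions only).
pipe = StableDiffusionPipeline.from_single_file(
    "AbyssOrangeMix2_hard.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")
image = pipe(
    "absurdres, 1girl",
    num_inference_steps=25,
    guidance_scale=7.0,
    clip_skip=2,  # A1111's "Stop at last layers of CLIP model" = 2
    generator=torch.Generator("cuda").manual_seed(174470057),
).images[0]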
for some reason I find it hard to trigger nsfw actions like (cowgirl). I have the VAE installed too. any ideas?
Hello @evelene02192 ,
The VAE would only affect fine details and color. The key thing is that you need to use booru tags in your prompts; if you don't use the full tag, the model won't understand it correctly, or at all. For cowgirl, for example, you'd want to try including cowgirl position, straddling, sex, vaginal in your positive prompt, and negating other positions by their tag names in the negative if you see conflicting positions occurring. A great (NSFW) reference resource for them: https://danbooru.donmai.us/wiki_pages/tag_groups.
@Havoc thanks a lot for the reply!
The resulting output using 2:3 dimensions is of low quality. It is more likely to result in two people whose arms are fused together without hands.
Hello @CreatorEx ,
The 2:3 (e.g. 512x768) results are not cherry-picked with this model. If you try to generate directly at resolutions exceeding 512x512 by too much, you will get incoherence, distortions, and abnormalities. The solution is to set a lower resolution for the first pass, then use hires fix to upscale the image. Almost all models are trained on data 512x512 or smaller; as such, they cannot effectively handle higher resolutions directly.
I can generate hundreds of images with the following settings, and not see the issues you are facing.
absurdres, 1girl
Negative prompt: (worst quality:1.2), (low quality:1.2), (lowres:1.1), (depth of field, bokeh, blurry:1.1),(motion lines, motion blur:1.1), (greyscale, monochrome:1.0),
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: -1, Size: 512x768, Model hash: e892703c61, Clip skip: 2, ENSD: 31337
It is best to copy generation parameters from the examples for replication (i.e., on the images that have the little info icon, click the copy generation parameters button, paste that into your positive prompt, overwriting anything there, and use the little blue button to the right to automatically map them); simply copying the prompts and then using your own configuration will lead to unexpected results.
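If you're curious what hires fix is doing under the hood, it is a two-pass process: generate at a trained resolution, upscale, then refine with img2img at moderate denoising. A rough diffusers sketch of the same idea (placeholder checkpoint path; A1111's latent upscalers differ from the plain resize used here):

import torch
from diffusers import AutoPipelineForImage2Image, StableDiffusionPipeline

# Sketch: two-pass "hires fix". First pass at a resolution the model handles.
pipe = StableDiffusionPipeline.from_single_file(
    "AbyssOrangeMix2_hard.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")
base = pipe("absurdres, 1girl", width=512, height=768,
            num_inference_steps=25, guidance_scale=7.0).images[0]

# Second pass: upscale, then refine with img2img at ~0.6 denoising strength.
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
upscaled = base.resize((1024, 1536))
final = img2img("absurdres, 1girl", image=upscaled,
                strength=0.6, num_inference_steps=25).images[0]
final.save("hires.png")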
@Havoc This is an example https://we.tl/t-0UXcgcQNuN; part of the reason may be bad_prompt, but I think there are other reasons. I tried reducing the resolution by half and upscaling by 2x, and it is still not ideal.
@CreatorEx I see what you mean; there is a better tag for framing that will capture more of their body, cowboy shot. Reference (NSFW): https://danbooru.donmai.us/wiki_pages/cowboy_shot.
Try a prompt like this
absurdres, cowboy shot, (animal ears:0.1), large breasts
Negative prompt: (worst quality, low quality:1.3), (lowres), (depth of field, bokeh, blurry:1.3),(motion lines, motion blur:1.1), (greyscale, monochrome:1.0)
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2650797226, Size: 512x768, Model hash: cadf2c6654
can one do particular art styles (e.g. the style of a particular anime studio or show)?
Look for a lora! I'm sure you have seen people producing various images with this checkpoint and other checkpoints, right? You can often find example loras by looking at the things people make, when they care enough to share how they generated the image.
How to use a lora: <lora:filename:multiplier>
Make sure you have the <> and do not use any spaces.
For a lora for an anime-style tarot card, you'd put this in your positive prompt area: <lora:animetarotV51:1>
Here's that lora, btw: https://civitai.com/images/322490?modelVersionId=28609&prioritizedUserIds=275939&period=AllTime&sort=Most+Reactions&limit=20 It's really pretty.
You can layer loras, up to a point, to mix styles together as well as poses. You can give a higher weight to one and less to another to choose how much each influences the result. It's pretty great. I'm still learning, but I imagine this is how people make way more complex positions + facial expressions + poses.
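For anyone scripting with diffusers instead of the web UI: the <lora:name:weight> syntax is specific to Automatic1111, and the rough diffusers equivalent is loading the lora file onto the pipeline and setting its scale (a sketch; file names are placeholders):

import torch
from diffusers import StableDiffusionPipeline

# Sketch: <lora:animetarotV51:1> in A1111 roughly corresponds to loading
# the lora file and applying it with scale 1.0.
pipe = StableDiffusionPipeline.from_single_file(
    "AbyssOrangeMix2_hard.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights(".", weight_name="animetarotV51.safetensors")
image = pipe(
    "anime tarot card, 1girl",
    cross_attention_kwargs={"scale": 1.0},  # the :1 multiplier
).images[0]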
how to fix the eyes?
how to fix anything: inpaint.
Hello @zichen , I would need more detail. Using the right VAE can help: OrangeMix / NAI for a more typical anime style, and the Stable Diffusion MSE one for more realistic images. In addition, I do not use or recommend using face restoration at all; it typically mangles faces. Upscaling images with sufficient denoising can also resolve issues with eyes.
where do I search for seeds, and how do I make characters exactly, or at least similar to, what they should be? For example, I want to make Shiroko from Blue Archive, but the results are always different from what I really want, even though they're good
you should find loras for the character you want, right on this site; download them into the lora folder and you will get that character. A harder option would be to train your own lora
you can Google or search YouTube for Stable Diffusion tutorials, like how to make your own cosplayer. Normally, you need a lora module for the character you want.
Just a confirmation: the AOM2-hard pruned here has no baked VAE, since the VAE is a separate download? On HF, they are all baked-VAE:
https://huggingface.co/WarriorMama777/OrangeMixs/tree/main/Models/AbyssOrangeMix2/Pruned
why do many people use SD 1.5 instead of 2.0?
2.0 has already removed the pornographic elements.
SD 2.x isn't really an upgrade for end users of models for image generation; it has some improvements, but nothing significant enough to justify its use without some amazing new models being released. The SD 1.5 ecosystem also has far more content, and more ongoing creation of new content, than SD 2.x does.
1.5 is much better for character models. 2.x is better for landscapes etc.
2.0 cut out a lot of copyrighted content. Therefore, 2.0 draws non-photographic content worse.
Can't use it...
Doesn't work with the upscaler?
can it draw boys?
Example
Gustav Klimt
What's the difference with AOM2 NSFW?
I get this message: NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Did you download VAE?
Did you try running it with suggested flags?
@zhzhzhz889 nah he just got a weak ass gpu
@fadedninna
It was solved for me in the following way:
1- Edit the webui-user.bat file with Notepad++ or Notepad
2- Find the line set COMMANDLINE_ARGS= and add --no-half-vae to it
3- Save
@zhzhzhz889
It was solved for me in the same way described above.
if anyone is struggling with this error, what solved the issue for me was adding these arguments: --lowvram --no-half
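For context, --no-half-vae keeps the VAE in full precision while the rest of the model runs in fp16, which is what stops the NaN decode. If you use diffusers, a rough sketch of the same idea (placeholder checkpoint path; version-sensitive, so treat it as a starting point):

import torch
from diffusers import StableDiffusionPipeline

# Sketch: generate latents in fp16, then decode with the VAE upcast to fp32
# so decoding doesn't produce NaNs (the diffusers analogue of --no-half-vae).
pipe = StableDiffusionPipeline.from_single_file(
    "AbyssOrangeMix2_hard.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

latents = pipe("absurdres, 1girl", output_type="latent").images
vae = pipe.vae.to(torch.float32)
with torch.no_grad():
    decoded = vae.decode(latents.to(torch.float32) / vae.config.scaling_factor).sample
image = pipe.image_processor.postprocess(decoded)[0]
image.save("out.png")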
I appreciate the valiant effort below of showing what nsfw capabilities this model has, while striving to stay within civitai's sfw rules lol
Wow. With the right settings, this is a game changer.
Where is the VAE file? It's no longer downloadable.
Is there a way to reduce lighting effects?
Despite the fact that it was once a really good model and enjoyed well-deserved popularity, it is now outdated; it lacks both flexibility and diversity. However, in some places it still looks good.
This model has great expressiveness. Of all models, it is still number one currently!!! An AOM2hard XL version or Illustrious version, please <<(*_ _)>>
Details
Files
abyssorangemix2_Hard-inpainting.safetensors
abyssorangemix2AOM2_HARD-inpainting.safetensors
AbyssOrangeMix2_hard_pruned_fp16_with_VAE-inpainting.safetensors