    AbyssOrangeMix2 - Hardcore - AbyssOrangeMix2_hard-inpainting
    NSFW

    AbyssOrangeMix2 (AOM2) HuggingFace Model Card
    Credit for this model goes to the original author(s) and the maintainer on HuggingFace, WarriorMama777. Also, check out their profile on Civitai!

    Compatible ControlNets available for this model at Difference ControlNets.
    For less hardcore NSFW, see NSFW. For SFW, see SFW. For more of an anime style, check out BloodOrangeMix - Hardcore. For more of an illustrated/painted style, see EerieOrangeMix2 - Hardcore.

    Additional Request

    • Please clearly indicate where modifications have been made if used as a basis for any further work (i.e. training, merging/mixing, or extraction), and if used for merging/mixing, please state the steps (i.e. recipe) you took to do so.


    See https://huggingface.co/WarriorMama777/OrangeMixs#abyssorangemix2-aom2 for a detailed explanation of the model, recipes, and tips on how to best use it.

    Additional notes - Using the SD MSE VAE (https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main) and the Obsidian Skin LoCon (LyCORIS), available at https://civarchive.com/models/18584/obsidian-skin for the dark skinned preview image.

    Description

    This is an inpainting version of the model and is only meant for use with img2img and inpainting. It should NOT be used with txt2img; any attempt to do so will likely fail with an error.

    Orangemix VAE is baked in.

    FAQ

    Comments (94)

    FefaAIart · Feb 7, 2023

    Hello! I downloaded the pruned version. Does the VAE file need to go in the models/Stable-diffusion folder or the VAE folder?

    Havoc
    Author
    Feb 7, 2023 · 1 reaction

    Hello @FefaAI ,

    The VAE should go into \models\VAE, then you can select it via the UI using Settings -> Stable Diffusion -> SD VAE.

    FefaAIart · Feb 7, 2023 · 1 reaction

    @Havoc Thank you! One more question... for the "Automatic" VAE selection in the SD web UI to use it, do I need to rename the VAE to the same name as the checkpoint merge?

    Havoc
    Author
    Feb 7, 2023

    Hello @FefaAI ,

    That is one way, yes. In the case of the pruned versions here, I have baked the VAE into them, so setting it to Automatic should use the orangemix VAE in the model automatically. If the colors look faded, the orangemix VAE likely isn't in use; you may need to select it explicitly in the setting, or use the rename-to-match-the-model approach.

    CoCatgirl · Feb 7, 2023

    does this use danbooru tags?

    Havoc
    Author
    Feb 7, 2023

    Hello @whywhynot ,

    Yes, this model leverages danbooru tags.

    CoCatgirl · Feb 7, 2023

    @Havoc And what happens if I enable a different VAE even though one is baked in?

    Havoc
    Author
    Feb 7, 2023

    Hello @whywhynot ,

    It will use the VAE you select; it will not use both or combine them.

    CoCatgirl · Feb 7, 2023

    @Havoc I'm noticing that unless I use a different VAE, the colors look very washed out and bland

    Havoc
    Author
    Feb 7, 2023

    @whywhynot the pruned models have the VAE baked in; the unpruned ones do not. The baked-in VAE is only used when SD VAE is set to automatic or none (none simply doesn't look for a VAE matching the model's name). I will look at uploading unpruned versions with the VAE embedded if there is a desire for them. If you are facing issues even with the pruned ones, then manually selecting the VAE should work fine as well.

    Also, do be aware that the other main VAEs, i.e. SD's MSE or WD 1.4 (or their dozens of renamed instances), tend to greatly oversaturate colors compared to the orangemix VAE. If you prefer that look, you would need to manually select one of those; otherwise, using (monochrome:1.1) and (greyscale) in the negative prompt will improve the colors as well.

    CoCatgirl · Feb 7, 2023 · 1 reaction

    @Havoc OK, thanks for the tips

    laughterthoughts · Feb 9, 2023 · 1 reaction

    When I use the model, every image I make comes out desaturated

    Havoc
    Author
    Feb 9, 2023

    @laughterthoughts did you download a pruned version? Those have the VAE baked in; if you are using the unpruned version, you need to download the VAE and select it manually. In addition, use (monochrome:1.1) and (greyscale) in your negative prompt for better color.

    laughterthoughts · Feb 9, 2023

    @Havoc I downloaded the 5gb version

    Havoc
    Author
    Feb 9, 2023

    @laughterthoughts ah, yeah, that one requires you to manually select the VAE. The pruned versions have the same quality level and include the VAE, so you can leave it on automatic for those.

    laughterthoughts · Feb 9, 2023

    @Havoc Where can I download the VAE?

    Havoc
    Author
    Feb 9, 2023

    @laughterthoughts if you click the dropdown next to download latest, it is available there. Here is a direct link - https://civitai.com/api/download/models/5038?type=VAE&format=Other

    laughterthoughts · Feb 9, 2023

    @Havoc Do I just install it by putting it in the VAE folder inside models?

    Havoc
    Author
    Feb 9, 2023 · 1 reaction

    @laughterthoughts place it into \models\VAE and select it in the UI via Settings -> Stable Diffusion -> SD VAE.

    kurokuro · Feb 9, 2023 · 1 reaction

    Can't seem to remove blushing / red cheeks no matter how extreme I make the negative prompts. Any ideas?

    Havoc
    Author
    Feb 9, 2023

    Hey @kurokuro ,

    Negating blush seems to work for me; you might need to emphasize it more, such as (blush:1.1) or even (blush:1.2). It won't 100% remove it (likely the model is overfitted on that trait), but you could remove it entirely via post-processing if you really want it all gone. Certain skin tones might also not show it as prominently. In addition, if using a VAE like WD 1.4 or SD MSE, those will oversaturate everything, making the blush more visible.

    kurokuro · Feb 10, 2023

    @Havoc Hey, thanks for the reply. I've tried up to blush:1.8, plus additional negatives for pink cheeks, red cheeks, blushing, and so on. I'm using the pruned model with the integrated orange VAE here. Can you think of anything else I can try? This ends up being an important attribute for the stuff I'm working on; I super dig this model but it makes it a bit hard to use as a result.

    kurokuro · Feb 10, 2023

    @Havoc As an extra footnote, my daily driver is the bloodorange mix from WarriorMama777's HF card, and I haven't noticed the same issue happening there (blush is easy to adjust).

    Havoc
    Author
    Feb 10, 2023 · 1 reaction

    @kurokuro I'll try digging into it more when I get a chance to see if there is something we're missing that might address it. The models and mix recipes involved are different from BloodOrange, so its behavior being different makes sense.

    kurokuro · Feb 10, 2023

    @Havoc Dope, thank you. Super appreciate it.

    Havoc
    Author
    Mar 2, 2023 · 1 reaction

    @kurokuro apologies for the delay, but WarriorMama777 did make some recommendations on tags to try in the negative prompt to remove blush. These were for AOM3, but I'd imagine they should work here as well.

    (blush, embarrassed, nose blush, light blush, full-face blush:1.4),

    kurokuro · Mar 3, 2023

    @Havoc Interesting! Thanks for following up here, I'll give those a shot.

    taiyakiss · Feb 9, 2023 · 2 reactions

    This model is gorgeous, but I'm curious how to get my images to have higher saturation and less of an oil-paint look.

    Havoc
    Author
    Feb 9, 2023

    Hello @taiyakiss ,

    Saturation
    First, make sure you are using the correct VAE, either the orangemix one manually, or by using one of the pruned models which has it baked in (so setting SD VAE to automatic will use it).

    Next, I'd recommend negative prompting (monochrome:1.1), (greyscale) to improve colors.

    Finally, if that still isn't enough, I recommend using the SD MSE or WD 1.4 VAE, which will saturate the colors (in my opinion, overly so).

    SD MSE - https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors

    WD 1.4 - https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt

    Style
    Positive prompting realistic can greatly enhance the realism and 3D appearance of images. photorealistic and 3d are also tags to try. The amount of emphasis depends on the prompt and settings, so experiment with higher or lower amounts, e.g. (realistic:0.8) or (realistic:1.2).
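    As an aside, the (tag:weight) emphasis syntax is just a per-tag weight multiplier. Below is a minimal sketch of how such strings can be parsed; this is an illustration only, not Automatic1111's actual prompt parser (which also handles nesting, [] de-emphasis, and more):

    ```python
    import re

    # Matches "(text:weight)" emphasis segments, e.g. "(realistic:0.8)".
    EMPHASIS = re.compile(r"\(([^():]+):([0-9.]+)\)")

    def parse_emphasis(prompt: str) -> list[tuple[str, float]]:
        """Return (text, weight) pairs; un-parenthesized text gets weight 1.0."""
        out, pos = [], 0
        for m in EMPHASIS.finditer(prompt):
            plain = prompt[pos:m.start()].strip(" ,")
            if plain:
                out.append((plain, 1.0))
            out.append((m.group(1), float(m.group(2))))
            pos = m.end()
        tail = prompt[pos:].strip(" ,")
        if tail:
            out.append((tail, 1.0))
        return out

    print(parse_emphasis("masterpiece, (realistic:0.8), 1girl"))
    # -> [('masterpiece', 1.0), ('realistic', 0.8), ('1girl', 1.0)]
    ```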

    Hope this helps!

    NiConfucius · Feb 10, 2023

    Please tell me: I downloaded the VAE, but automatic selection doesn't pick it up (images come out dark), and I have to select it manually. Can this problem be solved?

    Havoc
    Author
    Feb 10, 2023

    @NiConfucius

    These instructions assume you are using Automatic1111, I am unable to guide for other UIs.
    1. Download https://civitai.com/api/download/models/5038?type=VAE&format=Other
    2. Place into ..\stable-diffusion-webui\models\vae
    3. Select in the UI via Settings -> Stable Diffusion -> SD VAE
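    On Linux/macOS, steps 1 and 2 above can be sketched as follows (the install path is an assumption; adjust WEBUI_DIR to wherever your Automatic1111 copy lives, and on Windows the equivalent folder is ..\stable-diffusion-webui\models\VAE):

    ```shell
    # Sketch of the VAE install steps above; paths are assumptions.
    WEBUI_DIR="${WEBUI_DIR:-$HOME/stable-diffusion-webui}"
    VAE_DIR="$WEBUI_DIR/models/VAE"
    mkdir -p "$VAE_DIR"   # ensure the VAE folder exists
    # Download the VAE into that folder (URL from step 1), e.g.:
    #   curl -L -o "$VAE_DIR/orangemix.vae.pt" \
    #     "https://civitai.com/api/download/models/5038?type=VAE&format=Other"
    # Step 3 is done in the UI: Settings -> Stable Diffusion -> SD VAE.
    ```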

    169135 · Feb 10, 2023 · 1 reaction

    Hi! I tried to reproduce one of the pictures you made for this model (the blue-haired girl) by copying the generation data, but I ended up with a slightly different result, with a lot more fingers on her hand. Any idea where it might come from?

    169135 · Feb 10, 2023

    I feel like the problem comes from the upscaling process.

    Havoc
    Author
    Feb 10, 2023 · 3 reactions

    Hello @Drai ,

    So there might be a couple of factors at play here. This assumes you are using the baked-in VAE or the orangemix VAE separately (as fine details can be affected by the VAE), and that you are using the Automatic1111 UI.

    1. I am using xformers (and perhaps you are too), which introduces a small amount of non-determinism, i.e. pictures can vary a small amount given the same seed, prompt, and other generation parameters.

    2. I am using a setting in the UI that might slightly change results, you can find it here in the UI:
    Settings -> Stable Diffusion -> "Enable quantization in K samplers for sharper and cleaner results. This may change existing seeds. Requires restart to apply."

    Outside of those, what you can try is using variation seeds to introduce small amounts of change on a given primary seed. In the Automatic1111 UI, this is enabled via the Extra checkbox, then you just allow the variation seed to be random (-1) and set it to have a small amount of strength (0 being no change, 1 being completely changed, so starting at 0.05 or less might be effective).
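    Conceptually, variation strength blends the noise generated from the variation seed into the base seed's noise, commonly via spherical interpolation (slerp): 0.0 keeps the base noise, 1.0 uses the variation noise entirely. A toy sketch in plain Python for illustration; this is not Automatic1111's implementation:

    ```python
    import math

    def slerp(t: float, a: list[float], b: list[float]) -> list[float]:
        """Spherical interpolation between vectors a and b at strength t in [0, 1]."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        omega = math.acos(max(-1.0, min(1.0, dot / (na * nb))))
        if omega < 1e-8:  # vectors nearly parallel: fall back to linear blend
            return [(1 - t) * x + t * y for x, y in zip(a, b)]
        so = math.sin(omega)
        return [
            math.sin((1 - t) * omega) / so * x + math.sin(t * omega) / so * y
            for x, y in zip(a, b)
        ]

    base = [1.0, 0.0]      # noise from the primary seed (toy 2-D stand-in)
    variant = [0.0, 1.0]   # noise from the variation seed
    print(slerp(0.05, base, variant))  # small strength: stays close to base
    ```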

    All upscaling was done via hires fix using latent (nearest) or latent (nearest-exact). None of my examples leverage any img2img, inpainting, or post processing.

    169135 · Feb 10, 2023 · 1 reaction

    @Havoc Thanks a lot for answering so fast! I will look into it; it's a gold mine for a beginner like me!

    DenisXxx · Feb 11, 2023

    Is the NSFW model better for making "SFW" images than Hardcore? Does anyone have experience with this?

    Havoc
    Author
    Feb 11, 2023

    Hello @DenisKeni ,

    Yes, NSFW would typically be better for SFW, as hardcore will favor more sexual images.

    Quality may be better with NSFW for SFW images as well, as hardcore introduces a model which, while better for hardcore and hentai, may not be as good for cases where you do not want that.

    bententhhousand · Feb 14, 2023

    Hello, I installed this model alongside the orangemix.vae.pt and tried to replicate the prompts provided in the gallery, however I am getting mixed results. I'm not sure how to embed photos in comments, but my text output is as follows:

    absurdres, 1girl, blue hair, sports bra, fellatio, (penis:0.9), Abyss

    Negative prompt: (worst quality:1.2), (low quality:1.2), (lowres:1.1), (monochrome:1.1), (greyscale), multiple views, comic, sketch, animal ears, pointy ears,

    Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 174470057, Size: 512x512, Model hash: e714ee20aa, Model: abyssorangemix2_Hard

    https://ibb.co/ynrnXdZ <-- image link

    Is there something I am missing to bring my model up to speed? I only have the orangemix and abyssorangemix2 files in my stable-diffusion folder.

    Havoc
    Author
    Feb 14, 2023

    Hello @bententhhousand ,

    To replicate, you actually want to click the copy button and paste all of it into the positive prompt in Automatic1111 (replacing any existing content), then click the little blue button with a white arrow that sits to the right of the positive prompt input, or under the generate button.

    I have also pasted the full generation parameters below, hope this helps!

    absurdres, 1girl, blue hair, sports bra, fellatio, (penis:0.9)

    Negative prompt: (worst quality:1.2), (low quality:1.2), (lowres:1.1), (monochrome:1.1), (greyscale), multiple views, comic, sketch, animal ears, pointy ears,

    ENSD: 31337, Size: 512x512, Seed: 174470057, Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Clip skip: 2, Model hash: cadf2c6654, Hires upscale: 2, Hires upscaler: Latent (nearest), Denoising strength: 0.6

    The upscaling is likely the critical piece as to why the results are so different, but I also did not use Abyss in my positive.

    taitaiki · Feb 18, 2023 · 2 reactions

    I also generated some interesting pics from this model! Awesome!

    Evergale · Feb 19, 2023 · 1 reaction

    I'm not getting quite the right results when trying to reproduce the images with the attached info. Not sure why; any help troubleshooting would be appreciated.

    Havoc
    Author
    Feb 19, 2023

    Hey @Evergale ,

    The best way to replicate an image, assuming you are using the Automatic1111 UI, is to click "Copy Generation Data" and paste it into the positive prompt (overwriting anything currently there), THEN click the blue button with an arrow to the right of the positive prompt (or under the generate button); this will automatically map all the generation parameters for you.

    Evergale · Feb 23, 2023

    @Havoc Is there a way to test for a specific problem in the setup process? As I understand it, if I am using the same model, VAE, and generation data, it should produce the exact same image, correct?

    msiaigens · Feb 20, 2023 · 2 reactions

    Hello, I'm having an absolute blast with this model; I'm truly thankful that you put it together. Just a quick question:

    When I'm doing img2img, the images get washed-out colors, especially at lower denoising strength values, so when I want to make very slight modifications the colors get completely washed out. It also happens with inpainting. I should note that while the image is processing I can see the colors are vibrant, but when it finishes they get washed out.

    Any advice would be appreciated (it also happens with AOM3).

    Havoc
    Author
    Feb 27, 2023

    Hey @msiaigens ,

    So, make sure you carry over the quality-, resolution-, and color-related negative prompts from txt2img when you do img2img. The other thing to try is the setting in the UI called "Apply color correction to img2img results to match original colors." I'm not sure exactly which menu it is under, but you should be able to find it somewhere under the settings tab in the UI.

    DaddyJuan · Feb 20, 2023 · 1 reaction

    I am having problems reproducing your images, but only the ones that are not 512x512, which is really weird. I used copy and paste and the blue arrow in Automatic1111 as you stated multiple times. Do you have a clue what I should check?

    dilectiogames · Feb 20, 2023

    For me it doesn't work even for the 512 images

    Havoc
    Author
    Feb 20, 2023

    @Juan_Hernandez @dilectiogames

    Make sure you are using the correct Clip Skip and Eta noise seed delta values, as sometimes those don't seem to be mapped properly when pasting generation parameters. These examples expect Clip Skip (aka "Stop at last layers of CLIP model") set to 2 and Eta noise seed delta (ENSD) set to 31337; this is typical of most anime models. Both settings can be found under the settings tab in the UI. At this time I am unsure of exactly where, as I have used them via the quick settings bar for so long that I have forgotten; I will reply again once I find them. Other than that, make sure you are using the right model, that all of the generation parameters are the same, and that no extensions are interfering and changing anything.
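    If you would rather set these outside the UI, they correspond to keys in Automatic1111's config.json; the key names below are my recollection of the webui's option names, so treat them as assumptions and verify against your own config.json before editing:

    ```json
    {
      "CLIP_stop_at_last_layers": 2,
      "eta_noise_seed_delta": 31337
    }
    ```

    Keep a backup of config.json and restart the webui after editing.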

    evelene02192 · Feb 23, 2023 · 6 reactions

    For some reason I find it hard to trigger NSFW actions like (cowgirl). I have the VAE installed too. Any ideas?

    Havoc
    Author
    Feb 23, 2023 · 5 reactions

    Hello @evelene02192 ,

    The VAE would only affect fine details and color. The key thing is that you need to use booru tags in your prompts; if you don't use the full tag, the model won't understand it correctly, or at all. For cowgirl, for example, you'd want to try including cowgirl position, straddling, sex, vaginal in your positive prompt, and negating other positions by their tag names in the negative if you see conflicting positions occurring. A great (NSFW) reference resource for them: https://danbooru.donmai.us/wiki_pages/tag_groups.

    evelene02192 · Feb 23, 2023 · 1 reaction

    @Havoc Thanks a lot for the reply!

    CreatorEx · Feb 23, 2023 · 1 reaction

    The resulting output using 2:3 dimensions is of low quality. It is more likely to produce two people with their arms fused together and no hands.

    Havoc
    Author
    Feb 23, 2023

    Hello @CreatorEx ,

    The 2:3 (e.g. 512x768) results are not cherry-picked with this model. If you try to generate directly at resolutions much beyond 512x512, you will get incoherence, distortions, and abnormalities. The solution is to set a lower resolution for the first pass, then use hires fix to upscale the image. Almost all models are trained on data 512x512 or smaller, so they cannot effectively handle higher resolutions directly.

    I can generate hundreds of images with the following settings, and not see the issues you are facing.

    absurdres, 1girl
    Negative prompt: (worst quality:1.2), (low quality:1.2), (lowres:1.1), (depth of field, bokeh, blurry:1.1),(motion lines, motion blur:1.1), (greyscale, monochrome:1.0),
    Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: -1, Size: 512x768, Model hash: e892703c61, Clip skip: 2, ENSD: 31337


    It is best to copy generation parameters from examples for replication (i.e. open an image that has the little info icon, click the copy generation parameters button, paste that into your positive prompt, overwriting anything there, and use the little blue button to the right to automatically map them). Simply copying the prompts and then using your own configuration will lead to unexpected results.

    CreatorEx · Feb 24, 2023

    @Havoc This is an example: https://we.tl/t-0UXcgcQNuN. Part of the reason may be bad_prompt, but I think there are other reasons. I tried reducing the resolution by half and upscaling by 2x, and it's still not ideal.

    Havoc
    Author
    Feb 25, 2023

    @CreatorEx I see what you mean. There is a better tag for framing that will capture more of the body: cowboy shot, reference (NSFW) https://danbooru.donmai.us/wiki_pages/cowboy_shot.

    Try a prompt like this:

    absurdres, cowboy shot, (animal ears:0.1), large breasts
    Negative prompt: (worst quality, low quality:1.3), (lowres), (depth of field, bokeh, blurry:1.3),(motion lines, motion blur:1.1), (greyscale, monochrome:1.0)
    Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2650797226, Size: 512x768, Model hash: cadf2c6654

    merpdaderp970 · Feb 27, 2023

    Can one do particular art styles (e.g. the style of a particular anime studio or show)?

    1047301 · Apr 8, 2023

    Look for a LoRA! I'm sure you have seen people producing various images with this checkpoint and other checkpoints, right? You can often find example LoRAs by looking at the things people make, when they care enough to share how they generated the image.

    How to use a lora: <lora:filename:multiplier>

    Make sure you have the <> and do not use any spaces.

    For a LoRA for an anime-style tarot card, you'd put this in your positive prompt area: <lora:animetarotV51:1>

    Here's that LoRA, by the way: https://civitai.com/images/322490?modelVersionId=28609&prioritizedUserIds=275939&period=AllTime&sort=Most+Reactions&limit=20. It's really pretty.

    You can layer LoRAs, up to a point, to mix styles together as well as poses. You can give a higher weight to one and less to another to control how much each influences the result. It's pretty great. I'm still learning, but I imagine this is how people make much more complex positions + facial expressions + poses.

    zichen · Mar 8, 2023 · 3 reactions

    How do I fix the eyes?

    SD_AI_2025 · Mar 24, 2023

    How to fix anything: inpaint.

    Havoc
    Author
    Mar 24, 2023 · 2 reactions

    Hello @zichen , I would need more detail. Using the right VAE can help: OrangeMix / NAI for a more typical anime style, and the Stable Diffusion MSE one for more realistic images. In addition, I do not use or recommend using face restoration at all; it typically mangles faces. Upscaling images with sufficient denoising can also resolve issues with eyes.

    animescientists640 · Mar 9, 2023

    Where do I search for seeds, and how do I make characters exact, or at least similar to what they should be? For example, I want to make Shiroko from Blue Archive, but the results are always different from what I really want, even though they're good

    felipe781 · Mar 30, 2023

    You should find LoRAs for the character you want, right on this site, and download them into the lora folder; then you will get that character. A harder option would be to train your own LoRA.

    armorwei718 · Mar 30, 2023

    You can Google, or search YouTube, for Stable Diffusion tutorials, e.g. on how to make your own cosplayer. Normally, you need a LoRA module for the character you want.

    ritcher1 · Mar 10, 2023

    Just a confirmation: the AOM2-hard pruned here has no baked VAE, since the VAE is a separate download? On HF, they are all baked-VAE:

    https://huggingface.co/WarriorMama777/OrangeMixs/tree/main/Models/AbyssOrangeMix2/Pruned

    anonyvpn001962 · Mar 14, 2023

    Why do many people use SD 1.5 instead of 2.0?

    zhoufeng123 · Mar 14, 2023

    2.0 has already removed the pornographic content.

    Havoc
    Author
    Mar 14, 2023

    SD 2.x isn't really an upgrade for end users of models for image generation; it has some improvements, but nothing significant enough to justify its use without some amazing new models being released. The SD 1.5 ecosystem also has far more content, and more ongoing creation of new content, than SD 2.x does.

    Ananda · Apr 2, 2023

    1.5 is much better for character models. 2.x is better for landscapes etc.

    79468 · Apr 15, 2023

    2.0 cut out a lot of copyrighted content. As a result, 2.0 draws non-photographic content worse.

    shaozhi · Mar 28, 2023

    The hands are still a big problem... I don't know how to improve them.

    GoldenMiocola · Apr 25, 2023

    Use the Depth extension, then touch it up yourself afterwards with inpainting.

    hnyl4235966 · Mar 30, 2023

    I can't use it...

    hnyl4235966 · Mar 30, 2023

    Is it that my graphics card can't handle it?

    mybatis · Apr 25, 2023

    What GPU do you use?

    PolkovnikuNePishut · Apr 12, 2023

    Doesn't work with the upscaler?

    esquci · Apr 17, 2023 · 5 reactions

    can it draw boys?

    EvilFear · May 6, 2023

    good luck,

    letmeinlol · May 11, 2023 · 10 reactions

    jesus christ please tell me you mean men

    Huskar · May 17, 2023

    It can, if you try enough... but beware of inpainting! 😁

    00812138 · Apr 18, 2023

    Example

    q0966067766888 · May 15, 2023 · 7 reactions

    Gustav Klimt

    349840381515 · May 19, 2023 · 10 reactions

    What's the difference from amo2nsfw?

    gocuzero · Jun 8, 2023 · 4 reactions

    I get this message: NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

    zhzhzhz889 · Jun 18, 2023

    Did you download VAE?
    Did you try running it with suggested flags?

    fadedninna · Jul 9, 2023

    @zhzhzhz889 nah he just got a weak ass gpu

    gocuzero · Jul 9, 2023

    @fadedninna 

    It was solved for me in the following way:
    1. Open the webui-user.bat file with Notepad++ or Notepad
    2. Find the line starting with set COMMANDLINE_ARGS= and add --no-half-vae to it
    3. Save

    gocuzero · Jul 9, 2023 · 2 reactions

    @zhzhzhz889 

    It was solved for me in the following way:
    1. Open the webui-user.bat file with Notepad++ or Notepad
    2. Find the line starting with set COMMANDLINE_ARGS= and add --no-half-vae to it
    3. Save
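    For reference, a sketch of what webui-user.bat looks like with the flag applied; note there are no quotes around the arguments. The surrounding lines follow the stock Automatic1111 template, so verify against your own file:

    ```bat
    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=--no-half-vae

    call webui.bat
    ```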

    Licas · Jan 9, 2024

    If anyone is struggling with this error, what solved the issue for me was adding these arguments: --lowvram --no-half

    Shadow_of_Kai_Gaines · Aug 18, 2023

    I appreciate the valiant effort below of showing what nsfw capabilities this model has, while striving to stay within civitai's sfw rules lol

    Aphaitas · Aug 28, 2023 · 2 reactions

    Wow. With the right settings, this is a game changer.

    possom2009 · Oct 3, 2023 · 18 reactions

    Where is the VAE file? It's no longer downloadable.

    shirakotteku · Nov 6, 2023

    Is there a way to reduce lighting effects?

    Nourdal · Apr 24, 2024 · 11 reactions

    Despite the fact that it was once a really good model and enjoyed well-deserved popularity, it is now outdated; it lacks both flexibility and diversity. However, in places it still looks good.

    yuri3987 · Feb 8, 2026 · 1 reaction

    This model has great expressiveness. It's still number one among all current models!!! An AOM2hard XL version or Illustrious version, please <<(*_ _)>>