CivArchive
    Succupon All in One Anima Workflow T2I - I2I - Detailing - Upscale - v1.0 Anima
    NSFW

    Please take a look at the following article for a full guide on how to use this workflow effectively:

    https://civarchive.com/articles/29870


    Changelog:

    v1.0 – initial release


    Below you will find a quick overview, but I encourage you to take a look at the full guide.

So what makes this workflow special? Specifically, it is how your detailers work.

A common annoyance I have is that if you are generating large batches of images with a generic prompt for your detailers, they tend to “normalize” the area they are detailing. Example: an image of a character with an angry expression gets sent to a face detailer with “face, beautiful face”; over multiple passes, the detailer will slowly morph the face into a neutral or smiling expression.

    To combat this problem, this workflow will feed your face and eye detailer prompts into your body prompt, and all three will feed into your base prompt. Here is a quick visualization of what is happening.

    Essentially, you input your detailer text and wildcards in one central location. Eye detailer text feeds into your face detailer, and your face detailer feeds into your body detailer. All detailers feed into your global prompt.
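The chaining described above can be sketched in plain Python. This is only an illustration of the logic; the actual workflow does this with ComfyUI text-concatenation nodes, and the function and field names here are made up for the example:

```python
# Illustrative sketch of the prompt chaining: each detailer inherits the
# prompts of the detailers below it, and the base prompt receives all of them.

def build_prompts(base: str, body: str, face: str, eyes: str):
    """Return (full_prompt, body_prompt, face_prompt, eye_prompt)."""
    eye_prompt = eyes
    face_prompt = ", ".join(p for p in (face, eyes) if p)
    body_prompt = ", ".join(p for p in (body, face, eyes) if p)
    full_prompt = ", ".join(p for p in (base, body, face, eyes) if p)
    return full_prompt, body_prompt, face_prompt, eye_prompt

full, body_p, face_p, eye_p = build_prompts(
    base="night, candlelight, three adult women on a couch",
    body="large breasts",
    face="evil smile, mischievous, fangs",
    eyes="slit pupils, glowing eyes",
)
# The face detailer now sees the expression text, every detailer sees the
# eye text, and the base prompt sees everything.
```

This mirrors the diagram: information only flows "upward", so a detailer never loses context about the region it is refining.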
    Here is an example base prompt. This is Anima, so you can use natural language in your prompt.


    And now I input prompts for the detailers:


    You will notice I included the characters' emotions in the face detailer. These get fed into the base prompt, but also ensure that the face detailer itself knows about the expressions.

    I also included the names of the characters in the eye detailer. This may seem odd at first, but bear with me. I did this because the eye detailer is fed into both the face and body detailers. This ensures every detailer knows which character it is drawing, all the way down to their eyes. (Refer to the diagram at the top.)

    Note: At the bottom, there is a preview box so you can see what your prompt looks like when it gets sent for initial image generation. This is helpful when you are first starting to use this workflow.

    IMG2IMG

    Enable the IMG2IMG mode

    Load an image and specify a Denoise level. The closer the Denoise is to 1, the more the image will change.
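A rough mental model of why denoise works this way, assuming the common sampler behavior where denoise controls how far into the noise schedule the input image is pushed (the numbers and function are illustrative, not Anima- or workflow-specific):

```python
# Illustrative sketch: in typical img2img sampling, denoise decides how many
# of the total steps are actually run on the loaded image.
def img2img_steps(total_steps: int, denoise: float) -> tuple[int, int]:
    """Return (start_step, steps_run).

    denoise=1.0 re-noises the image completely (behaves like txt2img);
    denoise=0.0 runs no steps and leaves the image untouched.
    """
    steps_run = round(total_steps * denoise)
    start_step = total_steps - steps_run
    return start_step, steps_run

for d in (0.3, 0.6, 1.0):
    print(d, img2img_steps(20, d))
# Higher denoise -> more steps run on the image -> bigger changes.
```

This is why starting around 0.5 or 0.6 is a sensible default: roughly half the schedule is re-run, enough to restyle the image without discarding its composition.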

    The image will follow the same detailer pipeline and logic as the TXT2IMG one. Detailer prompts will be sent to your initial image prompt, and face/eye detailers will be sent to your body prompt. The results are very consistent and give high prompt adherence.

    For this image, I included things like “slit pupils, glowing eyes” in the eye detailer, “evil smile, mischievous, fangs” in the face detailer, and “large breasts” in the body detailer.

    Outside of that, I just described the scene through natural language:
    night, darkness, candlelight, shadows, moon, dim lighting,

    succubus, devil wings, devil tail, horns, succubus outfit, revealing clothes,

    Three adult women are sitting on a couch in a living room with a fireplace together. They all have different eye and hair colors. The girl in the center is taller than the other two and her mouth is closed. The two smaller girls are on either side of the taller girl. The two girls have their mouths open revealing fangs.

    The denoise was set pretty high since I wanted the image to change quite a bit. You should generally start lower and work your way up (around 0.5 or 0.6, perhaps).

    It’s important to realize that Anima is not an “edit” model. You shouldn’t try to edit an image the way you would with Qwen Edit. This is simply IMG2IMG.

    Custom Nodes

    You should be able to install everything needed through the ComfyUI Manager. But just in case, here is a list of the nodes:

    https://github.com/Comfy-Org/Nvidia_RTX_Nodes_ComfyUI
    https://github.com/ltdrdata/ComfyUI-Impact-Pack
    https://github.com/pythongosssss/ComfyUI-Custom-Scripts
    https://github.com/rgthree/rgthree-comfy
    https://github.com/yolain/ComfyUI-Easy-Use
    https://github.com/kijai/ComfyUI-KJNodes
    https://github.com/ssitu/ComfyUI_UltimateSDUpscale
    https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
    https://github.com/sipherxyz/comfyui-art-venture
    https://github.com/ltdrdata/ComfyUI-Impact-Subpack
    https://github.com/ltdrdata/was-node-suite-comfyui
    https://github.com/evanspearman/ComfyMath
    https://github.com/alexopus/ComfyUI-Image-Saver
    https://github.com/Miosp/ComfyUI-FBCNN
    https://github.com/Goshe-nite/comfyui-gps-supplements
    https://github.com/pamparamm/ComfyUI-ppm

    Checkpoints: Use any Anima model you want!

    CLIP: https://huggingface.co/circlestone-labs/Anima/blob/main/split_files/text_encoders/qwen_3_06b_base.safetensors

    VAE: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/vae/qwen_image_vae.safetensors

    Detailer models

    Upscale: https://huggingface.co/Kim2091/2x-AnimeSharpV4/blob/1a9339b5c308ab3990f6233be2c1169a75772878/2x-AnimeSharpV4_Fast_RCAN_PU.safetensors

    Hand: https://huggingface.co/Bingsu/adetailer/blob/main/hand_yolov8s.pt

    Face: https://huggingface.co/datasets/Gourieff/ReActor/blob/main/models/detection/bbox/face_yolov8m.pt

    Eyes: https://huggingface.co/Tenofas/ComfyUI/blob/d79945fb5c16e8aef8a1eb3ba1788d72152c6d96/ultralytics/bbox/Eyeful_v2-Paired.pt

    Body: https://civarchive.com/models/2201851/personfemaledetection

    NSFW Detailer: https://civarchive.com/models/1313556/anime-nsfw-detectionadetailer-all-in-one


    Extra notes for Anima: Take a look at their Huggingface repo. They have plenty of guidance for using their model. https://huggingface.co/circlestone-labs/Anima

    Also, if you want to increase generation speed, feel free to disable detailers. Anima tends to produce better base images than SDXL, Pony, and Illustrious, so there are many times when detailing an image is simply not needed. Perhaps try leaving only the Refine and Ultimate SD Upscale steps enabled, with all other detailers disabled.

    For all models used in this workflow, as well as in-depth examples on regional prompting, check out the guide article:
    https://civarchive.com/articles/29870

    Description

    Initial Anima release

    Workflows
    Anima

    Details

    Downloads
    39
    Platform
    CivitAI
    Platform Status
    Available
    Created
    5/12/2026
    Updated
    5/13/2026
    Deleted
    -

    Files

    succuponAllInOneAnimaWorkflow_v10Anima.zip