CivArchive
    Illustrious (SDXL) Workflow (T2I/I2I) with NSFW detailing - v1.0
    NSFW

    My workflow for SDXL models. It is a bit messy but functional. It has the following high-level features:

    1. Text to Image and Image to Image from multiple sources

    2. I2I can be from random Danbooru posts, an image file, an image folder, a URL, or a video, with optional autotagging.

    3. Wildcard support

    4. Randomized orientation and image ratio

    5. Prompt list to create a series of images from a file

    6. Batch or single image generation

    7. Multiple detailers, with support for detailing background characters, faces, and NSFW areas

    8. Uses a mix of LCM and standard samplers to speed up generation

    Description

    First release

    FAQ

    Comments (4)

    kidfromhell · Apr 5, 2026 · 1 reaction

    where to find the models for NSFW detailer and logo remover?

    supalazy
    Author
    Apr 5, 2026

    Thanks for the question. I added links to models in the suggested resources.

    Dracken1986 · Apr 18, 2026

    Been searching non-stop for an I2I workflow and struggling :D. I am trying yours now (thank you, btw, for putting the time in and creating one).

    I am struggling, though, to follow your instructions. I am trying to use local images on my computer. Do I reload "Input 2 - load image" and then just keep reloading the nodes that are connected to it to make it work? Sorry, I am sure it's pretty straightforward, but I am useless at this ^_^

    supalazy
    Author
    Apr 20, 2026 · 1 reaction

    Thanks for reaching out. I would say that this workflow is more complicated than you need if you are looking for a basic I2I workflow. My instructions also assume a base understanding of Comfy, so my apologies. This is just a dump of a workflow that I have been tweaking, so it is not as organized as some others, but I find it functional with a lot of flexibility.

    At a basic level, any T2I workflow can be converted to an I2I workflow by replacing the empty latent with a Load Image node and lowering the denoise level on the KSampler node. The simplistic way to think about the denoise level is "How much do I want the output to differ from the input image?" At a denoise of 1.00 (100% change), the output will be completely different from the input no matter what you feed it. That is what you want for T2I but not for I2I. If you lower the denoise to, say, 0.9 (90%), there will be drastic changes, but the output will take hints from the input image. If you drop it to, say, 0.5 (50%), there will be some changes, but the base image will be mostly maintained.
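    A rough way to see why lower denoise preserves more of the input: with N sampling steps, a denoise of d effectively runs only the last d * N steps, starting from a partially noised copy of the input latent instead of pure noise. This is a simplified sketch (the function name and structure are illustrative, not ComfyUI's actual internals):

    ```python
    # Sketch of how a KSampler-style denoise setting maps to sampling steps.
    # Illustrative only; real samplers work on noise schedules (sigmas),
    # but the step-count intuition is the same.

    def i2i_steps(total_steps: int, denoise: float) -> range:
        """With denoise d, only the last d * total_steps steps run, so the
        input latent is partially noised and refined rather than replaced."""
        assert 0.0 <= denoise <= 1.0
        start = total_steps - int(total_steps * denoise)
        return range(start, total_steps)

    # denoise = 1.0 -> all 20 steps run: input is ignored (effectively T2I)
    print(len(i2i_steps(20, 1.0)))  # 20
    # denoise = 0.5 -> only the last 10 steps run: input mostly preserved
    print(len(i2i_steps(20, 0.5)))  # 10
    ```

    So at denoise 0.5, half the denoising work is skipped, and what remains can only partially rewrite the input image.
    
    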

    For this workflow specifically, you will need to turn off the Booru Blank Bypass so you don't send a blank latent, and enable the Booru Image Bypass group. You will likely need to bypass some of the other inputs, because they will stop generation without a valid input (URL, video, dir path); the other option is to just give them valid inputs. As you stated, for a local image file, Input 2 is correct: select your image and set the switch to input 2. With the group enabled, it will lower the denoise level to 0.5 based on the constant float setting in the group. You can adjust that as needed using the constant float node, and it will feed into the KSampler node. One more note: make sure you select a fixed aspect ratio and make it a portrait ratio with a width that is reasonable for your setup (832x1216, for example). This dictates the shortest dimension of your first image. (Note: if you input a landscape picture, the workflow will change the orientation automatically [portrait to landscape] and keep the aspect ratio of the input image, scaled based on the shorter dimension.) Hopefully this helps, and good luck.

    Workflows
    Illustrious

    Details

    Downloads
    546
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/23/2026
    Updated
    5/14/2026
    Deleted
    -

    Files

    illustriousSDXLWorkflow_v10.zip

    Mirrors

    HuggingFace (1 mirror)