
    ⚡ Illustrious Workflow as of version 18

    🛠️ Purpose & Design Philosophy

    This workflow is a high-fidelity environment built for Illustrious XL. It prioritizes stability and professional texture over generation speed. It follows an "all-in-one" philosophy: configure your prompts, hit queue, and let the workflow handle the multi-stage refinement from start to finish.

    • Not for Speed: This is a heavy-duty refinement tool. If you want 2-second previews, use a basic SDXL workflow.

    • Personal Use: Built for my specific production needs. It is shared as-is for those who want a "set-and-forget" pipeline for Illustrious.

    • All-in-One Logic: The workflow handles generation, detailing, and upscaling in one continuous pass.


    🚀 Key Features & 2026 Logic

    • Global Model Patching (RescaleCFG): Includes a pre-configured RescaleCFG patch (Multiplier: 0.7) applied globally. This acts as "HDR Insurance," preventing the "deep-fried" or over-saturated look common in high-CFG Illustrious runs.
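For reference, the core RescaleCFG idea (from Lin et al., "Common Diffusion Noise Schedules and Sample Steps Are Flawed") can be sketched roughly as below. This is an illustrative NumPy simplification, not the actual node code; the function name and array shapes are made up for the example:

```python
import numpy as np

def rescale_cfg(cond, uncond, cfg_scale=7.0, multiplier=0.7):
    """Sketch of RescaleCFG: blend the raw CFG prediction with a
    variance-rescaled version so high CFG does not blow out contrast."""
    # Standard classifier-free guidance
    x_cfg = uncond + cfg_scale * (cond - uncond)
    # Rescale so the guided prediction keeps the conditional prediction's std
    x_rescaled = x_cfg * (cond.std() / x_cfg.std())
    # Interpolate; multiplier=0.7 matches the workflow's global patch
    return multiplier * x_rescaled + (1.0 - multiplier) * x_cfg
```

At multiplier 1.0 the output std matches the conditional prediction exactly; at 0.0 you get plain CFG. 0.7 is a middle ground that tames saturation without flattening the image.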

    • Detail Daemon Sampler: Integrated to enhance structural depth. In this version, it is tuned to start at 0.4 to preserve the core Illustrious character proportions while sharpening hair and eye details.
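As an illustration of what that 0.4 start means: early steps, which decide composition and proportions, are left untouched, and the sigma tweak only kicks in after 40% of sampling. The sketch below is a deliberate simplification of the Detail Daemon concept, not the node's actual implementation; the names and the flat `amount` are assumptions:

```python
def detail_daemon_multiplier(step, total_steps, start=0.4, amount=0.1):
    """Return a sigma multiplier for a given sampling step.

    Steps before `start` (as a fraction of total steps) pass through
    unchanged, preserving the base composition; later steps get a
    slightly reduced sigma, which tends to sharpen fine detail."""
    if step / total_steps < start:
        return 1.0
    return 1.0 - amount
```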

    • Hybrid Upscale Strategy:

      • Group Bypass Switch: Easily toggle between a Pixel-only (Lanczos) path for flat anime styles and an Upscale-Model path for 2.5D/highly detailed renders.

      • Ultimate SD Upscale: Re-draws the upscaled canvas at a 0.35 denoise to lock in fine textures.
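The pixel-only branch is essentially a plain Lanczos resample. A minimal Pillow sketch of that path (illustrative only, not the workflow node; the function name is made up):

```python
from PIL import Image

def pixel_upscale(img: Image.Image, scale: float = 2.0) -> Image.Image:
    """Pixel-only path: Lanczos resampling, good for flat anime styles.
    The Upscale-Model path would instead run an ESRGAN-style model here."""
    w, h = img.size
    return img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
```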

    • Power LoRA Loader: Manage multiple Illustrious-specific LoRAs without messy wiring.

    • Triple Detailer Groups: 3-stage targeted refinement for faces, hands, and clothing using standard detection models.

    • CivitAI Meta-Sync: Images are saved with full metadata (Model, LoRAs, Sampler info) for automatic site parsing.


    ⚠️ Disclaimer & Compatibility

    • Install at Your Own Risk: Custom nodes can break your environment. I am not responsible for troubleshooting your specific installation.

    • ComfyUI Portable: Built and tested on the Portable version. Desktop app users may face additional hurdles.

    • The "Your Version" Factor: Your node versions and environment are 99.9% likely to differ from mine.

    • Nodes 2.0: I do not recommend using Nodes 2.0. It creates unpredictable UI behavior; I will not provide support for issues involving this feature.


    🤝 Support & Boundaries

    • No DMs: DMs are disabled due to repeat spam. Please check the Discussions tab below; most questions have already been answered.

    • Modifications: You are free to hack this workflow apart. However, you are responsible for fixing it if it breaks.

    • Custom Requests: I do not make private workflows. If you need a custom solution, post a Bounty on CivitAI. There are many talented creators ready to help you for a fee.

    Description

    v17a-Experimental

    Added the new SMC-CFG node based on CFG-Ctrl: Control-Based Classifier-Free Diffusion Guidance.

    • This is an advanced ComfyUI user BETA node from the sd-perturbed-attention custom nodes

    • If you are smarter than I am, you can read the short research article here, or, if you're like me, hit up Gemini, ChatGPT, etc. for help on how to use the node.

    • My testing with it found that you have to use a lower-than-normal CFG, but the results can be quite nice with little to no hit to generation speed.

    Replaced the Batch Resize w/Lanczos node with a different version that uses upscale models.

    • Now you can use your preferred upscale model before the image is sent to USDU.

    Added an extra CLIPSeg node to the noise injection group before USDU.

    • These nodes seem to recognize only a single prompt each (e.g., "hand, face, leg" will result in only one of those being detected).

    • They are daisy-chained together, so if you plan on inverting the mask, toggle invert on the 2nd node and it should invert both (theoretically). Otherwise just add a node to invert the combined mask and connect the output to the mask input on the ImageCompositeMasked node.
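The mask plumbing described above can be sketched in NumPy terms. This is an illustration of the logic (union the two single-prompt masks, then optionally invert the combined result before it reaches ImageCompositeMasked), not the actual node inputs; the names are made up:

```python
import numpy as np

def combine_masks(mask_a, mask_b, invert=False):
    """Union two binary CLIPSeg-style masks (values in [0, 1]) and
    optionally invert the combined mask for compositing."""
    combined = np.clip(mask_a + mask_b, 0.0, 1.0)
    if invert:
        # Equivalent to inverting after the chain, rather than per node
        combined = 1.0 - combined
    return combined
```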

    Removed the following:

    • Concat Conditioning

    • Normalized Attention Guidance

    • Detail Daemon from USDU

    • Multiply Sigmas (stateless)

    Custom Nodes used in the version are from:

    comfyui_controlnet_aux

    ComfyUI Impact Pack

    ComfyUI-Custom-Scripts

    rgthree-comfy

    ComfyUI-KJNodes

    ComfyUI_UltimateSDUpscale

    ComfyUI Inspire Pack

    ComfyUI Impact Subpack

    sd-perturbed-attention

    WAS Node Suite (Revised)

    ComfyMath

    ComfyUI Image Saver

    JPS Custom Nodes for ComfyUI

    ComfyUI-JNodes

    WhiteRabbit

    GPS' Supplements for ComfyUI

    I might have missed some, but this should be most of them!

    As previously mentioned: 

    The Laplace Scheduler is not recommended for people who are new to ComfyUI and/or not comfortable figuring out settings on their own.

    • If you do want to try it out, I suggest not messing with the settings on the LaplaceScheduler node other than "steps".

    • It's very sensitive to what you prompt in both positive and negative prompts, so prompt with caution (or go wild).

    The Basic Scheduler node (the default) works with the same settings as a normal KSampler, but some people have run into random compatibility issues with it before. It's a Comfy Core node, so the conflict has to be caused by something on their installs.

    • On the Basic Scheduler node: you control the Scheduler, Steps, and Denoise.

    SamplerCustomAdvanced is now the only KSampler used for the initial image in this workflow.

    • Confirmed it works even with RES4LYF installed

    • This node is not affected by the Global Sampler node, so the sampler and scheduler have to be set manually here.

    • The CFG setting is also separate here since I tend to use sa_solver with Laplace but other combos for the rest of the workflow.

    The Global Seed and Global Sampler nodes will override any of the settings related to them across the workflow. 

    • Using a sampler or scheduler not compatible with any of the nodes will result in an error.

    • You can bypass or delete these two nodes if you want to do these settings manually on each node.

    • Does not affect the sampler or scheduler settings connected to SamplerCustomAdvanced (if it is in the workflow).

    • After the most recent update, sometimes even after setting the seed to fixed, it doesn't always pick up where you left off. This may be due to comfy unloading something. Just a guess.

    CFG is tied to the “CFG” node to the left of the KSampler.

    • Delete this node if you want to set CFG manually on each node.

    The detailers can be used for whatever you want as long as you have the detection models.

    • The detailer nodes used in this workflow take all detected items and do them at the same time.

    • You can use whatever detection model you want. You don't have to use what I use.

    If you are seeing busted results in the detailers, try adding something to the wildcard spec field to help guide the detailer.

    Example: only one eye is detected and the detailer is turning it into nightmare fuel:

    • set the Global Seed to Fixed

    • try adding "eye" (without quotes) to the wildcard spec field

    • click Run

    If you’re using the Nodes 2.0 feature: you are on your own.

    Workflows
    Illustrious

    Details

    Downloads
    254
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/10/2026
    Updated
    3/16/2026
    Deleted
    -

    Files

    notSoSimpleOrIsIt_v17aExperimental.zip