⚡ Illustrious Workflow as of version 18
🛠️ Purpose & Design Philosophy
This workflow is a high-fidelity environment built for Illustrious XL. It prioritizes stability and professional texture over generation speed. It follows an "all-in-one" philosophy: configure your prompts, hit queue, and let the workflow handle the multi-stage refinement from start to finish.
Not for Speed: This is a heavy-duty refinement tool. If you want 2-second previews, use a basic SDXL workflow.
Personal Use: Built for my specific production needs. It is shared as-is for those who want a "set-and-forget" pipeline for Illustrious.
All-in-One Logic: The workflow handles generation, detailing, and upscaling in one continuous pass.
🚀 Key Features & 2026 Logic
Global Model Patching (RescaleCFG): Includes a pre-configured RescaleCFG patch (Multiplier: 0.7) applied globally. This acts as "HDR Insurance," preventing the "deep-fried" or over-saturated look common in high-CFG Illustrious runs.
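The RescaleCFG technique comes from Lin et al.'s "Common Diffusion Noise Schedules and Sample Steps Are Flawed". A minimal NumPy sketch of what the patch does per step (illustrative only, not the node's actual source; `cond`/`uncond` stand in for the model's conditional and unconditional predictions):

```python
import numpy as np

def rescale_cfg(cond, uncond, cfg_scale=7.0, multiplier=0.7):
    """RescaleCFG: run standard CFG, rescale the result so its standard
    deviation matches the conditional prediction's, then blend the
    rescaled and raw CFG outputs by `multiplier` (0.7 in this workflow).
    Keeping the std in check is what prevents the over-saturated,
    "deep-fried" look at high CFG."""
    cfg = uncond + cfg_scale * (cond - uncond)   # standard CFG step
    rescaled = cfg * (cond.std() / cfg.std())    # match std of cond
    return multiplier * rescaled + (1.0 - multiplier) * cfg
```

With `multiplier=0` this reduces to plain CFG; with `multiplier=1` the output's contrast is fully clamped to the conditional prediction's.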
Detail Daemon Sampler: Integrated to enhance structural depth. In this version, it is tuned to start at 0.4 to preserve the core Illustrious character proportions while sharpening hair and eye details.
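Detail Daemon's internals aren't covered here, but the "start at 0.4" knob can be pictured as an adjustment ramp over sampling progress that leaves the first 40% of the schedule untouched (a hypothetical shape for intuition only, not the node's actual curve; `peak` is an assumed placeholder strength):

```python
def detail_amount(progress, start=0.4, end=1.0, peak=0.25):
    """Triangular ramp over sampling progress in [0, 1]: zero before
    `start` (so early steps, which fix composition and proportions, are
    untouched), rising to `peak` mid-window, falling back to zero at
    `end`. Hypothetical shape for illustration only."""
    if progress <= start or progress >= end:
        return 0.0
    mid = (start + end) / 2.0
    if progress < mid:
        return peak * (progress - start) / (mid - start)
    return peak * (end - progress) / (end - mid)
```

The point of the 0.4 start is visible in the shape: nothing is perturbed while the core character is being laid down, and the detail push only kicks in for the later, texture-heavy steps.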
Hybrid Upscale Strategy:
Group Bypass Switch: Easily toggle between a Pixel-only (Lanczos) path for flat anime styles and an Upscale-Model path for 2.5D/highly detailed renders.
Ultimate SD Upscale: Re-draws the upscaled canvas at a 0.35 denoise to lock in fine textures.
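Ultimate SD Upscale's trick is tiled img2img: the upscaled canvas is split into overlapping tiles, each tile is re-drawn at low denoise, and the tiles are feather-blended back together so the seams disappear. A simplified NumPy sketch of that loop (the `redraw` callback stands in for a 0.35-denoise img2img pass; the real node also offers seam-fix modes, masks, etc.):

```python
import numpy as np

def tiled_redraw(image, tile=64, overlap=16, redraw=lambda t: t):
    """Split a 2-D canvas into overlapping tiles, re-draw each tile via
    `redraw`, and blend them back with a linear feather mask so
    overlapping regions average smoothly instead of showing seams."""
    h, w = image.shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            patch = redraw(image[y:y1, x:x1])
            th, tw = y1 - y, x1 - x
            # feather: weight ramps up from each tile edge toward the center
            wy = np.minimum(np.arange(th) + 1, th - np.arange(th))
            wx = np.minimum(np.arange(tw) + 1, tw - np.arange(tw))
            mask = np.minimum.outer(wy, wx).astype(float)
            out[y:y1, x:x1] += patch * mask
            weight[y:y1, x:x1] += mask
    return out / weight
```

With an identity `redraw` the canvas round-trips unchanged, which is exactly why a low 0.35 denoise "locks in" textures: each tile only drifts slightly from the upscaled source.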
Power LoRA Loader: Manage multiple Illustrious-specific LoRAs without messy wiring.
Triple Detailer Groups: 3-stage targeted refinement for faces, hands, and clothing using standard detection models.
CivitAI Meta-Sync: Images are saved with full metadata (Model, LoRAs, Sampler info) for automatic site parsing.
⚠️ Disclaimer & Compatibility
Install at Your Own Risk: Custom nodes can break your environment. I am not responsible for troubleshooting your specific installation.
ComfyUI Portable: Built and tested on the Portable version. Desktop app users may face additional hurdles.
The "Your Version" Factor: Your node versions and environment are 99.9% likely to differ from mine.
Nodes 2.0: I do not recommend using Nodes 2.0. It creates unpredictable UI behavior; I will not provide support for issues involving this feature.
🤝 Support & Boundaries
No DMs: DMs are disabled due to repeat spam. Please check the Discussions tab below; most questions have already been answered.
Modifications: You are free to hack this workflow apart. However, you are responsible for fixing it if it breaks.
Custom Requests: I do not make private workflows. If you need a custom solution, post a Bounty on CivitAI. There are many talented creators ready to help you for a fee.
Description
v8b changes:
Added:
FreeU v2:
This node has been part of Comfy Core for a while. I still don't fully understand how its settings work, but the defaults are apparently tuned for SDXL.
It definitely affects the image output. If you're curious what the settings do, a quick Google search or your preferred AI can explain.
I use it with lower CFG settings than normal since enabling it seems to cause cooked images (for me).
It seems to push the image toward an anime output, but that could just be me.
This node is disabled by default.
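For the curious: FreeU (from the paper "FreeU: Free Lunch in Diffusion U-Net") amplifies the U-Net decoder's backbone features while damping the low-frequency band of the skip-connection features with an FFT mask; the b settings control the former and the s settings the latter. A NumPy sketch of the skip-feature filter (illustrative only, not the Comfy Core implementation):

```python
import numpy as np

def fourier_filter(feat, threshold=1, scale=0.9):
    """Damp the low-frequency band of a 2-D feature map: FFT, shrink a
    small window around the zero-frequency bin by `scale`, inverse FFT.
    This is the skip-connection half of FreeU; the backbone half is
    just a plain multiply by the b factor."""
    freq = np.fft.fftshift(np.fft.fft2(feat))
    h, w = feat.shape
    cy, cx = h // 2, w // 2
    mask = np.ones((h, w))
    mask[cy - threshold:cy + threshold, cx - threshold:cx + threshold] = scale
    return np.real(np.fft.ifft2(np.fft.ifftshift(freq * mask)))
```

`scale=1.0` is a no-op (a handy sanity check); values below 1 let relatively more high-frequency detail through, which fits the observation that enabling the node visibly changes the output.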
ControlNet:
Added some basic ControlNet functions to the KSampler and both USDU nodes.
The nodes involved will require comfyui_controlnet_aux and Comfyroll Studio.
The KSampler ControlNet group will have a Load Image node, 3 AIO Aux Preprocessor nodes, 1 CR Multi-ControlNet Stack node, and 1 CR Apply Multi-ControlNet node.
The KSampler ControlNet group is bypassed by default.
There used to be an issue where an empty Load Image node would stop the whole workflow from running. AFAIK this has been fixed; if not, the workaround is to load any placeholder image there.
Each USDU ControlNet group is the same as the KSampler ControlNet group, but without the Load Image node.
If you don’t want to use ControlNet, you can just delete these groups from the workflow or bypass them.
I am using it for the TTPlanet function built into the AIO Preprocessor, in conjunction with ControlNet Union (available in ComfyUI-Manager under Model Manager; search for "union" when filtering for ControlNet models, and either non-Flux version should work).
Generation speed is still around 3 minutes for me from start to finish, even with ControlNet enabled.
Removed:
Guidance Limiter:
Didn’t feel it was worth keeping. If you liked it, you can just re-add it easily. It’s part of ComfyUI-ppm.