⚡ Illustrious Workflow as of version 18
🛠️ Purpose & Design Philosophy
This workflow is a high-fidelity environment built for Illustrious XL. It prioritizes stability and professional texture over generation speed. It follows an "all-in-one" philosophy: configure your prompts, hit queue, and let the workflow handle the multi-stage refinement from start to finish.
Not for Speed: This is a heavy-duty refinement tool. If you want 2-second previews, use a basic SDXL workflow.
Personal Use: Built for my specific production needs. It is shared as-is for those who want a "set-and-forget" pipeline for Illustrious.
All-in-One Logic: The workflow handles generation, detailing, and upscaling in one continuous pass.
🚀 Key Features & 2026 Logic
Global Model Patching (RescaleCFG): Includes a pre-configured RescaleCFG patch (Multiplier: 0.7) applied globally. This acts as "HDR Insurance," preventing the "deep-fried" or over-saturated look common in high-CFG Illustrious runs.
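RescaleCFG comes from the paper "Common Diffusion Noise Schedules and Sample Steps Are Flawed": it blends plain CFG with a variant rescaled to match the conditional prediction's standard deviation, which is what tames the over-saturation. A minimal scalar sketch of the idea (flattened lists for illustration; not ComfyUI's actual tensor implementation):

```python
import statistics

def rescale_cfg(cond, uncond, scale, multiplier=0.7):
    """Classifier-free guidance with std-rescaling.

    cond/uncond: per-element model predictions (flattened for illustration).
    multiplier: 0.7 matches this workflow's global patch; 0 = plain CFG.
    """
    # Plain CFG can blow up the output's dynamic range at high scales
    cfg = [u + scale * (c - u) for c, u in zip(cond, uncond)]
    # Rescale the guided result back toward the conditional prediction's std
    factor = statistics.pstdev(cond) / statistics.pstdev(cfg)
    rescaled = [x * factor for x in cfg]
    # The multiplier controls how much of the rescaled version is blended in
    return [multiplier * r + (1 - multiplier) * x
            for r, x in zip(rescaled, cfg)]
```

At multiplier 1.0 the output's standard deviation matches the conditional prediction exactly; 0.7 keeps most of that correction while preserving some of the raw CFG contrast.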
Detail Daemon Sampler: Integrated to enhance structural depth. In this version, it is tuned to start at 0.4 to preserve the core Illustrious character proportions while sharpening hair and eye details.
Hybrid Upscale Strategy:
Group Bypass Switch: Easily toggle between a pixel-only (Lanczos) path for flat anime styles and an upscale-model path for 2.5D/highly detailed renders.
Ultimate SD Upscale: Re-draws the upscaled canvas at a 0.35 denoise to lock in fine textures.
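Ultimate SD Upscale works by splitting the upscaled canvas into overlapping tiles and re-denoising each one at that low denoise value. A rough sketch of how an overlapping tile grid can be computed (names and logic are illustrative, not the node's actual code; assumes the tile fits inside the image):

```python
def tile_coords(width, height, tile=1024, overlap=64):
    """Top-left corners for an overlapping tile grid over the canvas."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Make sure the final row/column of tiles reaches the image edges
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

Each tile is then re-drawn at the chosen denoise (0.35 here) and blended back; the overlap is what the seams-fix options smooth over.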
Power LoRA Loader: Manage multiple Illustrious-specific LoRAs without messy wiring.
Triple Detailer Groups: 3-stage targeted refinement for faces, hands, and clothing using standard detection models.
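Each detailer stage follows the same pattern: a detection model yields bounding boxes, each region is cropped, re-rendered, and pasted back. A toy sketch of that crop-refine-paste loop (`refine` is a hypothetical stand-in for the masked re-sampling pass):

```python
def detail_pass(image, boxes, refine):
    """image: 2D list of pixel values (toy stand-in for a tensor).
    boxes: (x, y, w, h) regions from a detection model.
    refine: hypothetical callback standing in for the inpainting sampler."""
    for x, y, w, h in boxes:
        crop = [row[x:x + w] for row in image[y:y + h]]
        fixed = refine(crop)
        # Paste the refined region back over the original pixels
        for dy, row in enumerate(fixed):
            image[y + dy][x:x + w] = row
    return image
```

Running three such passes in sequence (face, hands, clothing) is all the "triple detailer" amounts to conceptually; the real nodes add masking, padding, and feathered blending.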
CivitAI Meta-Sync: Images are saved with full metadata (Model, LoRAs, Sampler info) for automatic site parsing.
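CivitAI's automatic parsing reads A1111-style generation parameters from a PNG `tEXt` chunk keyed `parameters`; the save node in the workflow writes this for you. For the curious, a stdlib-only sketch of what embedding such a chunk looks like (illustrative, not the node's code):

```python
import struct
import zlib

def png_text_chunk(keyword, text):
    """Build a PNG tEXt chunk: length + type + data + CRC32(type + data).
    tEXt is Latin-1 per the PNG spec; UTF-8 text would use iTXt instead."""
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    crc = zlib.crc32(b"tEXt" + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + b"tEXt" + data + struct.pack(">I", crc)

def embed_parameters(png_bytes, params):
    """Insert the metadata chunk just before the IEND chunk."""
    iend = png_bytes.rfind(b"IEND") - 4  # back up over IEND's length field
    chunk = png_text_chunk("parameters", params)
    return png_bytes[:iend] + chunk + png_bytes[iend:]
```

This is why a plain re-save through an image editor can strip the metadata: editors often drop unknown text chunks.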
⚠️ Disclaimer & Compatibility
Install at Your Own Risk: Custom nodes can break your environment. I am not responsible for troubleshooting your specific installation.
ComfyUI Portable: Built and tested on the Portable version. Desktop app users may face additional hurdles.
The "Your Version" Factor: Your node versions and environment are 99.9% likely to differ from mine.
Nodes 2.0: I do not recommend using Nodes 2.0. It creates unpredictable UI behavior; I will not provide support for issues involving this feature.
🤝 Support & Boundaries
No DMs: DMs are disabled due to repeat spam. Please check the Discussions tab below; most questions have already been answered.
Modifications: You are free to hack this workflow apart. However, you are responsible for fixing it if it breaks.
Custom Requests: I do not make private workflows. If you need a custom solution, post a Bounty on CivitAI. There are many talented creators ready to help you for a fee.
Description
v8 changes:
Note: a newer version does not mean a better one; it's just what I am using/experimenting with currently.
Testing performed using Distance sampler (n & p versions) and a variety of schedulers on Better Days (Illustrious based merge) on ComfyUI v0.3.31 and Comfy Frontend v1.18.6
Disclaimer: If you are using a different version of any of the components listed above, this workflow may not work for you. I can't account for every difference, since we are all potentially running different versions of something.
Adjustments were made to work with the experimental Distance sampler.
TL;DR for this sampler:
“A custom experimental sampler based on relative distances. The first few steps are slower and then the sampler accelerates (the end is made with Heun). The idea is to get a more precise start since this is when most of the work is being done.”
Uses a low step count (4 to 10); the author recommends 7 steps with the AYS or Beta schedulers. (You can always try other schedulers too. YMMV.)
A complete explanation of this sampler can be found on the project page.
Note: this particular sampler does not seem to work with v-pred models (at least not on Lobotomized Mix).
Installing the Distance sampler also adds a couple of CFG++ samplers that I have not tested.
Image generation from start to finish on a 5060 Ti 16GB takes roughly 3 minutes with the settings I used for the sample images.
Settings will need to be adjusted to fit your preferences (as always).
Using a different sampler/scheduler combo and switching USDU seams_fix_mode to “None” can speed up the process greatly.
Added:
Mahiro - “to make CFG less dumb”. As quoted here.
Guidance Limiter from the ComfyUI-ppm custom nodes which is an implementation of this.
As for its settings, I am leaving them at the defaults; the project page does not appear to document the two options.
Boolean switches above the KSampler and USDU nodes for toggling Detail Daemon on and off.
These are toggled to “true” by default.
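For reference, the Guidance Limiter implements interval guidance ("Applying Guidance in a Limited Interval"): CFG is applied only while the noise level sits inside a window, which is presumably what those two settings control. A scalar sketch of my reading of the idea (not ComfyUI-ppm's actual code):

```python
def limited_cfg(cond, uncond, scale, sigma, sigma_start, sigma_end):
    """Apply CFG only when sigma falls inside [sigma_end, sigma_start].

    Outside the interval the conditional prediction is returned unchanged,
    i.e. guidance is effectively scale 1. All values are scalars here.
    """
    if sigma_end <= sigma <= sigma_start:
        return uncond + scale * (cond - uncond)
    return cond
```

The paper's claim is that guidance mostly helps in the middle of the denoising trajectory, so skipping it at the very start and end improves quality at the same CFG scale.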
Removed:
2nd KSampler
This was not beneficial enough for me to keep in the workflow.
It seemed to make the image worse in most cases.
Perturbed Attention Guidance has been removed.
It was not beneficial enough for me to keep it.
It cost roughly 30% more generation time for a possibly better result.