⚡ Illustrious Workflow as of version 18
🛠️ Purpose & Design Philosophy
This workflow is a high-fidelity environment built for Illustrious XL. It prioritizes stability and professional texture over generation speed. It follows an "all-in-one" philosophy: configure your prompts, hit queue, and let the workflow handle the multi-stage refinement from start to finish.
Not for Speed: This is a heavy-duty refinement tool. If you want 2-second previews, use a basic SDXL workflow.
Personal Use: Built for my specific production needs. It is shared as-is for those who want a "set-and-forget" pipeline for Illustrious.
All-in-One Logic: The workflow handles generation, detailing, and upscaling in one continuous pass.
🚀 Key Features & 2026 Logic
Global Model Patching (RescaleCFG): Includes a pre-configured RescaleCFG patch (Multiplier: 0.7) applied globally. This acts as "HDR Insurance," preventing the "deep-fried" or over-saturated look common in high-CFG Illustrious runs.
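For those curious what the RescaleCFG patch actually does under the hood, here is a minimal sketch of the rescaled-CFG idea (after Lin et al.'s "Common Diffusion Noise Schedules" paper, which the ComfyUI node implements). The function names and array shapes are illustrative, not the node's actual code:

```python
import numpy as np

def rescale_cfg(cond, uncond, cfg_scale=7.0, multiplier=0.7):
    """Sketch of rescaled CFG: blend the standard CFG output with a
    copy rescaled to the conditional prediction's standard deviation.
    This is what tames the over-saturated / "deep-fried" look at
    high CFG values."""
    x_cfg = uncond + cfg_scale * (cond - uncond)     # standard CFG combine
    x_rescaled = x_cfg * (cond.std() / x_cfg.std())  # match cond's std dev
    return multiplier * x_rescaled + (1.0 - multiplier) * x_cfg
```

With the workflow's multiplier of 0.7, the output is 70% rescaled and 30% plain CFG.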
Detail Daemon Sampler: Integrated to enhance structural depth. In this version, it is tuned to start at 0.4 to preserve the core Illustrious character proportions while sharpening hair and eye details.
Hybrid Upscale Strategy:
Group Bypass Switch: Easily toggle between a Pixel-only (Lanczos) path for flat anime styles and an Upscale-Model path for 2.5D/highly detailed renders.
Ultimate SD Upscale: Re-draws the upscaled canvas at a 0.35 denoise to lock in fine textures.
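To illustrate the difference between the two paths: the pixel-only branch is just a resampling filter with no model inference, as in this hedged Pillow sketch (function name and factor are mine, not from the workflow):

```python
from PIL import Image

def pixel_upscale(img: Image.Image, factor: float = 2.0) -> Image.Image:
    """Pixel-only path: plain Lanczos resample, no upscale model.
    Fast and crisp for flat anime shading; the model path (an ESRGAN-style
    checkpoint) generally does better on 2.5D texture and fine detail."""
    w, h = img.size
    return img.resize((int(w * factor), int(h * factor)), Image.LANCZOS)
```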
Power LoRA Loader: Manage multiple Illustrious-specific LoRAs without messy wiring.
Triple Detailer Groups: 3-stage targeted refinement for faces, hands, and clothing using standard detection models.
CivitAI Meta-Sync: Images are saved with full metadata (Model, LoRAs, Sampler info) for automatic site parsing.
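CivitAI's parser reads the A1111-style "parameters" text chunk embedded in the PNG. A minimal sketch of what the save node writes, using Pillow; every value below is a made-up example, and the real workflow fills these fields in automatically from the run:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical values purely for illustration.
meta = PngInfo()
meta.add_text(
    "parameters",
    "1girl, masterpiece\n"
    "Negative prompt: lowres, bad hands\n"
    "Steps: 28, Sampler: Euler a, CFG scale: 5.5, Seed: 12345, "
    "Size: 1024x1024, Model: illustriousXL",
)
Image.new("RGB", (64, 64)).save("meta_demo.png", pnginfo=meta)
```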
⚠️ Disclaimer & Compatibility
Install at Your Own Risk: Custom nodes can break your environment. I am not responsible for troubleshooting your specific installation.
ComfyUI Portable: Built and tested on the Portable version. Desktop app users may face additional hurdles.
The "Your Version" Factor: Your node versions and environment are 99.9% likely to differ from mine.
Nodes 2.0: I do not recommend using Nodes 2.0. It creates unpredictable UI behavior; I will not provide support for issues involving this feature.
🤝 Support & Boundaries
No DMs: DMs are disabled due to repeat spam. Please check the Discussions tab below; most questions have already been answered.
Modifications: You are free to hack this workflow apart. However, you are responsible for fixing it if it breaks.
Custom Requests: I do not make private workflows. If you need a custom solution, post a Bounty on CivitAI. There are many talented creators ready to help you for a fee.
Description
Added a very basic ControlNet & IMG2IMG group (same as 13e)
If you don't know how to use ControlNet or IMG2IMG, there's plenty of info out there.
ControlNet:
I rarely use it myself, and when I do, it's essentially a more complicated version of IMG2IMG.
TL;DR for Image 2 Image in this workflow:
Put your source image into the Load Image node.
Adjust the denoise to something below 1.0.
Change the toggle on the Latent Switch to 2.
Click Run.
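Why denoise below 1.0 matters: the sampler then only runs the tail of the step schedule, so the source image's composition survives. A rough sketch of the arithmetic (the exact step math varies between sampler nodes; this is the general idea, not ComfyUI's code):

```python
import math

def img2img_schedule(total_steps: int, denoise: float):
    """Rough sketch: a denoise of d on an N-step schedule runs about
    ceil(N * d) steps, starting partway down the noise schedule.
    Returns (steps_skipped, steps_executed)."""
    steps_run = math.ceil(total_steps * denoise)
    return total_steps - steps_run, steps_run
```

So at 30 steps, denoise 0.5 keeps roughly half the source image's structure by skipping the first 15 steps, while denoise 1.0 ignores the source entirely.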
As previously mentioned:
The Global Seed and Global Sampler nodes will override any of the settings related to them across the workflow.
Selecting a sampler or scheduler that is incompatible with any of these nodes will cause an error.
You can bypass or delete these two nodes if you want to do these settings manually on each node.
CFG is tied to the “CFG” node to the left of the KSampler.
Delete this node if you want to set CFG manually on each node.
The detailers can be used for whatever you want, as long as you have the corresponding detection models.
The detailer nodes in this workflow process all detected regions in a single pass.
You can use whatever detection model you want. You don't have to use what I use.
If you’re using the Nodes 2.0 feature: you are on your own.
Custom nodes used in this workflow: