⚡ Illustrious Workflow as of version 18
🛠️ Purpose & Design Philosophy
This workflow is a high-fidelity environment built for Illustrious XL. It prioritizes stability and professional texture over generation speed. It follows an "all-in-one" philosophy: configure your prompts, hit queue, and let the workflow handle the multi-stage refinement from start to finish.
Not for Speed: This is a heavy-duty refinement tool. If you want 2-second previews, use a basic SDXL workflow.
Personal Use: Built for my specific production needs. It is shared as-is for those who want a "set-and-forget" pipeline for Illustrious.
All-in-One Logic: The workflow handles generation, detailing, and upscaling in one continuous pass.
🚀 Key Features & 2026 Logic
Global Model Patching (RescaleCFG): Includes a pre-configured RescaleCFG patch (Multiplier: 0.7) applied globally. This acts as "HDR Insurance," preventing the "deep-fried" or over-saturated look common in high-CFG Illustrious runs.
Detail Daemon Sampler: Integrated to enhance structural depth. In this version, it is tuned to start at 0.4 to preserve the core Illustrious character proportions while sharpening hair and eye details.
Hybrid Upscale Strategy:
Group Bypass Switch: Easily toggle between a Pixel-only (Lanczos) path for flat anime styles and an Upscale-Model path for 2.5D/highly detailed renders.
Ultimate SD Upscale: Re-draws the upscaled canvas at a 0.35 denoise to lock in fine textures.
Power LoRA Loader: Manage multiple Illustrious-specific LoRAs without messy wiring.
Triple Detailer Groups: 3-stage targeted refinement for faces, hands, and clothing using standard detection models.
CivitAI Meta-Sync: Images are saved with full metadata (Model, LoRAs, Sampler info) for automatic site parsing.
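The RescaleCFG patch above is based on the guidance-rescaling trick from "Common Diffusion Noise Schedules and Sample Steps are Flawed": the standard CFG output is rescaled toward the conditional prediction's standard deviation, then blended back by the multiplier. A minimal sketch of the idea, with illustrative variable names (this is not ComfyUI's actual implementation):

```python
import numpy as np

def rescale_cfg(cond, uncond, cfg_scale, multiplier=0.7):
    """Blend standard CFG output with a std-rescaled version.

    multiplier=0.7 matches the workflow's pre-configured patch value;
    higher values pull harder toward the rescaled (less saturated) result.
    """
    # Standard classifier-free guidance
    x_cfg = uncond + cfg_scale * (cond - uncond)
    # Rescale so the guided prediction keeps the conditional prediction's std,
    # which counteracts the "deep-fried" look at high CFG
    x_rescaled = x_cfg * (cond.std() / x_cfg.std())
    # Linear blend controlled by the multiplier
    return multiplier * x_rescaled + (1.0 - multiplier) * x_cfg
```

With the multiplier at 0 this reduces to plain CFG; at 1 the output's standard deviation matches the conditional prediction exactly.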
⚠️ Disclaimer & Compatibility
Install at Your Own Risk: Custom nodes can break your environment. I am not responsible for troubleshooting your specific installation.
ComfyUI Portable: Built and tested on the Portable version. Desktop app users may face additional hurdles.
The "Your Version" Factor: Your node versions and environment are 99.9% likely to differ from mine.
Nodes 2.0: I do not recommend using Nodes 2.0. It creates unpredictable UI behavior; I will not provide support for issues involving this feature.
🤝 Support & Boundaries
No DMs: DMs are disabled due to repeat spam. Please check the Discussions tab below; most questions have already been answered.
Modifications: You are free to hack this workflow apart. However, you are responsible for fixing it if it breaks.
Custom Requests: I do not make private workflows. If you need a custom solution, post a Bounty on CivitAI. There are many talented creators ready to help you for a fee.
Description
Updated the file to have a single default version plus image examples of various settings that can be dragged and dropped into ComfyUI in your browser.
The image examples should cover most usage scenarios.
v5b changes:
Dropped ComfyUI-Adaptive-Guidance
Did not seem beneficial enough to keep in the workflow
To make full use of it, I would have to create a toggle for the normal node and the negative node version at a minimum.
I got better results when just using a standard guider node in many cases.
Added a switch from ComfyUI_Comfyroll_CustomNodes that allows the IMG2IMG group to be bypassed.
This node just changes the latent source going into the first KSampler.
As far as I know, you will still need an image placed in the Load Image node, but you can try leaving it empty and see if it works.
Added a switch below the 1st KSampler that selects either latent upscaling or image upscaling with an upscale model.
This affects what latent source feeds into the 2nd KSampler.
The 2nd KSampler by default is set to 1x Upscale, but you can adjust it to a higher number. I use it as a 2nd pass KSampler.
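The two paths this switch selects between can be sketched as follows. Function names here are illustrative placeholders, not actual node names: the latent path scales the latent directly, while the image path decodes to pixels, runs the upscale model, and re-encodes before the 2nd KSampler.

```python
def second_pass_input(mode, latent, vae_decode, vae_encode,
                      latent_upscale, pixel_upscale):
    """Return the latent fed into the 2nd KSampler.

    mode: "latent" scales in latent space (faster, softer);
          "image" decodes, upscales with a model, and re-encodes
          (slower, sharper, but adds a VAE round-trip).
    """
    if mode == "latent":
        return latent_upscale(latent)
    image = vae_decode(latent)
    return vae_encode(pixel_upscale(image))
```

The trade-off is the usual one: latent upscaling avoids the VAE round-trip but blurs fine detail, so the second KSampler's denoise has to recover more.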
I've included 3 versions of the workflow with different settings.
The one I used to make the sample images will be the demo_settings version.
With everything turned on and USDU set to Half-tile: it takes about 4 minutes from start to finish on my 3060.