My Three-Part Workflow
My workflow is split into three parts. First, I create low-resolution images with SDXL, Z-Image, or Wan 2.2, mostly using the standard ComfyUI workflows. These runs use fewer steps so I can generate many images very quickly. Then I go through the results and sort the good ones from the bad.
The second pass is always SDXL img2img with LCM and a BigLust model, keeping the same prompt and adding a 1.5× upscale node. This fixes most of the anatomical errors. You have to adjust the denoise setting case by case.
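As a rough illustration of what the upscale node does with the image dimensions (the node handles this internally; the helper below is just a sketch), a 1.5× upscale scales both sides while keeping them on the 8-pixel grid that SDXL's latent space uses:

```python
def upscale_dims(width, height, factor=1.5, align=8):
    """Scale image dimensions by `factor`, rounding each side to the
    nearest multiple of `align` (SDXL's VAE works on an 8-px grid)."""
    def snap(v):
        return int(round(v * factor / align) * align)
    return snap(width), snap(height)

# A 1024x1024 first-pass image becomes 1536x1536 at 1.5x.
print(upscale_dims(1024, 1024))  # (1536, 1536)
```

The denoise setting is independent of this: low values (~0.3) mostly preserve the input, higher values (~0.5–0.6) give SDXL more freedom to repaint broken anatomy.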
The third run uses Flux 4B (small) image edit to push the image toward the style I want. The SDXL upscale tends to make the skin look flat, and this step makes the picture look realistic again. An example prompt is: “Transform the lighting in this photograph into a warm incandescent glow, with amber-toned illumination, cozy high-kelvin warmth, and soft falloff that gently smooths surrounding textures.”
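The three passes above can be sketched as a simple pipeline. All function names here are hypothetical placeholders, not real ComfyUI APIs; the actual workflow passes latents between nodes, and the curation step between pass 1 and pass 2 is manual:

```python
from dataclasses import dataclass, replace

@dataclass
class Image:
    """Stand-in for a generated image; real workflows carry tensors."""
    width: int
    height: int
    style: str

def generate_lowres(prompt, steps=12):
    """Pass 1: fast, low-step generation (SDXL / Z-Image / Wan 2.2)."""
    return Image(1024, 1024, style="raw")

def sdxl_img2img_upscale(img, prompt, denoise=0.4, factor=1.5):
    """Pass 2: SDXL + LCM img2img with a 1.5x upscale node.
    `denoise` is tuned per image to fix anatomy without repainting."""
    return replace(img,
                   width=int(img.width * factor),
                   height=int(img.height * factor),
                   style="upscaled")

def flux_style_edit(img, edit_prompt):
    """Pass 3: small Flux image-edit pass to restore realistic skin
    and lighting after the flattening SDXL upscale."""
    return replace(img, style="realistic")

prompt = "portrait, warm light"
batch = [generate_lowres(prompt) for _ in range(4)]
keepers = batch  # manual curation happens here
final = [flux_style_edit(sdxl_img2img_upscale(k, prompt),
                         "warm incandescent glow")
         for k in keepers]
print(final[0].width, final[0].style)  # 1536 realistic
```

The point of the structure is that the cheap first pass absorbs the randomness, so the two expensive passes only run on images worth keeping.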
Comments (5)
Is there any way to incorporate a reference character into this workflow? If you're able to put that together, my life is yours
There are a lot of ways to do this. You're talking about using a picture as a reference, right?
@metulski Yes, so say I have a picture of Loba from Apex that I want in the end product; what would be the best way to go about doing that? Thanks in advance
Do the "starter image first" pass to get pictures you like. Then use the standard Flux Klein workflow with this image and the Loba image, with the prompt: "Replace the woman from image 1 with the woman from image 2." After that, use the upscale workflow.
What’s the benefit of using SDXL?