This workflow uses ComfyUI's native SAM 3 nodes to segment the image into masks, then the https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch nodes to crop and upscale the masked region for inpaint editing and stitch the result back into the original image. It's an old idea from the SD 1.5 days, but it's still faster than using an editing model and is useful for plenty of tasks, like improving faces in the background (or even the foreground). As such, it works as an alternative to face detailer and SEGS detailer, with the more modern SAM 3 model.
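The crop-and-stitch idea itself is simple, and can be sketched in plain Python without any model. The sketch below is illustrative only: the function names (`mask_bbox`, `stitch`, etc.) are made up for this example and are not the actual API of the CropAndStitch nodes; nested lists stand in for images, and a constant fill stands in for the inpaint step.

```python
# Sketch of crop -> upscale -> edit -> downscale -> stitch.
# Images are nested lists of ints; the "inpaint" is a stand-in constant fill.

def mask_bbox(mask):
    """Bounding box (x0, y0, x1, y1) of nonzero pixels in a 2D mask."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    xs = [x for x in range(len(mask[0])) if any(row[x] for row in mask)]
    return min(xs), min(ys), max(xs) + 1, max(ys) + 1

def expand(bbox, pad, w, h):
    """Grow the box by `pad` pixels of surrounding context, clamped to the image."""
    x0, y0, x1, y1 = bbox
    return max(0, x0 - pad), max(0, y0 - pad), min(w, x1 + pad), min(h, y1 + pad)

def upscale_nn(img, factor):
    """Nearest-neighbour upscale: repeat each pixel and each row `factor` times."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

def downscale_nn(img, factor):
    """Nearest-neighbour downscale: keep every `factor`-th pixel and row."""
    return [row[::factor] for row in img[::factor]]

def stitch(base, patch, x0, y0):
    """Paste the edited patch back into a copy of the original image."""
    out = [row[:] for row in base]
    for dy, row in enumerate(patch):
        for dx, px in enumerate(row):
            out[y0 + dy][x0 + dx] = px
    return out

# Tiny demo: a 6x6 image with a 2x2 "face" region marked in the mask.
W = H = 6
image = [[0] * W for _ in range(H)]
mask = [[0] * W for _ in range(H)]
mask[2][2] = mask[2][3] = mask[3][2] = mask[3][3] = 1

x0, y0, x1, y1 = expand(mask_bbox(mask), pad=1, w=W, h=H)
crop = [row[x0:x1] for row in image[y0:y1]]

big = upscale_nn(crop, 4)                    # sample the region at higher resolution
edited = [[9 for _ in row] for row in big]   # stand-in for the actual inpaint model
small = downscale_nn(edited, 4)              # back to the crop's original size

result = stitch(image, small, x0, y0)        # only the cropped region changed
```

The point of the upscale step is that the inpainting model gets to work at a resolution where a small face fills the whole canvas, which is why this beats running an editing model over the full image.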
I've used Z-Image Turbo for this workflow, with the GGUF CLIP, but you can swap in any other model you like. There are reroute nodes for the model, CLIP, and VAE if you need them.
Besides lquesada's Inpaint-CropAndStitch custom nodes, the workflow uses the https://github.com/city96/ComfyUI-GGUF nodes for the CLIP and https://github.com/pythongosssss/ComfyUI-Custom-Scripts for a sound notification at the end. If you use safetensors for the CLIP, you don't need the GGUF loader, and if you don't want the notification sound, you can simply delete that node.
Description
V1: First release
