How to use this workflow
It looks best on a 1440p monitor. If your monitor is a different size, you can adjust the zoom level in the Bookmark nodes to fit your screen.
Navigation
To navigate between sections, press the number keys [1-6].
Info (1)
You won't need to spend much time on the info screen, so once you know how the workflow works, stick to the Main workspace.
Main workspace (2)
This is where you write your prompt, choose the output filename, switch Ollama ON and OFF, and preview your images and the final prompt.
Prompt (3)
Ollama setup and final prompt preparation happen here.
Models & LoRAs (4)
You can choose Turbo models here, both Checkpoint (AIO) and Diffusion versions.
⚡ The CacheDiT Accelerator node speeds up the generation process.
Pay special attention to the Model Patch Torch Settings node. If your graphics card does not support PyTorch, simply bypass or remove the node.
Magic 🪄 (5)
Here you choose the resolution, aspect ratio, and orientation of your image. I suggest sticking to 1 megapixel; the SeedVR upscaler is quite good at its job.
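If you're curious what "1 megapixel" works out to for a given aspect ratio, here is a minimal Python sketch. The helper name and the rounding multiple are my own assumptions; check your resolution node for the exact snapping it uses:

```python
import math

def dims_for_megapixels(aspect_w: int, aspect_h: int,
                        megapixels: float = 1.0,
                        multiple: int = 64) -> tuple[int, int]:
    """Width/height near a megapixel budget for a given aspect ratio,
    rounded to a multiple (diffusion models usually want dimensions
    divisible by 8 or 64)."""
    target = megapixels * 1_000_000
    # width / height = aspect_w / aspect_h and width * height ~= target
    height = math.sqrt(target * aspect_h / aspect_w)
    width = height * aspect_w / aspect_h
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

print(dims_for_megapixels(1, 1))    # square at ~1 MP
print(dims_for_megapixels(16, 9))   # widescreen at ~1 MP
```

At 1:1 this lands on 1024x1024, which is exactly the kind of resolution these models are happiest with.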
Upscale & Save (6)
The upscaled image is saved in the following location: ComfyUI/output/Z-Image Turbo/[CURRENT DATE]/. All generation metadata is embedded into the final image, so CivitAI reads it without problems, all thanks to the Save Image with Metadata Universal node.
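As a rough sketch of where today's files end up (assuming an ISO-style date folder; the actual date format depends on the save node's settings):

```python
from datetime import date
from pathlib import Path

# "Z-Image Turbo" is the subfolder configured in the save node.
output_root = Path("ComfyUI/output/Z-Image Turbo")

# One subfolder per day, e.g. ComfyUI/output/Z-Image Turbo/2025-06-01
todays_folder = output_root / date.today().isoformat()
print(todays_folder)
```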
Notification sounds (optional)
Place my custom [notification sounds](https://drive.google.com/drive/folders/1efJS9O5slbArdEV7SFw6x_9kGPGtL5tF?usp=sharing) in ComfyUI/custom_nodes/comfyui-custom-scripts/web/js/assets/
---
The rest of the workflow is self-explanatory, I hope.
Enjoy!
Description
- Dropped SubGraphs
- Cleaned up the layout
- Updated the info page