Inside:
Wan 2.2 14B:
T2I Native
T2V Kijai
I2V Kijai
I2V extended (as long as the video doesn't degenerate too much)
Fun Control (Depth, Pose, Trajectories, Ref. Image, Start Image)
Qwen Image
Qwen Image + Wan 2.2 Refiner
Flux Kontext
Chroma T2I
After reinstalling ComfyUI and discovering the many new models now available, I decided to structure my workflows to keep them as clear and flexible as possible. The goal is to minimize searching, make the most important settings quick to adjust, and still achieve high-quality results.
I’ve also added several functions I occasionally use, such as face swapping or saving the last frame to easily continue with a follow-up video. To make use of these workflows, it’s enough to install a few custom nodes like Video Helper, WanVideoWrapper, or KJNodes.
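For anyone curious what the last-frame trick boils down to outside ComfyUI: the workflow handles it with nodes, but a rough standalone sketch in Python (using OpenCV, with placeholder filenames) would look like this.

```python
# Standalone sketch (not part of the workflow): extract the last frame of a
# finished clip so it can serve as the start image of the next I2V run.
# Filenames are placeholders.
import cv2

cap = cv2.VideoCapture("clip_001.mp4")
last_frame = None
while True:
    ok, frame = cap.read()
    if not ok:
        break               # end of video reached
    last_frame = frame
cap.release()

if last_frame is not None:
    cv2.imwrite("clip_001_last_frame.png", last_frame)
```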
For prompts, I rely on a prompt enhancer powered by a large model via OpenRouter. It’s straightforward to set up, supports NSFW, saves VRAM, and delivers excellent results. You will need an API key, though; alternatively, you can point it at another API, or simply remove the node and stick with manual prompting. The LLM-Party custom nodes are highly versatile as well, making it easy to switch over to a local LLM if needed.
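For reference, the enhancement step is roughly equivalent to a single chat-completion call against OpenRouter's OpenAI-compatible endpoint. This is only a minimal sketch, assuming the openai Python client, an OPENROUTER_API_KEY environment variable, and an example model name; swap in whatever model you prefer on OpenRouter.

```python
# Minimal sketch of an OpenRouter-backed prompt enhancer (outside ComfyUI).
# Assumes the openai Python client and an OPENROUTER_API_KEY env var.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # OpenRouter speaks the OpenAI API
    api_key=os.environ["OPENROUTER_API_KEY"],
)

def enhance_prompt(short_prompt: str,
                   model: str = "meta-llama/llama-3.1-70b-instruct") -> str:
    """Expand a terse idea into a detailed prompt for a T2I/T2V model."""
    response = client.chat.completions.create(
        model=model,  # example model name; pick any model available on OpenRouter
        messages=[
            {"role": "system",
             "content": "Rewrite the user's idea as a detailed, cinematic prompt "
                        "for an image or video generation model."},
            {"role": "user", "content": short_prompt},
        ],
    )
    return response.choices[0].message.content

print(enhance_prompt("a fox running through snow at dusk"))
```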
I use multiple switch nodes to streamline the workflows and ensure that the correct sizes, images, and prompts are always applied, even if you forget to disable a group. Thanks to rgthree nodes, entire groups can be toggled on or off with ease. I also recommend Res4LYF, which gives access to samplers like res_2s and schedulers such as bong_tangent or beta57.
Models and LoRAs can be found and downloaded primarily from Hugging Face (especially from Kijai or Comfy-Org) or from Civitai.
Please note: these workflows are not intended for complete beginners, as I deliberately omit explanatory info nodes.
If you have any questions, feel free to reach out.