Example output from using both workflows: https://civarchive.com/images/97448768
Credit goes to DonutsDelivery & Daxamur; refer to them for the creators' unmodified workflows: https://civarchive.com/models/664292 & https://civarchive.com/models/1853617
These are the production workflows I use: generate images in SDXL, then feed each image into Wan2.2 I2V to produce the videos.
My production pipeline is text2img (SDXL) -> img2video (Wan2.2).
This will require installing custom nodes and model files, and may require manual node installs from GitHub. It's been a while since I installed all the dependencies, so it may be a pain in the ass, but as of 8/31/2025 these workflows work.
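If you do end up doing a manual install, the usual pattern is cloning the node repo into ComfyUI's custom_nodes folder and installing its requirements. Here's a minimal Python sketch of that; the repo URL is a hypothetical placeholder, the real list comes from whatever nodes the workflows report as missing:

```python
# Minimal sketch of a manual ComfyUI custom-node install.
# The repo URL below is a placeholder -- use the repos the
# workflows actually report as missing.
import subprocess
from pathlib import Path

CUSTOM_NODES = Path("ComfyUI/custom_nodes")  # adjust to your install

repos = [
    "https://github.com/example/ComfyUI-SomeNodePack",  # placeholder
]

for url in repos:
    dest = CUSTOM_NODES / url.rstrip("/").split("/")[-1]
    if not dest.exists():
        subprocess.run(["git", "clone", url, str(dest)], check=True)
    req = dest / "requirements.txt"
    if req.exists():
        # Install the node pack's deps into the same env ComfyUI uses.
        subprocess.run(["pip", "install", "-r", str(req)], check=True)
```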
All nodes, dependencies, and software are the most current versions.
Tested on Windows with an RTX 5090 and the latest CUDA + Torch + Sage.
Description
This workflow is technically not ideal because it does not correctly balance the steps between the high-noise and low-noise models. The steps should be split according to a formula discussed here: https://www.reddit.com/r/StableDiffusion/comments/1mkv9c6/wan22_schedulers_steps_shift_and_noise/?tl=fr
It was just too much of a pain to figure out; a rough sketch of the idea is below.
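For anyone who wants to attempt it: my reading of that thread (an assumption on my part, not a verified formula) is that the high-to-low handoff should land where the shifted sigma schedule crosses the model boundary (~0.875 for t2v, ~0.9 for i2v per the Wan2.2 release), rather than at an arbitrary 50/50 split. A rough Python sketch:

```python
# Rough sketch of the step-split idea from the linked thread -- my
# interpretation, not a verified formula. Assumes the standard
# flow-matching time shift: sigma' = shift*s / (1 + (shift-1)*s).

def shifted_sigmas(steps: int, shift: float = 8.0) -> list[float]:
    """Linear 1 -> 0 sigma schedule with the time shift applied."""
    sigmas = [1.0 - i / steps for i in range(steps + 1)]
    return [shift * s / (1.0 + (shift - 1.0) * s) for s in sigmas]

def split_steps(steps: int, shift: float = 8.0, boundary: float = 0.9) -> int:
    """How many steps to run on the high-noise model before handing
    off to the low-noise model. boundary: ~0.875 (t2v) / ~0.9 (i2v)."""
    for i, s in enumerate(shifted_sigmas(steps, shift)):
        if s < boundary:
            return i
    return steps

# e.g. 20 total steps at shift 8 -> the high-noise model runs the
# first split_steps(20) steps, the low-noise model runs the rest.
print(split_steps(20))  # prints 10 for boundary 0.9
```

In ComfyUI terms, that number would go into the end_at_step of the high-noise KSampler (Advanced) and the start_at_step of the low-noise one.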
Credit: Daxamur https://civitai.com/models/1853617