This workflow is the cleaned-up version of what I generally use when I'm rendering videos with Wan 2.2 or 2.1. It includes the options and settings I most commonly use, so it's obviously tailored to my choices and preferences.
If you're an experienced ComfyUI user, then I hope you can take away some useful ideas from this template. If you're new to ComfyUI or Wan video rendering, I've included lots of notes and documentation about the options and features in this workflow, so it may work well for you as a more advanced choice over the basic starter workflows.
This workflow is designed with all the functional parts out in the open so you can follow the process and change things around if you want. I often use this workflow when I'm experimenting with new custom nodes or different processes and it works pretty well for that purpose, too.
I hope you find this workflow useful, and good luck with your video creation!
Comments (14)
custom nodes manager can't seem to find the photograin node. looking forward to trying this workflow out
Ugh. That's frustrating. I can't check whether the node is available for installation in Manager since I already have the node installed on my system. I'll get the installation notes added to the workflow.
Thanks for letting me know!
@cruffin999111 After removing the custom node from my ComfyUI install, I was able to find and install that node from the Manager without any trouble. The custom node package is called "Advanced Photo Grain" and I'm using the latest release version, 1.4.0. If your Manager can't find it, you can use git clone to install it directly into your custom nodes folder from the GitHub project source: https://github.com/tritant/ComfyUI-Advanced-Photo-Grain
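In case the manual route is unfamiliar, a typical custom-node install looks roughly like this (the ComfyUI path and the requirements step are assumptions, adjust them for your setup):

```shell
# Manual install of the Advanced Photo Grain custom node.
# Adjust the path to wherever your ComfyUI install lives.
cd ComfyUI/custom_nodes
git clone https://github.com/tritant/ComfyUI-Advanced-Photo-Grain

# If the package ships a requirements.txt, install its dependencies too:
# pip install -r ComfyUI-Advanced-Photo-Grain/requirements.txt
```

Then restart ComfyUI so it picks up the new node.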
@darkroast175696 Excellent, working on that now, thx for all of your hard work!!
I just wanted to use this opportunity to say thank you.
I've used your workflows in the past to build off of for my own needs.
They're clean and easy to comprehend, and I'm sure this one will be as useful to others as they have been for me in the past.
Thanks very much! I'm glad all those comments and layouts were helpful to someone. I've always found it frustrating to learn from a workflow where everything is super tightly packed and all the logic is hidden away underneath things, so I try to do the opposite in my workflows. Besides, I'm always messing with it later and I'd rather work on a system where the whole engine is right there where I can see it.
I got curious about the cache node and installed it, but the settings only let me set "skip_interval" to a minimum of 2, instead of the 1 in your file. I assume the latest version changed that minimum value, and now I'm not sure how much this influences the output.
Also, I got a "The size of tensor a (100) must match the size of tensor b (64) at non-singleton dimension 4" error while trying to process an 800x1200 image, but I'm still testing to see if it's 100% this node's fault. (It wasn't happening before I introduced the node into my workflow.)
Just to give context to the relevant flow:
PainterI2VAdvanced > Cache > KSamplers, with lightx2v LoRAs, 2 high steps, 3 low steps.
If the workflow has the skip interval set to 1, then that's a mistake on my part; it should be 2 at minimum. I'm using a warm-up of 2 and a skip interval of 2 right now with the lightx2v lora and it works well.
I've made some 800x1200 videos with the cache node running, so I know it's possible. I'll see if I can post a video soon so you can reference that workflow in case it helps.
I'm still a beginner with ComfyUI and haven't been able to make this workflow work for me. I was wondering if someone could help me troubleshoot.
I've downloaded every node and followed every instruction. Everything runs smoothly, but the output is a weird canvas of shapes and looks like nothing. I'm running on an RTX 3060 with 12GB of VRAM.
Here are some screenshots of my setup, and of a video output.
https://imgur.com/a/tj9E7oU
I'm really hoping to make this work.
@Necessary_Editor It looks like you're using Image-to-Video gguf models instead of Text-to-Video models. You need to download T2V gguf files instead of the I2V files you have listed in the workflow.
@darkroast175696 Worked! Thanks. Honestly, a stupid error.
@Necessary_Editor No problem. You mentioned you were new to ComfyUI, and it's a lot to keep track of until you get used to it. Have fun!
Would this workflow work on my laptop, which has 5GB of VRAM?
I don't know, but I doubt it, unless you make some additional changes for a low-VRAM setup. My system has 16GB and I don't know what you would need to do to make videos with under 6GB. You'd certainly have to find some small gguf versions of the models, and you'd probably need to do some block swapping using a special node for that. If I were you, I would search for low-VRAM workflows here or on YouTube.
The main thing you could take from this workflow is the Cache DIT node. It speeds up generation a LOT and, as far as I can tell, it doesn't seem to change the output at all, so once you find a low-VRAM workflow that works for you, use the info here to add the Cache DIT node to that workflow. It'll probably help you out.