Overview
This is a simple, straightforward workflow that I use to generate the videos I've shared on my account. Out of the box, this workflow works well on PCs with 11GB of VRAM and 64GB of RAM. I've included notes in the workflow with helpful tips, including ways to reduce VRAM usage, suggested settings, and quick references. If you have any questions, feel free to ask.
4 Versions
Tiny Image and Video - My default workflow that does both image and video.
I2V Comfy Core - The same image/video workflow, but it doesn't require any custom nodes.
Full Video - A video-only workflow with the full node layout.
Tiny Video - A very simple, tidy version of the full video workflow.
LoRAs
https://huggingface.co/lightx2v/Wan2.1-I2V-14B-720P-StepDistill-CfgDistill-Lightx2v/tree/main/loras
Negative Prompt Translation
Vibrant colors, overexposure, static, blurred details, subtitles, style, artwork, painting, still image, overall grayness, worst quality, low quality, JPEG compression residue, ugly, mutilated, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still image, cluttered background, three legs, crowded background, walking backwards,
Description
This version of my workflow uses the exact same setup, but it has been simplified with a subgraph node to help clean things up.
FAQ
Comments (14)
Some custom nodes are missing. How do I find them? Please help.
Hello, if you have ComfyUI Manager installed, you can open it and click "Install Missing Custom Nodes". Alternatively, you can use my Comfy Core workflow, which doesn't require any custom nodes.
How can I get better fidelity? The output feels somewhat low quality.
A few things depend on how strong your PC is. Some options include using a higher resolution, using WAN 2.2's bigger models, increasing the number of steps, and lowering the LoRAs' strength or removing them. This workflow is specifically designed for fast generation with low VRAM, so if you want higher quality, you'll be looking at a longer generation time.
Hey, been using this workflow for a while and it's great, thanks. If you didn't see, the new Lightx2v lora dropped. Have you tried it out yet?
Glad you're enjoying it. I'm testing the new LoRA along with a potential video-extension setup via WAN Animate. If the LoRA or the video extension turn out to be good options, I'll update the current workflows and create a separate workflow for extending videos.
@HungryBoba
I haven't had time to test it much, but the suggestion from the thread below, using the new Lightx2v LoRA for the high-noise model and rCM for the low-noise model, seems good at first glance.
https://www.reddit.com/r/StableDiffusion/comments/1o67ntj/new_wan_22_i2v_lightx2v_loras_just_dropped/
@HungryBoba Did you get a chance to create a separate workflow? I am using yours and it's so clean. Thanks for your help!
Very underrated. This workflow has given me some of the absolute best results so far.
Finally a workflow that's simple to setup and works. Massive thank you man.
Happy to help, enjoy!
@HungryBoba Would you ever consider making a workflow that extends these videos? I already have one that uses WAN 2.2 Animate, but I find I lack the experience to get the best results. (Each extended clip slows the video down and the saturation drops.) The workflow I use is insanely complex for someone at my level.
In my opinion, your workflows are genuinely simple to use right away and come with great settings.
@nostela Unfortunately, that's one issue most people can't resolve, as there's usually some caveat such as degradation or motion not syncing up. I personally work around the limitation by cutting to different camera views, similar to any film/TV show you might watch, instead of trying to force the model to do something it struggles with.
@HungryBoba Have you tried SVI Pro 2.0? It's quite good.