Version 1.3 is out.
Revamped face consistency fix.
Make sure to download https://github.com/kaaskoek232/IPAdapterWAN/archive/refs/heads/master.zip
Details are also included in the workflow.
Reach out on Discord if you need help.
As of v1.3, these are all the custom nodes I'm using:
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/cubiq/ComfyUI_essentials
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/willmiao/ComfyUI-Lora-Manager
https://github.com/Smirnov75/ComfyUI-mxToolkit
https://github.com/ltdrdata/was-node-suite-comfyui
https://github.com/teward/ComfyUI-Helper-Nodes
https://github.com/kaaskoek232/IPAdapterWAN
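The list above can be installed in one go from a terminal. This is only a sketch, assuming a standard ComfyUI install where custom nodes live in a `custom_nodes` folder; the COMFYUI_DIR default is my assumption, so adjust it to your install path (nodes can also be installed through ComfyUI-Manager instead).

```shell
#!/bin/sh
# Sketch: clone each custom node repo used by v1.3 into ComfyUI's
# custom_nodes folder. COMFYUI_DIR is an assumed default -- override it
# if your ComfyUI lives elsewhere.
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
cd "$COMFYUI_DIR/custom_nodes" || exit 1

for repo in \
    https://github.com/pythongosssss/ComfyUI-Custom-Scripts \
    https://github.com/yolain/ComfyUI-Easy-Use \
    https://github.com/kijai/ComfyUI-KJNodes \
    https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite \
    https://github.com/cubiq/ComfyUI_essentials \
    https://github.com/Fannovel16/ComfyUI-Frame-Interpolation \
    https://github.com/willmiao/ComfyUI-Lora-Manager \
    https://github.com/Smirnov75/ComfyUI-mxToolkit \
    https://github.com/ltdrdata/was-node-suite-comfyui \
    https://github.com/teward/ComfyUI-Helper-Nodes \
    https://github.com/kaaskoek232/IPAdapterWAN
do
    git clone "$repo"
done
```

After cloning, restart ComfyUI so the new nodes are picked up; some node packs also need their Python requirements installed (check each repo's README).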
Version 1.2 is out.
This release comes with 3 additional features.
Face detailing with BBOX detector.
Color matching to ensure the video's colors match the reference image.
Temporal/Structural attention customizations.
Make sure to get some additional custom nodes:
https://github.com/rslosch/comfyui-nodesweet
https://github.com/ltdrdata/ComfyUI-Impact-Subpack
https://github.com/ltdrdata/was-node-suite-comfyui
https://github.com/teward/ComfyUI-Helper-Nodes
Make sure to check out my video extension workflow: https://civarchive.com/models/2035036/lazy-wan-22-v2v-video-extension-workflow
Join me on my Discord to ask questions, talk AI, and give feedback.
As requested by several people, here is my I2V workflow that allows me to generate all my videos.
This is a Wan 2.2 I2V workflow.
There are 6 custom nodes in my workflow; make sure to get them before using it.
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/cubiq/ComfyUI_essentials
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
Description
Adding face detailing and structural/temporal attention adjustments
FAQ
Comments (16)
Where can I download the model wan22enhancedlightning_wan22Hight.safetensors?
Couldn't get this to work at all, even with the default settings and the exact models. It only produced black screens or grey static output.
Edit - How to fix the workflow:
The bad output problem appears to be the structure nodes. Disabling them fixes the output.
Replace the clean and clear cache nodes hiding under the video output node with VRAM Debug from kjnodes.
The clip model and the lightning models are not entirely compatible. While it will still work as-is, switching to the normal non-nsfw clip model works just as well and doesn't throw errors in the console.
@ultimo_intento
Please share your system specs and ComfyUI launch flags, and make sure the CLIP model and the diffusion model fit in memory.
Me neither. It wouldn't even run, compared to the previous version.
@bigjohnsono0o192 Please make sure you get all the additional custom nodes.
@Chriqro The output problem appears to be the structure nodes. Disabling them gives the expected output.
There's another issue where the workflow can run once, but if you switch to another image and set the quality to 100% (e.g. you switch from a big image to a small image), it throws an error at the math node and hangs forever.
There's also an issue with compatibility between the clip model and the lightning models. While it will still work, switching to the normal non-nsfw clip model works just as well and doesn't throw errors in the console.
Also, the workflow won't release all the memory at the end so I replaced the clean and clear cache nodes hiding under the video output with VRAM Debug from kjnodes (setting everything to true), and that fixes the memory issues. For reference, I have 128GB of VRAM, and my startup script is aimed at proactive memory recycling, so it's definitely a workflow problem, as I run similar styles of wan wfs without hitting OOM, endlessly.
Can't run the new version; 1.1 works fine though.
Nice work, thanks for this beautiful WF.
Thanks for your awesome work. I'm confused: when running the workflow, it logs: unet missing: ['text_embedding.0.scale_input', 'text_embedding.2.scale_input', 'time_embedding.0.scale_input', 'time_embedding.2.scale_input', 'time_projection.1.scale_input', 'blocks.0.self_attn.q.scale_input', 'blocks.0.self_attn.k.scale_input'...
It still works and outputs a video, but the quality is really bad, not as good as your promo video. Could you give me some advice?
High resolution, high frame rate, smaller chunking... probably
This is by far the best workflow I have used: dead simple, it works, and it produces videos just as great as all the others that are so immensely more complex. One thing though: this last update is not working for me. It produces videos that may start well but end up all blurry. Also, I couldn't find the enhanced lightning version of Wan 2.2 you have in it; maybe that's happening because I switched back to normal Wan? Please post where to get these variants so I can try that out. Thanks, and thanks for sharing and saving us so much time!
@8bitglam The Lightning 4-step LoRA got disabled by default in 1.2.
Please enable the Lightning LoRA node and that will resolve your issue.
I'm having difficulty installing Helper-Nodes; the repo looks archived. When I copy it into custom_nodes, it isn't detected in the Lazy workflow (the other nodes installed successfully). https://github.com/teward/ComfyUI-Helper-Nodes
Right, and how is anyone using this workflow if those nodes have been dead since July 2024? Nobody answers this. Something is wrong with this workflow.
I had the same issue, and I was able to fix it working with Grok in just a couple of prompts. Basically it comes down to: upgrade pip, install docopt, install whratio. I don't know if you can do those on your own or if Grok can walk you through it; I can share the Grok convo in a DM if you need it.
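For anyone hitting the same Helper-Nodes problem, the fix reported above boils down to three pip commands. This is a sketch of the commenter's steps, not an official fix; the package names docopt and whratio are taken from that comment, and you should run this in the same Python environment ComfyUI uses (e.g. its venv or embedded Python).

```shell
# Reported fix for Helper-Nodes not being detected:
# upgrade pip, then install the two missing dependencies.
python -m pip install --upgrade pip
python -m pip install docopt whratio
```

If ComfyUI ships its own embedded Python (the Windows portable build does), swap `python` for that interpreter's path so the packages land in the right environment, then restart ComfyUI.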