Version 1.3 is out.
Revamped the face consistency fix.
Make sure to download https://github.com/kaaskoek232/IPAdapterWAN/archive/refs/heads/master.zip
Details are also included in the WF.
Reach out on Discord if you need help.
As for v1.3, these are all the custom nodes I'm using:
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/cubiq/ComfyUI_essentials
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/willmiao/ComfyUI-Lora-Manager
https://github.com/Smirnov75/ComfyUI-mxToolkit
https://github.com/ltdrdata/was-node-suite-comfyui
https://github.com/teward/ComfyUI-Helper-Nodes
https://github.com/kaaskoek232/IPAdapterWAN
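If you prefer installing the list above from the command line instead of the ComfyUI Manager, something like the following dry-run sketch can help (COMFY_DIR is an assumption — point it at your own install, review the printed lines, then run them):

```shell
# Dry-run sketch: print a git clone command for every custom node listed above.
# COMFY_DIR is an assumed path -- adjust it to your ComfyUI install.
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"
repos="
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/cubiq/ComfyUI_essentials
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
https://github.com/willmiao/ComfyUI-Lora-Manager
https://github.com/Smirnov75/ComfyUI-mxToolkit
https://github.com/ltdrdata/was-node-suite-comfyui
https://github.com/teward/ComfyUI-Helper-Nodes
https://github.com/kaaskoek232/IPAdapterWAN
"
cmds=""
for repo in $repos; do
  # clone each repo into its own folder under custom_nodes
  cmd="git clone $repo $COMFY_DIR/custom_nodes/$(basename "$repo")"
  echo "$cmd"
  cmds="$cmds$cmd
"
done
```

Printing first instead of cloning directly lets you check the target paths before anything touches your install.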
Version 1.2 is out.
This release comes with three additional features:
Face detailing with a BBOX detector.
Color matching to ensure the video's colors match the reference image.
Temporal/structural attention customizations.
Make sure to get some additional custom nodes:
https://github.com/rslosch/comfyui-nodesweet
https://github.com/ltdrdata/ComfyUI-Impact-Subpack
https://github.com/ltdrdata/was-node-suite-comfyui
https://github.com/teward/ComfyUI-Helper-Nodes
Make sure to check out my video extension workflow: https://civarchive.com/models/2035036/lazy-wan-22-v2v-video-extension-workflow
Join me on my Discord to ask questions, talk AI, and give feedback.
As requested by several people, here is my I2V workflow that allows me to generate all my videos.
This is a Wan 2.2 I2V workflow.
There are 6 custom nodes in my workflow; make sure to get them before using it.
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/cubiq/ComfyUI_essentials
https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
Description
Added Face consistency fix
Comments
We hope to add an upscaling plugin in the future to create high-resolution videos.
I still haven't figured out how to add lora to this fucking node!!! Why complicate things so much???
@dirtysem I use Lora Manager since I have too many LoRAs.
Just look for the "L" icon on the ComfyUI taskbar.
You literally just have to type the first letters of your LoRAs in the text box and select from the drop-down menu... if this is complicated, try a simpler workflow for now.
@Chriqro Thank you, I didn't know about that! On the latest version, one of the modeling sample nodes is red and won't work. I didn't get an error for it.
In your experience, what's the best WAN base model to work with (NSFW)?
@juliusmartin These days I'm mostly using https://civitai.com/models/2053259/wan-22-enhanced-lightning-edition-i2v-and-t2v-fp8-gguf?modelVersionId=2379693
Version 1.3 is awesome. The face consistency works great, thank you! One question: the videos are coming out in slow motion. How can I fix that?
I didn't check out v1.3 yet personally, but if it has interpolation in there, it basically doubles the frame count. If you use interpolation factor 2.0, you'll probably want to set a higher framerate at the end, like 32. Or just disable interpolation.
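The slow-motion effect can be sanity-checked with quick arithmetic. All numbers below are assumptions for illustration (81 frames at 16 fps, a x2 multiplier, and an endpoint-preserving interpolation formula), not the workflow's actual settings:

```shell
# Slow-motion sanity check -- all numbers are assumptions, not workflow defaults.
frames=81   # frames produced by the sampler (assumed)
fps=16      # framerate the clip was sampled at (assumed)
factor=2    # interpolation multiplier

out_frames=$(( (frames - 1) * factor + 1 ))        # roughly doubles the frame count
dur_at_16=$(( out_frames / fps ))                  # keep 16 fps -> clip plays ~2x longer
dur_at_32=$(( out_frames / (fps * factor) ))       # bump to 32 fps -> original speed
echo "${out_frames} frames: ${dur_at_16}s @ ${fps}fps vs ${dur_at_32}s @ $((fps * factor))fps"
```

So if a clip plays about twice as long as expected, either raise the output framerate to roughly fps x factor or disable the interpolation node.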
The IPAdapter install doesn't work; the link is broken on GitHub. Installing the normal way by cloning works for all the other links, but it still gives a missing-nodes warning for IPAdapterWANloader and applyIPAdapterWAN.
The clip vision model download link also doesn't work.
ComfyUI manager can't find these required nodes either.
@Chriqro Re-read what they wrote. Errors still show up even after the IPAdapter install is done by following the instructions. In my case this happens on a completely fresh ComfyUI install: the repo is cloned, the additional files are placed in their folders, but IPA is still not detected, and the console does not mention anything about the module.
@Chriqro I have the same problem; can you clarify how to install IPAdapterWAN? The GitHub instruction is obviously wrong because the address "your-username/ComfyUI-IPAdapter-WAN.git" doesn't exist. I tried just cloning the repo and putting it in the custom nodes folder. It still doesn't work.
@renochew try to download here
https://github.com/kaaskoek232/IPAdapterWAN/archive/refs/heads/master.zip
@Chriqro I downloaded the zip from this page, and it's the same as the link you sent. I copied the folder to the custom nodes folder, but it still isn't recognized. I believe I'm not the only one encountering this issue.
I am also having an issue where the ipadapter nodes are not being detected by comfy. I manually installed all other nodes with no issues, only this one is fucked
It works for me. I just downloaded the zip, extracted the folder into custom nodes, and removed the "-master" suffix from the folder's name. The folder should be called just "IPAdapterWAN".
@Liveon3 @artyenjoyer3638 @wqz0777640 I believe I solved it, at least for me. You need to make sure your folder isn't nested. But most importantly, the step I missed was:
Download the IP-Adapter weights:
ip-adapter.bin
Place it in: ComfyUI/models/ipadapter/
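The manual fix described in this thread (extract, un-nest and rename the folder, then place the weights) can be sketched roughly as follows. All paths are assumptions for a default install; master.zip and ip-adapter.bin are the files mentioned above, and each step is guarded so nothing happens if a file is missing:

```shell
# Sketch of the manual IPAdapterWAN fix -- paths are assumptions, adjust to yours.
COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"

# 1) Extract the downloaded zip (produces IPAdapterWAN-master/).
[ -f master.zip ] && unzip -q master.zip

# 2) The node folder must be named exactly IPAdapterWAN and must not be
#    nested one level deeper -- strip the "-master" suffix while moving it.
mkdir -p "$COMFY_DIR/custom_nodes"
[ -d IPAdapterWAN-master ] && mv IPAdapterWAN-master "$COMFY_DIR/custom_nodes/IPAdapterWAN"

# 3) Put the IP-Adapter weights where the loader expects them.
mkdir -p "$COMFY_DIR/models/ipadapter"
[ -f ip-adapter.bin ] && mv ip-adapter.bin "$COMFY_DIR/models/ipadapter/"

echo "done -- restart ComfyUI and check whether the IPAdapterWAN nodes load"
```

After this, restart ComfyUI; if the nodes are still red, check that the folder isn't nested one level deeper (i.e. custom_nodes/IPAdapterWAN/IPAdapterWAN).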
Problems encountered while cloning nodes:
info: please complete authentication in your browser...
remote: Repository not found.
fatal: repository 'https://github.com/your-username/ComfyUI-IPAdapter-WAN.git/' not found
I'm just getting acquainted with your workflow which works great, especially as a new user! I'm curious if it is possible to manually specify a reference image to maintain facial consistency?
@amjorgen22662 I didn't really include that in the I2V flow, as the first video generally doesn't have too much issue with face consistency. But I did include it in the video extension workflow. Is there a reason why you want to manually specify it?
@Chriqro I'm creating more than 1 video and splicing them together via ffmpeg. As there are more clips generated, I seem to have a harder time keeping consistency with the original source image. This is not a big issue at all. I was just curious if the concept I am describing is possible/exists. Thanks for responding =)
Additionally, I have an image where there is a subject obscured in the background that I would like to provide a reference face for when I bring them into focus. I may be shooting for the clouds and that's fine. I'm just having fun.
@amjorgen22662 Understood.
Re what you mention on the obscured subject: if your original source is an image,
from a workflow point of view it would be easier to add detail to their face via an i2i workflow first, before generation.
However, if you are referring more to adding the face detail in during the video, I think something like that might need a specific LoRA and not so much face consistency.
Interesting concept though; I have never tried it myself, but now I might give it a try to understand how this could work.
By the way, if you use Discord, you should join: https://discord.gg/TrB5PQR6mU
@Chriqro Yeah I haven't seen anything like it either. I'll keep my eyes peeled for a solution.
@Chriqro Does that mean the extension workflow is more advanced in terms of maintaining character consistency? So modding it into an I2V would create the optimal WF?
This is a great workflow. Thank you for your efforts.
Is it possible to manually set the 'seed' to maintain consistency?
If so, how can this be done?
In the process section if you expand the nodes named First Pass and Second Pass, the seeds are located there.
@pr1medebauchery573 Thanks to you, I found it! :-)
I tried it, and the character consistency holds up really well. This is the best workflow I've ever used. Thank you for sharing! Looking forward to a long-video workflow from you.
Struggling to get HelperNodes_SchedulerSelector to install. I keep clicking install and ComfyUI prompts me to restart the program, but once I do, it prompts me to install it again. I checked my nodes folder and it's in there, so I'm not sure what's happening.
Same problem here, did you fix it?
Hi!
Does IPAdapter_WAN actually change anything? From what I saw, it doesn't work. I mean, I tried it in different WFs. Yes, if the face is clearly visible in the first and last frame of every clip, the degradation is still there but less visible (try adding a couple of 33 sec clips and you'll see what degradation means). But native WAN can do that too, without the IPAdapter (which doesn't seem to work anyway).
How much RAM/memory does this use? I have a 4090 and it says I'm out of RAM. Where do you go to change the steps in this workflow? Or the sampler name or scheduler?
Is there a way to tweak settings or prompt different to make use of the longer video length slider without having it loop back to the original frame? Or is the only way right now to use the other V2V workflow to extend?
Fantastic workflow, and I love the face consistency nodes. Have you thought about adding two high passes and one low pass? I've heard that using three passes (two high, one low) gets better results. I was going to try adding a third pass to your workflow to test it myself, but I'm still learning how to build workflows.
I can’t add this LoRA to the LoRA Manager node at all.
When I try to type anything in the LoRA field, no characters appear.
I think I had the same problem. You have to click on the letter "L" to the left of "Manager", then download the LoRA from there (I get an error message, but when I click "Refresh" on the same page, the LoRAs are added). Try it.
Is there a way to bypass the consistency group? It doesn't seem to work with character LoRAs.
Hello, I'm using the Lazy WAN 2.2 I2V v1.3 workflow on Civitai.
I'm looking for the ipadapter_wan_faceid.safetensors file compatible with the IPAdapterWANLoader node.
Could you please provide me with the download link for the complete WAN package containing this file? Thank you!
Hi, I'm using your Lazy WAN 2.2 I2V workflow. I'm missing the ipadapter_wan_faceid.safetensors file. Could you share the WAN pack required for the workflow? Thank you!
Got it working eventually.
Make sure the requirements are met.
You need to upgrade certain libraries.
Make sure to adapt it to your structure, but in my case this was the command to update them using CMD/PowerShell:
D:\ComfyUI_portable\python_embeded\python.exe -m pip install --upgrade huggingface_hub diffusers einops
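For non-portable installs (venv or system Python) the same upgrade takes a generic form. The PYTHON variable below is an assumption — point it at whichever interpreter your ComfyUI actually uses; this sketch only prints the command so you can review it before running:

```shell
# Generic form of the upgrade above. PYTHON is an assumed variable --
# set it to the interpreter your ComfyUI runs on (for the portable build,
# that is the python_embeded\python.exe shown in the comment above).
PYTHON="${PYTHON:-python3}"
UPGRADE_CMD="$PYTHON -m pip install --upgrade huggingface_hub diffusers einops"
echo "Will run: $UPGRADE_CMD"
# Uncomment to actually run it:
# $UPGRADE_CMD
```

Upgrading into the wrong interpreter is a common failure mode with portable builds, which is why the command echoes the target first.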