This is a simple WAN2.2 image-to-video and text-to-video workflow that I've pieced together and modified. SageAttention can be disabled, but it's highly recommended: it vastly improves generation times. This workflow has been my go-to. I've added both diffusion-model and GGUF loaders to the I2V and T2V sections; you will need to disable one or the other. To switch, simply connect the output of the model you want to use to the SageAttention patch node. Credit goes to @Akalabeth for the original workflow. This person is very talented, and I highly recommend giving them a follow.
Comments (28)
Doesn't work... wanBlockSwap error ('NoneType' object has no attribute 'clone'), but GGUF is installed.
How much system RAM do you have?
@dreadfulpirate 12 GB GPU + 32 GB system RAM
@atral340478 Hello. Are you using the T2V or I2V? Each section has two sets of base models, one set is the diffusion model and the other set is GGUF model. Can you make sure you've disabled/bypassed the unconnected model?
Working perfectly. Thank you
Works great for T2V, but I2V gives me this error: "cannot import name 'sageattn_qk_int8_pv_fp16_triton' from 'sageattention'". Tried a clean install of Comfy, Sage, and Triton, but no luck. And disabling Sage via the switch gives me this error: "'NoneType' object has no attribute 'clone'".
Hello. Are you using the T2V or I2V? Each section has two sets of base models, one set is the diffusion model and the other set is GGUF model. Can you make sure you've disabled/bypassed the unconnected model?
I ended up getting this to work by switching the Patch Sage Attention nodes to auto within the I2V block. Many generations later and it's working well. And just to answer the question: I was trying to do I2V, I was using the GGUF models, and I had disabled the other model nodes.
@miketheninja446 I'm glad you got it working. Thanks for the update.
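For anyone hitting the same import error, a rough diagnostic sketch (not part of the workflow) is to check whether your SageAttention install actually exposes the Triton kernel named in the error, and whether Triton itself imports. The kernel name below is taken straight from the error message above.

```python
# Rough check of what this SageAttention install exposes.
# If the Triton int8 kernel is missing, forcing that mode in the
# Patch Sage Attention node fails; "auto" picks a kernel that exists.
import importlib

try:
    sa = importlib.import_module("sageattention")
except ImportError as err:
    print(f"sageattention is not importable: {err}")
else:
    for name in ("sageattn", "sageattn_qk_int8_pv_fp16_triton"):
        print(f"{name}: {'found' if hasattr(sa, name) else 'missing'}")

try:
    import triton
    print(f"triton {triton.__version__} is importable")
except ImportError:
    print("triton is not importable")
```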
Enabled I2V
Disabled both Unet Loader (GGUF) nodes
Enabled and connected both diffusion model loaders (wan2.2_ti2v_5B_fp16.safetensors)
SageAttention enabled
KSamplerAdvanced fails with: "Expected all tensors to be on the same device, but got weight is on cpu, different from other tensors on cuda:0 (when checking argument in method wrapper_CUDA__native_layer_norm)"
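For context, that error usually means one tensor (here a layer-norm weight) stayed on the CPU while the rest of the computation ran on the GPU. A tiny PyTorch reproduction of the same message, assuming a CUDA machine (the shapes and names are illustrative, not taken from the workflow):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 8, device="cuda")  # activations on the GPU
w = torch.ones(8)                      # weight accidentally left on the CPU
b = torch.zeros(8)

try:
    F.layer_norm(x, (8,), weight=w, bias=b)
except RuntimeError as err:
    print(err)  # "Expected all tensors to be on the same device ..."

# Moving the weight and bias onto the same device clears it
out = F.layer_norm(x, (8,), weight=w.cuda(), bias=b.cuda())
print(out.device)  # cuda:0
```

That lines up with the restart workaround in the reply below: restarting ComfyUI reloads everything onto the expected device.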
I ran into this problem before. The only way I resolved it was by closing ComfyUI and restarting. Let me know if that works or not.
@dreadfulpirate I've got a feeling that I'm using the wrong model (non-GGUF)? Using GGUF, the output is... broken; the output is brown.
@durachell I will test it out on my end today.
@durachell I have not been able to generate any errors. What are your system specs?
@dreadfulpirate Thanks for the follow-ups. I don't think it's specs, but I have a 3060 with 12 GB VRAM. I'm new to GGUF, TI2V generation, and ComfyUI. Consider me a rookie.
@durachell Were you able to resolve the issue?
@dreadfulpirate may I know the version of comfyui you use?
@durachell Currently, ComfyUI Manager V3.39.2.
@dreadfulpirate Thanks for replying. I mean the base comfyui version, not the custom nodes or other nodes.
@durachell No problem, glad to help. Is this the information you needed?
ComfyUI 0.11.1
ComfyUI_frontend v1.37.11
LoRA Manager v0.9.14-stable
EasyUse v1.3.7
rgthree-comfy v1.0.2512112053
ComfyUI-Manager V3.39.2
System Info
OS: win32
Python Version: 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]
Embedded Python: true
Pytorch Version: 2.9.1+cu130
Arguments: ComfyUI\main.py --disable-auto-launch --force-upcast-attention
RAM Total: 31.94 GB
RAM Free: 19.32 GB
@dreadfulpirate OK, so I needed to reboot the machine to solve the node issues. All good. Thanks.
Works well, but for some reason the generated videos try to end close to the starting image even when I'm not prompting for it. E.g., if the starting image is a woman's face and my prompt is "a woman turns around" (simplified in this example, of course), the video will have her turn around and it should end there, but instead she turns back around and shows her face.
Any way I can see your results?
@dreadfulpirate Not sure how to upload results, but it has to do with the length of the video. At 8 seconds it always resets close to the starting image. Anything above 8 will have the video reset close to the starting image at 8 seconds and then do another second (if set to 9) or two (if set to 10) of animation. Setting it to 7 seconds avoids the 8-second reset. Not sure why it does that at 8 seconds or if there's a workaround.
@ceelogreen855 I will have to look into it. Personally, I've never tried generating 8 seconds of video. Also, I'm getting ready to post a new workflow; it has many improvements.
I just can't get it to work.
What's the problem?
A "nodes not found" error for these nodes:
JWIntegerDiv
JWFloatToInteger
JWImageResizeByLongerSide