YouTube Tutorial / Walkthrough:
Motion Brush Workflow for ComfyUI by VK!
Please follow the creator on Instagram if you enjoy the workflow!
https://www.instagram.com/vk.amv?igsh=YjU5M2Y4ZjRqam43&utm_source=qr
Check out our Twitch VOD on YouTube for Instructions!
Comments (45)
🔥🔥🔥
Working, but the output is all blurry, nothing like yours. Not sure why; I'll look into it, thanks.
Damn, if you figure it out please let me know. It could be any combination of things. What checkpoint / AnimateDiff model are you using?
@jboogx_creative @scraggydog I swapped the hires script settings to upscale_type → both, used the Remacri upscaler with a lower denoise (below 0.6), and that seems to have fixed the blurriness issue. I don't think I changed anything else; pretty much the same settings and models as the default workflow.
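For anyone editing the workflow by hand, the settings described above would sit on the hires script node. A rough sketch in ComfyUI's API-format JSON follows; the node ID, class name, and exact widget names are assumptions based on the comment, so check the actual field names in your own exported workflow:

```json
{
  "12": {
    "class_type": "HighRes-Fix Script",
    "inputs": {
      "upscale_type": "both",
      "pixel_upscaler": "4x_foolhardy_Remacri.pth",
      "denoise": 0.55
    }
  }
}
```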
@Pitpe11
Many thanks, that corrected the issue. I'm just running the skulls now (thx Tyler) as a test, and can already see the results coming through the KSampler are much better.
I've been scratching my head with this one, turning all kinds of settings with no joy, so huge thanks. I'm one of those people who finds it hard to rest when I have an issue I can't solve, and I'd never have got this one without you. Cheers.
@jboogx_creative Can confirm, https://civitai.com/user/Pitpe11 was spot on. I just did a test run with his settings and it ran great on PhotonLCM, so I assume it's good for any model now. I'll try Comicraft LCM next. Many thanks to you both for the work you put in. MUCH appreciated. Now the fun starts...
Hi all, when using Remacri I got this error message: model.diffusion_model.input_blocks.0.0.weight. Any idea? Thanks for your help.
Can't find LiquidAF-0-1.safetensors
Sorry, that's a custom-trained motion LoRA :) If you search for motion LoRAs on the site, you'll find a bunch to download to drive your motion!
Which ComfyUI folder do you put the IP-Adapter models in?
Amazing workflow, but has anyone managed to get a perfect loop out of this? I've been playing with context options, but it still leaves just a liiiittle to be desired compared to other flows! Other than that the results are 🔥🔥🔥
unfortunately the loop doesn't seem to be perfect :'(
Almost perfect... swap out the Context Options node for the "Looped Uniform" variant ;) Check my post for an example.
Error occurred when executing ADE_ApplyAnimateDiffModel: self must be a matrix
Can someone please help me fix this? I am using animatediffMotion_v15.ckpt as the model.
File "D:\ComfyUI Editstuite\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI Editstuite\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI Editstuite\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI Editstuite\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen2.py", line 121, in apply_motion_model
    load_motion_lora_as_patches(motion_model, lora)
File "D:\ComfyUI Editstuite\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1018, in load_motion_lora_as_patches
    patches[model_key] = (torch.mm(
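For context on that traceback: torch.mm requires both of its arguments to be strictly 2-D, so a motion-LoRA weight tensor with an unexpected shape (for example, one saved for a different motion-module version) produces exactly this "self must be a matrix" error. A pure-Python sketch of the same shape rule, for illustration only (this is not the actual torch code):

```python
def mm(a, b):
    """Strict 2-D matrix multiply, mimicking torch.mm's shape rules."""
    def ndim(x):
        # Count nesting depth of a list-of-lists "tensor".
        d = 0
        while isinstance(x, list):
            d += 1
            x = x[0]
        return d

    if ndim(a) != 2:
        raise RuntimeError("self must be a matrix")
    if ndim(b) != 2:
        raise RuntimeError("mat2 must be a matrix")
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must agree"
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# A 2-D x 2-D product works:
print(mm([[1, 2]], [[3], [4]]))  # [[11]]

# A non-2-D weight reproduces the error from the traceback:
try:
    mm([1, 2], [[3], [4]])
except RuntimeError as e:
    print(e)  # self must be a matrix
```

If the shapes don't line up, the practical fix reported elsewhere in this thread is to swap the motion model or motion LoRA for a compatible pair.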
I get this error:
'NoneType' object has no attribute 'lower'
Any idea how to solve this?
I solved it by changing the model.
Maybe you put in the wrong ckpt.
A tutorial for this that isn't just an hour-long livestream archive would be great. Text-based is my preference...
Can someone share the link to download the ClipVision sd1.5\pytorch_model.bin file used in this video demonstration, please?
@bin_bi thank you
There's no such file 250_wavpuls_BNX_r64_temporal_unet.safetensors
Yeah, that's a personal motion LoRA :)
Just gotta make / download your own!
@jboogx_creative Hello, may I ask whether "700_eyes_temporal_unet" and "fire_motion_mmv3_v1" were also trained by you?
@unlessnight367 Try searching for them yourself and find out.
@JudyWen Thanks for your reply, but I tried searching for them on Hugging Face and GitHub and didn't find them.
I think the unmasked part, like the face, is also changing. Can you give me some advice on this?
We need this to loop, so bad 😭 Looping is a must! 🙏🏻☹️
In the AnimateDiff Context Options node, pick the looped variant and set closed_loop to true.
@JudyWen Awesome thank you
@huwhitememes I looked for JudyWen's answer but still couldn't find it. Any additional info on how you found it would be most helpful, thanks! BTW, loved your Parrish-style piece!
@Streamshow You open the workflow and go to the blue "Output" group. There you'll find a node called "Context Options"; delete it. Then double-click to add a new node and search for "Context Options Looped Uniform". Connect it to "Use Evolved Sampling". In the "Context Options Looped Uniform" node, activate "closed_loop". Have fun :)
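In a ComfyUI API-format JSON export, the replacement node described above would look roughly like this. The node ID and the numeric context values are assumptions (use whatever your workflow already had); the key change is the class and closed_loop:

```json
{
  "7": {
    "class_type": "ADE_LoopedUniformContextOptions",
    "inputs": {
      "context_length": 16,
      "context_stride": 1,
      "context_overlap": 4,
      "closed_loop": true
    }
  }
}
```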
Missing Clipvision model?
I downloaded everything and assume it's all in the correct folders, but I'm still getting this error.
Can someone help? What do I have to change, and where, to correct this error? Thank you!
Can we please get a new update for this workflow? This workflow is unique; if it could be made better, that would be great.
I'll take a look at it again. I wasn't originally the one who put it together, but I'll see if we can optimize it somehow.
@jboogx_creative So looking forward to this!👌😎
Excellent - any updates?
The hires fix results are very blurry; please update the workflow's hires fix settings.
How do you upscale this kind of video?
topaz video upscaler works for me
Missing Node Types: when loading the graph, the following node types were not found: FileNamePrefix. Can someone help? What and where do I have to change to correct this error? Thank you!
Solved with Claude AI >:-)