WELCOME TO THE JBOOGX & MACHINE LEARNER ANIMATEDIFF WORKFLOW!
Full YouTube Walkthrough of the Workflow:
1/8 UPDATE
Added ReActor Face Swap to Low-Res & Upscaler. Added Bypass / Enable Toggle Switches using rgthree node pack.
12/7 UPDATE
About this version
v2 brings a few quality of life changes and updates.
I've separated all of the ControlNets into individual groups for you so that you can easily bypass the ones you don't want to use.
In the IPAdapter, please download and place this file in your comfyui\clip_vision directory. This is for the 'LOAD Clip Vision' node.
https://drive.google.com/file/d/13KXx6u9JpHnWdemhqswRQJhVqThEE-7q/view?usp=sharing
You can find the IPAdapter Plus 1.5 model for the 'LOAD IPAdapter Model' node here.
https://github.com/cubiq/ComfyUI_IPAdapter_plus
If you don't want to upscale, then bypass all Upscale groups on the bottom right.
That should be it :)
Please tag me in anything you make using the workflow and I will share on my social!
@jboogx.creative on Instagram
---------------------------------------------
DISCLAIMER: This is NOT beginner friendly. If you are a beginner, start with @Inner_Reflections_Ai vid2vid workflow that is linked here:
https://civarchive.com/articles/2379/guide-comfyui-animatediff-guideworkflows-including-prompt-scheduling-an-inner-reflections-guide
After many requests, I have decided to share this workflow that I use on my streams publicly. This is capable of the following....
-------------------------------------------------
Vid2Vid + ControlNets - Bypass these nodes when you don't want to use them, and add any ControlNets and preprocessors you need. The ones included are my go-to's.
Latent Upscaling - When not upscaling during testing, make sure to bypass every upscaling group as well as the latent upscale Video Combine node.
A 2nd ControlNet pass during Latent Upscaling - Best practice is to match the ControlNets you used in the first pass, with the same strength & weight.
Multiple Image IPAdapter Integration - Do NOT bypass these nodes or things will break. Insert an image in each of the IPAdapter Image nodes on the very bottom, and when not using the IPAdapter as a style or image reference, simply turn the weight and strength down to zero. This will essentially turn it off.
QR Code Illusion Renders - To do this, use a black and white alpha as your input video and use QR Code Monster as your only ControlNet.
-------------------------------------------------
This was built off of the base Vid2Vid workflow that was released by @Inner_Reflections_AI via the Civitai article. The contributors who helped me with various parts of this workflow and got it to the point it's at are the following talented artists (their Instagram handles)...
@lightnlense
@pxl.pshr
@machine.delusions
@automatagraphics
@dotsimulate
Without their help, this would not have provided the many hours of video-making enjoyment it has for me. I am not the most technically gifted person at this, so any input from the community or tweaks to further improve upon it would be greatly appreciated (that's really why I want to share it now).
There may be some node downloading needed, all of which should be accessible via the ComfyUI Manager (I think). You can bypass any number of the LoRAs, ControlNets, and Upscaling groups as needed for what you are currently working on. Having intermediate to advanced knowledge of the nodes will help you resolve any errors you may get as you're turning things on and off. If you're a total beginner, I would recommend starting with @Inner_Reflections_AI's base Vid2Vid workflow, as it was the only way I was able to understand Comfy in the first place.
The zip file includes both a workflow .json file and a .png that you can simply drop into your ComfyUI workspace to load everything. Be prepared to download a lot of nodes via the ComfyUI Manager.
Any issues or questions, I will be more than happy to attempt to help when I am free to do so :)
If you make cool things with this, I would love for you to tag me on IG so I can share your creations. Also, a follow is greatly appreciated if you get any value from this or my Vision Weaver GPT :) If you use it and like it, leave me a dope review and throw me some stars!
@jboogx.creative
Description
Tweaked IPAdapter so that you can use multiple images and crop into different parts of the image to improve stylization.
FAQ
Comments (28)
Thank you for sharing your workflow first.
I have some questions:
- Why do you upload a video in your IPAdapter group that you convert to a mask? Where do you connect your Output Mask then?
- Why are you uploading 4 IPAdapter images? Why so many?
- Why are you using a Zoe Depth Map for your QR Code Monster preprocessor?
And if I understand correctly, your upscaler is not an upscaler but a parallel workflow where you choose a different resolution for each of your ControlNets AND your image output.
Your results are amazing, that's why I try to understand.
THANK YOU SO MUCH!
1. The mask can connect to the attn_mask input in the IPAdapter one level above. This can be used if you provide an alpha mask for your video; the IPAdapter will then only apply the diffusion within the alpha mask.
2. Because sometimes I use 4 separate images to drive my results. Variety is the spice of life :p If I only use one, I put the same image in all four.
3. This is actually not needed and I should have deleted it and just connected that IMAGE input directly to inputs in box 1a.
4. Yes. I'm running Controlnets again so I can diffuse high in the upscale and maintain the form of the 1st pass.
I am getting this error when loading the workflow:
"Loading aborted due to error reloading workflow data
This may be due to the following script:
/extensions/core/widgetInputs.js"
Same issue, but I re-downloaded the model and then it worked. I think maybe the model had some error during download.
I'm so stumped right now. For the last 24hrs I haven't been able to work this out. I'm unable to install ComfyUI-Advanced-ControlNet, which is one of the required nodes. Every time I try, it says Import Failed when I check the command prompt during loading. Please advise.
Thanks for sharing! After a bit of pain, managed to start to get decent output but temporal coherence remains an issue, as does limbs vanishing when in 'front' of the body. Any suggestions for either of these issues?
(For limbs, i've tried various CN combinations with very high weight values (openPose @1 for example), with no luck....) Thanks again!
Limbs in front I feel will always be an issue, but I get great results using SoftEdge + Lineart and NOT OpenPose :) Preferences :p
@jboogx_creative ok thanks! Tried that and you're right, it works pretty well. Crazy that OpenPose is not always a benefit...
The zip file only contains a png file and some macOS folder... do I miss something or did you? ;)
Ah! I missed it! Had to open the png in notepad++ to understand that the workflow is embedded in the png. awesome :)
where i can find JSON file?
Just download and drop the PNG into the ComfyUI page! The PNG is the workflow / json :)
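If you'd rather have the .json on disk, ComfyUI stores the workflow inside the PNG's metadata. Here's a minimal stdlib sketch that pulls it back out, assuming the workflow is saved in a tEXt chunk under the "workflow" keyword (which is how ComfyUI typically embeds it) — the filenames in the usage comment are placeholders:

```python
import json
import struct

def extract_png_text_chunks(data: bytes) -> dict:
    """Parse a PNG byte stream and return its tEXt chunks as {keyword: text}."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks = {}
    pos = 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt layout: keyword, NUL separator, then the text payload
            keyword, _, text = body.partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + body + 4 (CRC)
        if ctype == b"IEND":
            break
    return chunks

# Usage (hypothetical filenames):
# with open("workflow.png", "rb") as f:
#     meta = extract_png_text_chunks(f.read())
# with open("workflow.json", "w") as f:
#     json.dump(json.loads(meta["workflow"]), f, indent=2)
```

Dragging the PNG into ComfyUI does the same thing automatically; this is only useful if you want the raw .json for version control or editing.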
Thanks for your sharing.
I got this error. Can you give me some advice?
Are you on a Mac m1? I have the same error and on a Mac
I think this is the error you get if you are attempting to use an SD 1.5 model with SDXL, or vice versa? I could be wrong though. I'm sorry I can't give you an exact answer.
@jboogx_creative I am using the same as you in the video. I downloaded all the files and kept everything the same, and I'm still getting the error. Having looked up this issue, many are talking about a yaml file. Now I am not a tech by any means, but it might be worth installing it. Although I am going via RunDiff, so I am not sure if it's the same. I would literally pay a tech head to set this up for me at this point LOL
I seem to be getting out of memory (system RAM) problems at the last stage. Not sure if it's at VAE decode or at the final stage where it stitches the frames together. For processing something like 1000 frames at 540x960, what is the recommendation in terms of RAM? Does anyone know exactly which node needs so much RAM and if there are adjustments that can be made to reduce its usage?
any luck with this I keep having the same problem
Are you using the Face Restore node at the end right before the output or is it bypassed? Ends up that this particular node will cause this to happen. Took me a while to figure out. Not the ReActor node, the smaller FaceRestore model attached to the top of it.
I'd recommend leaving it bypassed or just deleting it completely and only using the ReActor Face Swap on the upscale.
This node works fine on the first pass though.
@boxvisualsent284 See if my reply above applies to you, Box
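For sizing RAM on long renders, a rough back-of-envelope helps. Assuming all decoded frames are held in memory at once as float32 RGB tensors (a simplification — actual peak usage depends on which nodes buffer what), the 1000-frame 540x960 case above works out like this:

```python
# Back-of-envelope RAM estimate for decoded frames held in memory.
# Assumptions (not measured): float32 RGB, all frames resident at once.
frames = 1000
width, height = 960, 540
channels, bytes_per_value = 3, 4  # RGB, float32

frame_bytes = width * height * channels * bytes_per_value
total_gib = frames * frame_bytes / 2**30
print(f"~{total_gib:.1f} GiB just for the decoded frames")  # ~5.8 GiB
```

So the frames alone want roughly 6 GiB, before the model, latents, or any node-level copies — which is why the VAE decode / Video Combine stage is where long runs tend to fall over, and why capping frame counts for tests is worthwhile.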
Can you tell me what the first step to learning is? Or do you teach a class? I am a step-by-step person.
1. Visit the article I linked in the description!
2. Come to our Civitai Twitch streams every Thursday at 3pm PST, where I go through AI video & animation! Our Twitch link is at the bottom of the website!
I also just included a link to a 40-minute YouTube walkthrough I created!
Hello! I'm getting an error when loading workflow. I have installed and updated all the IP adapter nodes and ComfyUI available in the manager. But I still have an unidentified element. Perhaps I can find this somewhere?
"When loading the graph, the following node types were not found:
IPAdapter
Nodes that have failed to load will show as red on the graph."
Ensure that you have installed IPAdapter_Plus and not just IPAdapter
@jboogx_creative I have the custom IPAdapter and IPAdapter_plus nodes installed, but the node in the workflow still glows red.
Just a simple question: how do I make only a preview video using your workflow? Let's say 50 frames or 30 frames.
Change frame_load_cap to 50 or 30.
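Normally you just change the frame_load_cap widget in the Load Video node, but if you drive the workflow from a script, the same cap can be set in the API-format JSON. A hypothetical sketch, assuming the API export's shape (each node keyed by id with an "inputs" dict) and the "frame_load_cap" input name used by the VHS Load Video node:

```python
import json

def set_frame_load_cap(workflow: dict, cap: int) -> int:
    """Set frame_load_cap on every node that exposes it; return the count changed."""
    changed = 0
    for node in workflow.values():
        inputs = node.get("inputs", {})
        if "frame_load_cap" in inputs:
            inputs["frame_load_cap"] = cap
            changed += 1
    return changed

# Usage (hypothetical filename):
# with open("workflow_api.json") as f:
#     wf = json.load(f)
# set_frame_load_cap(wf, 50)  # preview render: only the first 50 frames
```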