CivArchive
    JBOOGX & MACHINE LEARNER ANIMATEDIFF WORKFLOW - Vid2Vid + ControlNet + Latent Upscale + Upscale ControlNet Pass + Multi Image IPAdapter + ReActor Face Swap - v3.0
    NSFW

    WELCOME TO THE JBOOGX & MACHINE LEARNER ANIMATEDIFF WORKFLOW!

    Full YouTube Walkthrough of the Workflow:



    1/8 UPDATE
    Added ReActor Face Swap to Low-Res & Upscaler. Added Bypass / Enable Toggle Switches using rgthree node pack.

    12/7 UPDATE
    About this version

    v2 brings a few quality of life changes and updates.

    I've separated all of the ControlNets into individual groups so that you can easily bypass the ones you don't want to use.

    For the IPAdapter, download the file below and place it in your comfyui\clip_vision directory. It is used by the 'LOAD Clip Vision' node.

    https://drive.google.com/file/d/13KXx6u9JpHnWdemhqswRQJhVqThEE-7q/view?usp=sharing

    You can find the IPAdapter Plus 1.5 model for the 'LOAD IPAdapter Model' node here.

    https://github.com/cubiq/ComfyUI_IPAdapter_plus
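    The two model placements above can be sketched as a small path helper. The directory names follow the article's instructions and are assumptions about a default ComfyUI install (newer installs may use models/clip_vision instead); the filename in the example is hypothetical.

    ```python
    from pathlib import Path

    # Directory names are assumptions based on the instructions above.
    MODEL_DIRS = {
        "clip_vision": Path("clip_vision"),         # 'LOAD Clip Vision' node
        "ipadapter": Path("models") / "ipadapter",  # 'LOAD IPAdapter Model' node
    }

    def model_destination(comfy_root: str, kind: str, filename: str) -> Path:
        """Where to place a downloaded model file for this workflow."""
        return Path(comfy_root) / MODEL_DIRS[kind] / filename

    # e.g. model_destination("C:/ComfyUI", "clip_vision", "clip_vision_sd15.safetensors")
    ```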

    If you don't want to upscale, then bypass all Upscale groups on the bottom right.

    That should be it :)

    Please tag me in anything you make using the workflow and I will share on my social!

    @jboogx.creative on Instagram
    ---------------------------------------------
    DISCLAIMER: This is NOT beginner friendly. If you are a beginner, start with @Inner_Reflections_Ai vid2vid workflow that is linked here:

    https://civarchive.com/articles/2379/guide-comfyui-animatediff-guideworkflows-including-prompt-scheduling-an-inner-reflections-guide

    After many requests, I have decided to share this workflow that I use on my streams publicly. It is capable of the following:

    -------------------------------------------------

    1. Vid2Vid + ControlNets - Bypass these nodes when you don't want to use them, and add any ControlNets and preprocessors you need. The ones included are my go-tos.

    2. Latent Upscaling - When not upscaling during testing, make sure to bypass every upscaling group as well as the latent upscale Video Combine node.

    3. A 2nd ControlNet pass during Latent Upscaling - Best practice is to match the ControlNets you used in the first pass, with the same strength & weight.

    4. Multiple Image IPAdapter Integration - Do NOT bypass these nodes or things will break. Insert an image in each of the IPAdapter Image nodes on the very bottom. When not using the IPAdapter as a style or image reference, simply turn the weight and strength down to zero; this essentially turns it off.

    5. QR Code Illusion Renders - To do this, use a black and white alpha as your input video and use QR Code Monster as your only ControlNet.

      -------------------------------------------------
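    The "turn the weight down to zero" advice in point 4 can be scripted against an exported workflow .json. This is a sketch: the node `type` string and the widget layout (first `widgets_values` entry being the weight) are assumptions about the exported JSON, so inspect your own file before relying on it.

    ```python
    def zero_ipadapter_weights(workflow: dict) -> int:
        """Turn IPAdapter influence 'off' by zeroing its weight.

        Assumes a top-level "nodes" list where each node has a "type" string
        and a "widgets_values" list whose first entry is the weight (an
        assumption about the exported JSON). Returns the number of nodes changed.
        """
        changed = 0
        for node in workflow.get("nodes", []):
            if "IPAdapter" in node.get("type", ""):
                widgets = node.get("widgets_values") or []
                if widgets:
                    widgets[0] = 0.0  # zero weight ~ disabled, per the note above
                    changed += 1
        return changed
    ```

    You would load the workflow with `json.load`, run this, and save it back; doing it by hand in the UI is equivalent.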

    This was built off of the base Vid2Vid workflow released by @Inner_Reflections_AI via the Civitai article. The talented artists who helped me with various parts of this workflow and got it to the point it's at are the following (their Instagram handles):

    @lightnlense

    @pxl.pshr

    @machine.delusions

    @automatagraphics

    @dotsimulate

    Without their help, this would not have provided the many hours of video-making enjoyment it has for me. I am not the most technically gifted person at this, so any input from the community, or tweaks to further improve upon it, would be greatly appreciated (that's really why I want to share it now).

    There may be some node downloading needed, all of which should be accessible via the ComfyUI Manager (I think). You can bypass any number of the LoRAs, ControlNets, and Upscaling groups as needed for what you are currently working on. Intermediate to advanced knowledge of the nodes will help you deal with any errors you may get as you're turning things on and off. If you're a total beginner, I recommend starting with @Inner_Reflections_AI's base Vid2Vid workflow, as it was the only way I was able to understand Comfy in the first place.

    The zip file includes both a workflow .json file as well as a png that you can simply drop into your ComfyUI workspace to load everything. Be prepared to download a lot of Nodes via the ComfyUI manager.

    Any issues or questions, I will be more than happy to attempt to help when I am free to do so 🙂

    If you make cool things with this, I would love for you to tag me on IG so I can share your creations. A follow is also greatly appreciated if you extract any value from this or my Vision Weaver GPT 🙂 If you use it and like it, leave me a dope review and throw me some stars!

    @jboogx.creative

    Description

    Added ReActor Face Swap to Low-Res & Upscaler. Added Bypass / Enable Toggle Switches using rgthree node pack.


    Comments (39)

    zimbora69 · Jan 8, 2024 (CivitAI)

    I'm getting the following error message after installing ReActorFaceSwap. I used the Manager extension to download that missing dependency.


    When loading the graph, the following node types were not found:

    ReActorFaceSwap

    Nodes that have failed to load will show as red on the graph.




    Thanks for sharing this tho!

    jboogx_creative
    Author
    Jan 8, 2024· 2 reactions

    Ahhh, something I should have mentioned: ReActor is a bit finicky with installation. I had to uninstall, disable, and reinstall / restart Comfy quite a few times before it finally installed. Happened to friends as well. No way around it.

    unreal_unit · Jan 9, 2024

    I did this about 8 times and it still didn't work :(

    unreal_unit · Jan 9, 2024 · 1 reaction

    Solution ↓↓↓↓↓↓↓↓↓
    https://github.com/Gourieff/comfyui-reactor-node/issues/141

    phil62 · Jan 9, 2024 · 3 reactions (CivitAI)

    Hey, thanks for this awesome workflow!! For SD 1.5 it's working perfectly. Has anyone gotten the workflow to work with SDXL yet? I haven't gotten any good results with it.

    jboogx_creative
    Author
    Jan 9, 2024· 1 reaction

    I personally don't use SDXL much or haven't put much time into it, because I prefer 1.5 for animations. If you check out the Inner_Reflections_AI account on here, she has an SDXL workflow; you should be able to read the article and adapt it to my workflow!

    t437 · Jan 9, 2024 (CivitAI)

    After downloading the Clip Vision file from Drive, it complains about it being an invalid zip. Anyone else having that issue?

    jboogx_creative
    Author
    Jan 10, 2024

    Really? Can anyone else confirm this?

    jboogx_creative
    Author
    Jan 10, 2024

    I just had a friend test this and confirmed that it worked fine for him? :?

    t437 · Jan 10, 2024 · 1 reaction

    It seemed to be an issue with the extraction programs on my Mac. Worked with 7zip.

    jboogx_creative
    Author
    Jan 11, 2024

    @t437 Glad to hear!

    auth_power · Jan 17, 2024 · 1 reaction

    The command-line unzip utility works fine. It's not working with the OS X GUI unzipper.

    DamDam_ · Jan 13, 2024 · 1 reaction (CivitAI)

    Is this workflow compatible with Mac M1?

    jboogx_creative
    Author
    Jan 17, 2024

    I've only used AnimateDiff and Comfy on PC. So, only one way to find out >_<

    emmahobbins1879 · Feb 3, 2024

    I am on a Mac M1 and so far it's not working; I'm getting the same error as the latest comment.

    d3adcat · Mar 16, 2024

    I got it working, but FreeU makes it fall back to CPU. I don't know if FreeU isn't developed for MPS yet or if the error is somewhere else. Anyway, I'm getting about a 1h waiting time for a lo-res 2s video. Not good :/

    shaiai · Jan 23, 2024 (CivitAI)

    I finally got this working!!! (almost)

    Does anyone know which model should go in model_name for FaceRestoreModelLoader in '7. Low-Res ReActor Face Swap'?

    There's nothing there by default (null) and no options either...

    shaiai · Jan 23, 2024 · 4 reactions

    Found it. I downloaded GFPGANv1.4 from https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth

    and placed it in portable > ComfyUI > models > facerestore_models
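    The download-and-place step described in this comment can be sketched as below. The URL is the one given above; the ComfyUI root path and the `models/facerestore_models` layout are assumptions about a portable install.

    ```python
    import urllib.request
    from pathlib import Path

    # URL taken from the comment above.
    GFPGAN_URL = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth"

    def facerestore_dest(comfy_root: str, url: str = GFPGAN_URL) -> Path:
        """Destination path for a face-restore model (assumed portable layout)."""
        return Path(comfy_root) / "models" / "facerestore_models" / url.rsplit("/", 1)[-1]

    def download_facerestore_model(comfy_root: str, url: str = GFPGAN_URL) -> Path:
        dest = facerestore_dest(comfy_root, url)
        dest.parent.mkdir(parents=True, exist_ok=True)
        if not dest.exists():  # skip if already downloaded
            urllib.request.urlretrieve(url, dest)
        return dest
    ```

    After this, the file shows up in the FaceRestoreModelLoader dropdown once ComfyUI is restarted.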

    StephanTual · Jan 26, 2024 · 1 reaction (CivitAI)

    Great stuff. I'd love to learn more about your choices of parameters and models, something the YT video doesn't mention much (at all). I know it's very difficult, as people downloading workflows are usually newbies and content is tough, but it would be nice to understand some of your choices :) In any case, 5*, brilliant work and good fun when mixed with SVD (I'll try to post that soonish)

    NY1 · Jan 26, 2024 (CivitAI)

    I can't fix this:
    "When loading the graph, the following node types were not found:

    ReActor Node for ComfyUI 🔗

    Nodes that have failed to load will show as red on the graph." help?

    DotCom_Artwork · Jan 31, 2024 · 1 reaction (CivitAI)

    Where can I find how to also add the video mask to the video input node? In your video tutorial I see that the node has two 'Choose video to upload' buttons: one for the video, one for the video mask.

    jboogx_creative
    Author
    Jan 31, 2024

    So, I haven't exactly done this yet and a few things would have to be configured. I generally do a masked run separate and then put it all together in post so that I have more total control over my outcome.

    DotCom_Artwork · Jan 31, 2024

    @jboogx_creative First of all, thank you for your reply.
    I had assumed from watching your videos that you were doing 2 runs, but I wasn't sure. With this information, a question arises: how do I render a video with the mask applied, leaving the background black, so that in post-production I can easily make it transparent?

    RagingHorse · Feb 1, 2024 · 1 reaction (CivitAI)

    I get the following message in ComfyUI, and I am not sure what to do.

    If anyone could help that would be amazing!

    Error occurred when executing IPAdapterApplyEncoded:

    Error(s) in loading state_dict for ImageProjModel:
        size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).

    File "D:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 654, in apply_ipadapter
        self.ipadapter = IPAdapter(
    File "D:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 279, in __init__
        self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
    File "D:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

    emmaaistudio646 · Feb 3, 2024

    I get this as well; make sure you have everything loaded. I am not sure those models are installed. I opened the workflow and have no missing-nodes errors... a little too good to be true.

    AnNye0ng · Feb 14, 2024 · 1 reaction (CivitAI)

    Is it normal that doing a 512x512 video with OpenPose and Depth overflows my 12GB of VRAM?? On SD 1.5 (low res, no upscale).

    Sounds crazy to me, am i doing something wrong?

    jboogx_creative
    Author
    Feb 14, 2024

    How many ControlNets are you running? When I run two ControlNets at 512x896, it uses about half of my 24GB.

    AnNye0ng · Feb 14, 2024 · 1 reaction

    @jboogx_creative Oh wow, I didn't know that ControlNets used that much VRAM. It all makes sense now!
    Curse the 12GB 4070 Ti! At least it's fast haha

    So, do you have any tips or opinions on when to use which ControlNet? I've been using Depth + OpenPose and it works fine; adding one of the other 3 seems to make no difference or makes it blurry. Do you have a preference? Does the input matter for which ones you use?

    Thanks for your time!

    jboogx_creative
    Author
    Feb 15, 2024· 2 reactions

    @lamortcriss753 These are my combinations:

    1. LineArt + SoftEdge for getting the motion as close as possible to the original footage. This keeps AnimateDiff's creativity to a minimum.

    2. Depth + LineArt for a semi-loose look that still follows movements pretty well while allowing AD to get a little wild.

    3. OpenPose + Depth for maximum creativity and abstraction while loosely adhering to movement.

    It's all just a style preference and choice you will have to make based on the video you're making :)
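    The three pairings above can be captured as simple presets. The preset names here are made up for illustration; only the ControlNet pairings come from the comment.

    ```python
    # Pairings from the comment above; the preset names are illustrative.
    CONTROLNET_PRESETS = {
        "accurate": ["lineart", "softedge"],   # minimal AnimateDiff creativity
        "semi_loose": ["depth", "lineart"],    # follows movement, a little wild
        "creative": ["openpose", "depth"],     # loose adherence, max abstraction
    }

    def controlnets_for(style: str) -> list[str]:
        """Return which ControlNets to enable (all others would be bypassed)."""
        return CONTROLNET_PRESETS[style]
    ```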

    mmmgreen90266 · Mar 3, 2024 (CivitAI)

    Can somebody help me with this error?

    I am new here and don't even know where to search.

    Error occurred when executing KSampler: 'NoneType' object has no attribute 'to'

    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1368, in sample
        return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1338, in common_ksampler
        samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 346, in motion_sample
        latents = wrap_function_to_inject_xformers_bug_info(orig_comfy_sample)(model, noise, *args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 360, in wrapped_function
        return function_to_wrap(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
        samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\__init__.py", line 130, in KSampler_sample
        return _KSampler_sample(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 700, in sample
        return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\__init__.py", line 149, in sample
        return sample(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 605, in sample
        samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 544, in sample
        samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 613, in sample_dpmpp_2m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 282, in forward
        out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 272, in forward
        return self.apply_model(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 1012, in apply_model
        out = super().apply_model(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 269, in apply_model
        out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 385, in evolved_sampling_function
        cond_pred, uncond_pred = sliding_calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 494, in sliding_calc_cond_uncond_batch
        sub_cond_out, sub_uncond_out = comfy.samplers.calc_cond_uncond_batch(model, sub_cond, sub_uncond, sub_x, sub_timestep, model_options)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 197, in calc_cond_uncond_batch
        c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 468, in get_control_inject
        return self.get_control_advanced(x_noisy, t, cond, batched_number)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 32, in get_control_advanced
        return self.sliding_get_control(x_noisy, t, cond, batched_number)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 37, in sliding_get_control
        control_prev = self.previous_controlnet.get_control(x_noisy, t, cond, batched_number)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 468, in get_control_inject
        return self.get_control_advanced(x_noisy, t, cond, batched_number)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 32, in get_control_advanced
        return self.sliding_get_control(x_noisy, t, cond, batched_number)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 78, in sliding_get_control
        control = self.control_model(x=x_noisy.to(dtype), hint=self.cond_hint, timesteps=timestep.float(), context=context.to(dtype), y=y)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\cldm\cldm.py", line 308, in forward
        h = self.middle_block(h, emb, context)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 59, in forward
        return forward_timestep_embed(self, *args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 94, in forward_timestep_embed
        x = layer(x, emb)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 229, in forward
        return checkpoint(
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 191, in checkpoint
        return func(*inputs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 242, in forward
        h = self.in_layers(x)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
        input = module(input)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 251, in forward
        weight, bias = comfy.ops.cast_bias_weight(self, input)
    File "D:\stable_dif\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 27, in cast_bias_weight
        weight = s.weight.to(device=input.device, dtype=input.dtype, non_blocking=non_blocking)

    cbevil933 · Mar 8, 2024 (CivitAI)

    Has this workflow been taken offline, so that it's just a big PNG image file now that I have to rebuild? Or does this workflow exist somewhere and I'm being stupid about how to get it???

    Help much appreciated

    royalcreed · Mar 9, 2024

    I also encountered this problem, have you solved it?

    cbevil933 · Mar 11, 2024

    @royalcreed nope... hoping someone offers a helping hand. fingers crossed ;)

    PixelAlchemist · Mar 14, 2024 · 1 reaction

    With the newest ComfyUI version you can actually use the PNG file as a workflow import.
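    The PNG import works because ComfyUI embeds the workflow JSON in the image's metadata text chunks. A minimal sketch of pulling it out yourself (the "workflow" keyword matches what ComfyUI writes in uncompressed tEXt chunks; compressed zTXt/iTXt chunks are not handled here):

    ```python
    import struct

    def extract_workflow_from_png(png_bytes: bytes, keyword: str = "workflow") -> str | None:
        """Return the text stored under `keyword` in a PNG's tEXt chunks, or None."""
        assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        pos = 8  # skip the PNG signature
        while pos < len(png_bytes):
            (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
            ctype = png_bytes[pos + 4:pos + 8]
            data = png_bytes[pos + 8:pos + 8 + length]
            if ctype == b"tEXt":
                key, _, text = data.partition(b"\x00")  # keyword \0 text
                if key.decode("latin-1") == keyword:
                    return text.decode("latin-1")
            pos += 12 + length  # 4 length + 4 type + data + 4 CRC (CRC not verified)
        return None
    ```

    Feeding the returned string to `json.loads` gives the same graph that dropping the PNG onto the ComfyUI canvas loads.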

    cbevil933 · Mar 15, 2024 · 1 reaction

    @ben367 WHHHHAAATTT IS THIS VOODOO MAGICKK!!! AMAZING! thank you for helping!!

    jboogx_creative
    Author
    Mar 16, 2024

    @cbevil933 haha glad you worked it out XD

    neilvinsern · Mar 22, 2024

    Without a .json workflow, I still can't get the PNG file into ComfyUI?

    Xcom · Mar 23, 2024

    @neilvinsern Maybe update your comfy

    Workflows
    SD 1.5

    Details

    Downloads
    4,991
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/8/2024
    Updated
    5/16/2026
    Deleted
    -

    Files

    jboogxMACHINELEARNERANIMATEDIFFWORKFLOW_v30.zip