UPDATE: There is now a WAN 2.2 version of this workflow, which can be found here: https://civarchive.com/models/2194801/simple-wan-22-i2v-60-fps-video-comfyui-workflow-6-step-lightning-with-3-ksamplers-and-upscale
I recently got into WAN image-to-video and took inspiration from a whole host of different workflows and posts on this site and others. I ended up creating this tidy, short, and relatively simple workflow that lets you run i2v generation and then interpolate the result up to higher frame rates.
Please tag this workflow in any videos you upload here. I don't care about attribution, I'm just really interested to see what others make using the tool!
It was useful to me, and it's what I've used to generate all of my videos so far, so I thought I'd share it.
Happy to have feedback!
Check out my recently uploaded text2image workflow that I used to generate my initial images for this i2v one: https://civarchive.com/models/1764077
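For anyone curious what the interpolation stage is doing conceptually: the sketch below inserts blended in-between frames to raise the frame rate. It is only a linear-blend illustration, not the workflow's actual method (dedicated interpolation nodes such as RIFE estimate optical flow and produce far better motion).

```python
import numpy as np

def interpolate_frames(frames, factor=2):
    """Insert (factor - 1) linearly blended frames between each pair.
    Linear blending is just an illustration; flow-based interpolators
    (e.g. RIFE) are what video workflows actually use."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            # Cross-fade between the two neighbouring frames.
            out.append((1 - t) * a + t * b)
    out.append(frames[-1])
    return out

# Two tiny dummy "frames": one all-black (0.0) and one all-white (1.0).
frames = [np.full((2, 2), v, dtype=np.float32) for v in (0.0, 1.0)]
result = interpolate_frames(frames, factor=2)
print(len(result))  # 3: the original two frames plus one blended frame
```

Doubling a 16 fps WAN output this way yields ~32 fps; interpolating by a larger factor (or chaining passes) is how you reach 60 fps.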

Comments (33)
I have no idea what it doesn't like.
# ComfyUI Error Report

## Error Details
- **Node ID:** 3
- **Node Type:** KSampler
- **Exception Type:** AttributeError
- **Exception Message:** 'NoneType' object has no attribute 'get_model_object'

## Stack Trace
```
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 349, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 224, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 63, in map_node_over_list_with_metadata
    results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb)
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 63, in map_node_over_list_with_metadata
    results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb)
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 196, in _map_node_over_list
    process_inputs(input_dict, i)
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 185, in process_inputs
    results.append(getattr(obj, func)(**inputs))
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1525, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1478, in common_ksampler
    latent_image = comfy.sample.fix_empty_latent_channels(model, latent_image)
File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\comfy\sample.py", line 28, in fix_empty_latent_channels
    latent_format = model.get_model_object("latent_format")  # Resize the empty latent image so it has the right number of channels
AttributeError: 'NoneType' object has no attribute 'get_model_object'
```

## System Information
- **ComfyUI Version:** 0.3.34
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.7.0+cu128

## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 8585216000
- **VRAM Free:** 4366109658
- **Torch VRAM Total:** 3087007744
- **Torch VRAM Free:** 53316570

## Logs
```
...
2025-07-10T07:08:49.704904 - Cannot import F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\ComfyUI-Upscaler-Tensorrt module for custom nodes: No module named 'tensorrt'
2025-07-10T07:08:50.778846 - Warning: Could not load sageattention: No module named 'sageattention'
2025-07-10T07:08:54.078124 - Cannot import F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\ComfyUI_Searge_LLM module for custom nodes: No module named 'llama_cpp'
...
2025-07-10T07:08:56.900531 - To see the GUI go to: http://127.0.0.1:8188
2025-07-10T07:09:46.738544 - got prompt
2025-07-10T07:09:47.251544 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-07-10T07:09:48.345521 - Requested to load CLIPVisionModelProjection
2025-07-10T07:09:49.816291 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-07-10T07:09:57.041999 - SELECTED: input1
2025-07-10T07:09:57.052544 - ImpactSwitch: invalid select index (ignored)
2025-07-10T07:09:57.078605 - Requested to load WanTEModel
2025-07-10T07:10:01.282638 - Requested to load WanVAE
2025-07-10T07:10:03.247564 - !!! Exception during processing !!! 'NoneType' object has no attribute 'get_model_object'
2025-07-10T07:10:03.286886 - Prompt executed in 16.46 seconds
...
2025-07-10T07:12:17.512347 - got prompt
2025-07-10T07:12:17.630152 - !!! Exception during processing !!! 'NoneType' object has no attribute 'get_model_object'
```
line 63, in map_node_over_list_with_metadata results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1525, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1478, in common_ksampler latent_image = comfy.sample.fix_empty_latent_channels(model, latent_image) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\comfy\sample.py", line 28, in fix_empty_latent_channels latent_format = model.get_model_object("latent_format") #Resize the empty latent image so it has the right number of channels ^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'get_model_object' 2025-07-10T07:12:17.633153 - Prompt executed in 0.03 seconds 2025-07-10T07:14:28.919136 - got prompt 2025-07-10T07:14:29.017043 - !!! Exception during processing !!! 
'NoneType' object has no attribute 'get_model_object' 2025-07-10T07:14:29.019043 - Traceback (most recent call last): File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 63, in map_node_over_list_with_metadata results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 63, in map_node_over_list_with_metadata results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1525, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1478, in common_ksampler latent_image = comfy.sample.fix_empty_latent_channels(model, latent_image) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\comfy\sample.py", line 28, in fix_empty_latent_channels latent_format = model.get_model_object("latent_format") #Resize the empty latent image so it has the right number of channels ^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'get_model_object' 2025-07-10T07:14:29.021043 - Prompt executed in 0.03 seconds 2025-07-10T07:14:50.849833 - got prompt 2025-07-10T07:14:50.962966 - !!! Exception during processing !!! 
'NoneType' object has no attribute 'get_model_object' 2025-07-10T07:14:50.964968 - Traceback (most recent call last): File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 63, in map_node_over_list_with_metadata results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 63, in map_node_over_list_with_metadata results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1525, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1478, in common_ksampler latent_image = comfy.sample.fix_empty_latent_channels(model, latent_image) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\comfy\sample.py", line 28, in fix_empty_latent_channels latent_format = model.get_model_object("latent_format") #Resize the empty latent image so it has the right number of channels ^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'get_model_object' 2025-07-10T07:14:50.966968 - Prompt executed in 0.05 seconds 2025-07-10T07:18:49.759540 - got prompt 2025-07-10T07:18:49.857662 - !!! Exception during processing !!! 
'NoneType' object has no attribute 'get_model_object' 2025-07-10T07:18:49.859635 - Traceback (most recent call last): File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 349, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 224, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 63, in map_node_over_list_with_metadata results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 63, in map_node_over_list_with_metadata results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 196, in _map_node_over_list process_inputs(input_dict, i) File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\execution.py", line 185, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1525, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\nodes.py", line 1478, in common_ksampler latent_image = comfy.sample.fix_empty_latent_channels(model, latent_image) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\AI\ComfyUI-StableDif-t27-p312-cu128\ComfyUI\comfy\sample.py", line 28, in fix_empty_latent_channels latent_format = model.get_model_object("latent_format") #Resize the empty latent image so it has the right number of channels ^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'get_model_object' 2025-07-10T07:18:49.860547 - Prompt executed in 0.02 seconds ``` ## Attached Workflow Please make sure that workflow does not contain any sensitive information such as API keys or passwords. ``` Workflow too large. Please manually upload the workflow from local file system. ``` ## Additional Context (Please add any additional context or steps to reproduce the error here)Do you have ComfyUI manager installed with your ComfyUI? If not, google that and install it. Then when you load the workflow it will probably warn you of a lot of missing nodes that need installing, in particular there are "use_everywhere" looks missing at least, and possibly some others. ComfyUI manager will help you install the missing nodes.
@sagramore Of course I have the manager; when loading the workflow I have no missing nodes.
@tany6666372 Sorry, then I have no idea! Is it possible that some of the "use_everywhere" nodes somehow became disconnected in your attempt? Renaming any of the nodes can break parts of the workflow, as some of them match node names with regular expressions.
Minimum VRAM in order for it to work?
Ultimately that depends on your chosen model. I mostly use wan2.1_i2v_480p_14B_fp8_e4m3fn to make 480p video. A 3-second video at 20 FPS, interpolated up to 60 FPS, takes somewhere around 10 minutes to generate on my 16 GB 4080. That's the only card I have to test on though!
Not sure what is going on with mine. I followed the workflow and made mine look identical to the picture you provided, but when I run it, it hangs at about 34% and then gets stuck at "reconnecting". I'll attach a picture of what I am seeing when I have a chance. Any idea what is going on?
So I'm absolutely not an expert at all, but "reconnecting" on ComfyUI suggests that the whole python engine in the background has crashed/failed. Does it show anything in the command prompt window before/after it hangs?
@sagramore
Below is what I get. If I press any key the command window closes:
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
got prompt
VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
Requested to load CLIPVisionModelProjection
loaded completely 5691.8 1208.09814453125 True
Using scaled fp8: fp8 matrix mult: False, scale input: False
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
SELECTED: input2
D:\matrix\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable>pause
Press any key to continue . . .
@Phoena14 Very sorry, but I've no idea! I don't know why it says "SELECTED: input2" or has the line after that. Have you done the usual "update everything" by running the "ComfyUI_windows_portable\update\update_comfyui_and_python_dependencies.bat" script, to see if that helps?
@sagramore Just ran the update script, but that didn't change anything as far as I can tell. I've also tried launching it with all 3 different methods: CPU, GPU, and the middle option...
Hello friend/author. Your workflow was the first one I downloaded on this site. Help me understand it: where is the video duration calculated - only in the WAN Image to Video node? And how do I reduce the frame rate from 60 to 24? I changed the low frame rate, but the video still lasts no longer than a second. Thanks in advance for your help.
Hello. So, the workflow first generates a video at the "Low frame rate", and the duration of the original video is set by the WAN Image to Video node's "length", which is the number of frames to generate. It will add 1 to whatever number you enter; that's normal. E.g. if you have "low frame rate" at 16 (the default for WAN) and you choose a length of 32, it will show "33" for a 2-second video.
Now, the interpolation can mess with this a bit. There is a node "Interpolation multiplier". By default the workflow will take this number and multiply it by the "low frame rate", e.g. if it's 3 then you would get 3 x 16 = 48 frames per second, in the final video. However, because it's interpolated, it will still only last for 2 seconds - the same as before.
If you just want the final video to be the same frame rate as "low frame rate" then make the interpolation multiplier value = 1.
You can get more custom/complicated with it by manually setting the "final video" frame rate. This can change the duration/speed of the final video. To do so, right-click the "Final 60 FPS Video" node, go to "UE Connectable Widgets" and unselect "frame_rate". Then you can enter your own value in the text box. You can re-enable it by re-selecting it in the same menu.
Hope that helps!
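In code form, the arithmetic described above is just the following (a plain-Python sketch for illustration, not part of the workflow itself):

```python
def video_stats(length, low_fps, interp_mult):
    """Sketch of the duration/frame-rate arithmetic from the reply above."""
    frames = length + 1                # WAN's "length" input yields length + 1 frames
    duration_s = frames / low_fps      # duration of the low-frame-rate video
    final_fps = low_fps * interp_mult  # interpolation raises FPS; duration stays the same
    return frames, duration_s, final_fps

# length=32 at 16 FPS -> 33 frames, ~2 s; x3 interpolation -> 48 FPS output
print(video_stats(32, 16, 3))  # (33, 2.0625, 48)
```

Note that interpolation multiplies the frame count, not the duration, which is why the clip length never changes unless you override the final frame rate.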
how do i lower the framerate to 30?
Adjust the interpolation multiplier node. It needs to be an int though, so for 30 FPS you might need to set the low frame rate to 15 or something (15 × 2 = 30).
sagramore thank you
Hi, I've run into a problem and I'm not sure how to solve it. I really need some help! It says "VFI model RIFE requires at least 2 frames to work with, only found 1. Please check the frame input using PreviewImage." Do I have to upload two images or..?
It sounds like you are only generating 1 frame in your video. Ensure that the "length" option for the latent video is more than 2!
sagramore Thanks so much for the reply! I did change the length value to about 240 in "vedio and sampler"tag, but it didn''t work and this error shows up.. Even I used the default value 5, change nothing and just upload a image try to run it, this error still exists..
cn13240338340 Hi again. The only thing I can recommend is perhaps to try re-downloading the file and loading with all default settings. Then modify the "length" to be something between 32 and 81 (I believe WAN 2.1 was trained with 81 as a frame limit or something similar).
If you still get the error at that point then I am afraid I am not sure what is happening.
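The constraints from the RIFE error and the frame limit mentioned above can be sketched as a quick sanity check (hypothetical helper for illustration, not part of the workflow):

```python
def check_length(length, max_length=81):
    """Validate a WAN 'length' value before RIFE interpolation (sketch)."""
    frames = length + 1  # WAN generates length + 1 frames
    if frames < 2:
        raise ValueError("RIFE VFI needs at least 2 frames")
    if length > max_length:
        raise ValueError("WAN 2.1 reportedly tops out around 81 frames")
    return frames

print(check_length(32))  # 33
```

So any length of 1 or more satisfies RIFE; if the node still only sees 1 frame, the frame batch is being lost somewhere upstream of the interpolation node.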
sagramore Thanks for your advice! Not sure why it still doesn't work, but I really appreciate you replying to me! Hopefully I'll figure out the reason myself later~
OMG. Thank you!!!! Very new to AI and ComfyUI. Been learning for a couple weeks. I was struggling to find an image-to-video and a text-to-video that actually does what it says. These worked right out of the box for me. Grateful. BTW, your loras are fire too!!!
Appreciate the comment, glad you like it! I'd love it if you upload any of your videos if you tag this workflow in the resources so I can see them pop up here :)
I'm new to AI, a complete beginner. What do I do once I've downloaded this "workflow"? Should I put it in a folder or something? If so, which folder? Sorry, noob question.
You need to download and install comfyui, if you google a tutorial for this there are plenty of videos around that can help!
Once you have installed it and it is running, you can open the workflow in comfyui (either in the GUI itself with the menu options or simply by dragging the file into the comfyui window).
It does need some custom nodes installing, but if you also install comfyui manager (another google!) then that will help you do this easily.
I'm a beginner and there's a lot of basic stuff I don't know, so please help me out a little, friends. I downloaded this workflow, imported the nodes, and downloaded the models (I guess), yet I still get this error:
Prompt outputs failed validation: RIFE VFI: - Required input is missing: frames
How can I resolve this? ChatGPT didn't help much.
I'm encountering the same issue: the "frames" input in Interpolation isn't connected. Which upstream node should it be linked to? I'm using the latest version of ComfyUI. The workflow imported without errors, and no nodes were missing. I'm urgently awaiting your response. Thank you!
@aries011788316 Hi, sorry I've not been on civitai for a while so I'm a bit out of date. I loaded mine up and I had to update my nodes and comfyui. Once I did that, some of the "anything anywhere" nodes broke so the links have broken.
You can reconnect them manually, but I will try to upload an updated version as well!
Hey Sagramore, thank you for the workflow. Which ComfyUI version do you use? I too get the RIFE error, and the deeper I dig, the more it looks like it's caused by the newer ComfyUI versions' validation rules breaking the broadcast nodes.
Hi, sorry I've not been on civitai for a while so I'm a bit out of date. I loaded mine up and I had to update my nodes and comfyui. Once I did that, some of the "anything anywhere" nodes broke so the links have broken.
You can reconnect them manually, but I will try to upload an updated version as well!
Thank you OP for uploading the workflow. I tried it but got this error. It seems the image is not connected to the frames input under the interpolation. I tried to figure out the connection, but nothing is working.
Prompt execution failed
Prompt outputs failed validation: RIFE VFI: - Required input is missing: frames
Hi, sorry I've not been on civitai for a while so I'm a bit out of date. I loaded mine up and I had to update my nodes and comfyui. Once I did that, some of the "anything anywhere" nodes broke so the links have broken.
You can reconnect them manually, but I will try to upload an updated version as well!
Sadly, it doesn't work on my end, it keeps saying something is missing, just like the {2.2} to the output. :(
