    Runpod WAN 2.1 Img2Video Template - ComfyUI - CUDA-2.5-T1.0
    NSFW

    🎥 WanVideo ComfyUI RunPod Setup Guide

    This comprehensive guide walks you through setting up and using the WanVideo ComfyUI environment on RunPod for AI video generation. Wan needs a lot of VRAM to produce outputs at a reasonable speed; 48GB for $0.44 an hour is a pretty good deal, IMO.

    Nothing to download from here. This is a Runpod Template with all the models and workflows added.

    WAN 2.1 Video - ComfyUI Full - T2.0 - Running on CUDA 2.5

    https://runpod.io/console/deploy?template=6k2saccgx8&ref=0eayrc3z

    ## UPDATE
    20/03/25
    Noticed a PyTorch bug when using Community Cloud GPUs.
    EDIT: It only affects the 5090 cards with the Blackwell architecture.
    
    19/03/25
    "'NoneType' object is not callable" error fix
    I had switched ComfyUI to a shallow (depth 0) git clone and baked the custom nodes into the container to reduce its size, but these changes introduced several bugs. I've since removed them, and everything should now work much better.
    
    17/03/25
    - Added environment variables to control setup behavior:
      - `SKIP_DOWNLOADS=true`: Skip downloading models
      - `SKIP_NODES=true`: Skip verification of custom nodes (nodes are packaged into the container for faster builds)
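
    These flags are read by the container's startup script. A quick sketch of how they behave (set them as template environment variables before deploying, or export them in a JupyterLab terminal before re-running setup; the exact script internals may differ):

```shell
# Hypothetical sketch of the setup script's skip logic, based on the
# SKIP_DOWNLOADS / SKIP_NODES variables documented above.
export SKIP_DOWNLOADS=true   # skip downloading models
export SKIP_NODES=true       # skip custom-node verification

if [ "${SKIP_DOWNLOADS}" = "true" ]; then
  echo "Skipping model downloads"
fi
if [ "${SKIP_NODES}" = "true" ]; then
  echo "Skipping custom node verification"
fi
```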

    🚀 Getting Started

    Step 1: Deploying Your Pod

    1. Sign up/login to RunPod

    2. Navigate to "Deploy" → "Template"

    3. Search for "WAN 2.1 Video - ComfyUI Full - T1.0" template

    4. Select the hardware:

      • Recommended GPU: NVIDIA A40 (48GB VRAM minimum)

      • Storage: 60GB minimum (100GB recommended)

      • Filter for GPUs on CUDA 12.4 or above

    5. Click "Deploy" to launch your pod

    Step 2: Initial Setup

    Once deployed, your pod will automatically:

    • Download all required models (takes ~10 minutes)

    • Install custom nodes

    • Set up the environment

    You'll see the following message when setup is complete:

    ⭐⭐⭐⭐⭐   ALL DONE - STARTING COMFYUI ⭐⭐⭐⭐⭐
    



    Step 3: Accessing Your Environment

    From your pod's detail page, access:

    1. ComfyUI Interface:

      • Port 8188 (primary interface for creating videos)

      • Wait for this to turn green after setup completes

    2. JupyterLab:

      • Port 8888 (available immediately, even during setup)

      • Use for file management, terminal access, and notebook interactions

    3. Image Browser:

      • Port 8181 (for managing your output files)

      • View and organize generated videos and images
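
    If you prefer the command line, RunPod serves each exposed HTTP port through its proxy at a predictable URL, so you can build the three links yourself (the pod ID below is a placeholder; use your own from the console):

```shell
POD_ID="abc123xyz"  # placeholder: your pod's ID from the RunPod console

# RunPod's HTTP proxy pattern: https://<pod-id>-<port>.proxy.runpod.net
for PORT in 8188 8888 8181; do
  echo "https://${POD_ID}-${PORT}.proxy.runpod.net"
done
```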


    💾 Managing Models

    Downloading Additional Models

    The initial setup only downloads what's needed for the Wan_Image2Video_720pAFIX.json workflow. If you want to use the other workflows, run ./download-files.sh from the terminal and it will download all the models for Kijai's workflows.

    Use the flexible model download system:

    1. Edit the configuration file:

      files.txt
      

    2. Add model entries using this format:

      type|folder|filename|url
      

      Examples:

      normal|checkpoints|realistic_model.safetensors|https://huggingface.co/org/model/resolve/main/model.safetensors
      
      gdrive|loras|animation_style.safetensors|https://drive.google.com/uc?id=your_file_id
      
    3. Run the download script:

      ./download-files.sh
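
    Under the hood, a script like download-files.sh only needs to split each files.txt line on `|` and dispatch on the first field. A minimal sketch of that loop, echoing the action instead of downloading (the real script's internals may differ):

```shell
# Parse one files.txt entry of the form: type|folder|filename|url
parse_entry() {
  printf '%s\n' "$1" | while IFS='|' read -r type folder filename url; do
    case "$type" in
      normal) echo "wget -O models/$folder/$filename $url" ;;
      gdrive) echo "gdown --output models/$folder/$filename $url" ;;
      *)      echo "unknown type: $type" ;;
    esac
  done
}

parse_entry "normal|checkpoints|realistic_model.safetensors|https://example.com/model.safetensors"
```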
      

    WAN 2.1 Models

    https://huggingface.co/Kijai/WanVideo_comfy/tree/main

    https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files

    Pre-installed Models

    The template comes with these key models:

    • Wan 2.1 Models:

      • Wan2_1-I2V-14B-720P_fp8_e4m3fn.safetensors (Base video model)

      • wan_2.1_vae.safetensors (VAE)

    • Text Encoders:

      • umt5_xxl_fp16.safetensors (Advanced text encoder)

    • CLIP Vision:

      • clip_vision_h.safetensors (Enhanced vision model)
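
    From a JupyterLab terminal you can confirm these files landed intact. A quick check (the text_encoders and clip_vision subfolder names match the setup's own error messages; the diffusion_models and vae subfolders are my assumption for a standard ComfyUI layout):

```shell
# Print the size of each expected model, or MISSING if it is absent.
# A truncated download shows up as a file far smaller than the multi-GB originals.
check_models() {
  dir="$1"
  for f in \
    diffusion_models/Wan2_1-I2V-14B-720P_fp8_e4m3fn.safetensors \
    vae/wan_2.1_vae.safetensors \
    text_encoders/umt5_xxl_fp16.safetensors \
    clip_vision/clip_vision_h.safetensors
  do
    if [ -f "$dir/$f" ]; then
      ls -lh "$dir/$f"
    else
      echo "MISSING: $f"
    fi
  done
}

check_models "/workspace/ComfyUI/models"
```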

    🎨 Using ComfyUI for Video Generation

    Step 1: Load a Workflow

    1. Access ComfyUI interface (port 8188)

    2. Click on the folder icon in the top menu

    3. Navigate to the default workflows folder

    4. Select one of the pre-configured workflows

    Step 2: Customize Your Generation

    1. Modify text prompts to describe your desired video

    2. Adjust settings:

      • CFG Scale: 7-9 recommended for quality (higher = more prompt adherence)

      • Steps: 25+ for better quality (more steps = more refinement)

      • Resolution: Start with 512x512 for tests, increase for final outputs

      • Frame count: Determines video length

    Step 3: Generate and View Results

    1. Click "Queue Prompt" to start generation

    2. Monitor progress in the ComfyUI interface

    3. When complete, view your video in the output panel

    4. Access all outputs via the Image Browser (port 8181)
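
    The queue step can also be scripted against ComfyUI's HTTP API (the /prompt endpoint is part of stock ComfyUI), which is handy for batching runs. A stdlib-only sketch; the workflow JSON would come from a file you export in the UI via "Save (API Format)", and the filename in the comment is hypothetical:

```python
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow in the JSON body that /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1", port: int = 8188) -> bytes:
    """POST a workflow to ComfyUI's /prompt endpoint and return the raw response."""
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage, after exporting a workflow from the UI via "Save (API Format)":
#   workflow = json.load(open("my_workflow_api.json"))  # hypothetical filename
#   queue_prompt(workflow)
```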

    📊 Managing Your Files

    Using JupyterLab

    1. Access JupyterLab (port 8888)

    2. The workspace folder contains:

      • /ComfyUI - Main application and models

      • Files for downloading additional models

      • Notebook for image/video browsing

    Using Image Browser

    1. Access the browser interface (port 8181)

    2. Browse your generated content by:

      • Creation date

      • Filename

      • Metadata

    3. Right-click on items for additional options (download, delete, etc.)

    🔧 Advanced Features

    SSH Access

    To enable SSH:

    1. Set your PUBLIC_KEY in the template settings before deployment

    2. Connect using the command shown in your pod's connect options
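
    For step 1, you can generate a keypair locally and paste the public key into the template's PUBLIC_KEY field. A sketch (the commented connect command is illustrative; use the exact host and port from your pod's connect panel):

```shell
mkdir -p ~/.ssh
KEY=~/.ssh/runpod_ed25519

# Generate an ed25519 keypair if one doesn't already exist (no passphrase here for brevity)
[ -f "$KEY" ] || ssh-keygen -t ed25519 -f "$KEY" -N ""

# Paste this public key into the template's PUBLIC_KEY setting:
cat "$KEY.pub"

# Then connect using the host/port shown in your pod's connect options, e.g.:
# ssh root@<pod-ip> -p <port> -i ~/.ssh/runpod_ed25519
```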

    Custom Nodes

    The template includes these pre-installed node collections:

    • Workflow utilities (cg-use-everywhere, ComfyUI-Manager)

    • UI enhancements (rgthree-comfy, was-node-suite-comfyui)

    • Video-specific nodes (ComfyUI-WanVideoWrapper, ComfyUI-VideoHelperSuite)

    • Performance optimizers (ComfyUI-Impact-Pack)

    🛠️ Troubleshooting

    If you encounter issues:

    1. ComfyUI not starting:

      • Check JupyterLab terminal for logs

      • Ensure models downloaded correctly

    2. Models not loading:

      • Verify files exist in /ComfyUI/models/ directories

      • Check file sizes to ensure complete downloads

    3. Custom node problems:

      • Try reinstalling via ComfyUI Manager

      • Restart your pod if necessary

    🎯 Tips for Best Results

    • Use detailed prompts with specific descriptions

    • Increase CFG and step count for higher quality videos

    • Save your successful workflows for future use

    • Monitor VRAM usage and adjust resolution accordingly

    • Use the Image Browser to organize and review your outputs

    Need more help? Check the readme.md file in JupyterLab or reach out to the RunPod community!



    Comments (18)

    Enokk225 · Mar 10, 2025

    "error starting container: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: requirement error: unsatisfied condition: cuda>=12.5, please update your driver to a newer version, or use an earlier cuda container: unknown"

    DIhan (Author) · Mar 11, 2025

    which GPU did you use?

    Your host system has NVIDIA drivers that don't support CUDA 12.5

    Enokk225 · Mar 14, 2025

    Would you make templates for Vast.ai?

    DIhan (Author) · Mar 15, 2025

    Something like this should do it. I haven't had a chance to test, so let me know if you run into issues; I'll try setting one up this weekend.

    https://cloud.vast.ai/?ref_id=218806&creator_id=218806&name=Runpod%20WAN%202.1%20Img2Video%20Template%20-%20ComfyUI

    aitrancer · Mar 19, 2025

    Error: 'NoneType' object is not callable

    There seems to be a bug in the current Comfy Version. If you get this, you need to reset Comfy to the latest working commit. In (web) terminal:

    cd /workspace/ComfyUI
    git fetch --unshallow
    git reset --hard e8e990d6b8b5c813c87d1aeaed3e5110c7aba166

    DIhan (Author) · Mar 19, 2025

    Thanks for the heads up. I'll test it out and make any adjustments to avoid this

    DIhan (Author) · Mar 19, 2025

    Updated the fix.

    aitrancer · Mar 30, 2025

    Thank you! This container is my favorite :)

    snobbias124 · Apr 25, 2025

    It doesn't seem to work with a 5090 (or any 5000 series, I'm guessing):

    CLIPTextEncode

    CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

    snobbias124 · Apr 25, 2025

    Updating CUDA inside a pod is a bit of a hassle, I noticed. For starters, ComfyUI doesn't seem to use a virtual environment. I guess it could be done anyway with some know-how, but I had to give up since it kept using the pod's own disk instead of my attached network volume, which led to running out of space every time.

    DIhan (Author) · May 2, 2025

    Correct, I haven't had a chance to work out the bugs.

    compo6628585 · May 20, 2025

    Love this. Thank you so much for this easy-to-use workflow on RunPod! Could I ask if it would be possible to do a first/last-frame version using the same method?

    DIhan (Author) · May 23, 2025

    I do need to update the templates. There are so many new VACE workflows etc. that I would like to add, but you can always drag in a workflow like the first/last-frame ones.

    compo6628585 · May 23, 2025

    @DIhan The thing about your template that makes it so great is the fact it's all pre-installed. I'm really new to all this, and can barely get any workflow to work because I need to download large models to make them run (I have limited hard-drive space haha), and they can be way overcomplicated too. This template was easy to understand and learn from, and does the job perfectly. I've adjusted it a little, adding the Power Lora Loader and color matcher, and that's it. I have a totally separate workflow for upscaling and interpolation. Honestly, if you could make another one nice and simple like this for start/end frame, I'd be creamy lolz. Thanks again mate, appreciate it!

    P.S. I think I would have given up on I2V without this template and your well-explained tutorial/instructions. It's now turned into an addiction haha.

    breakfastcerealbeats405 · Jun 16, 2025

    I'm definitely going to need this in the future when I start making my way around WAN more comfortably. Having never even heard of RunPod until like a week ago, I'm stoked. 48c/hr for some crazy GPU usage sounds like a godsend. Thanks for making this, man; laymen like me are going to get a lot of use out of it. You should consider posting to the SD subreddit, if you haven't already. They always need good information and direction.

    compo6628585 · Jul 31, 2025

    Stopped working today... literally within a few hours. I was on it earlier (maybe 4-5 hours ago). Now the pod keeps getting the same error:

    ❌ Error: clip_vision_h.safetensors is missing from clip_vision directory
    ❌ Error: umt5_xxl_fp16.safetensors is missing from text_encoders directory
    ⚠️ Some models failed to download or verify

    The pod won't even launch so I can load the models myself inside. I'd really appreciate it if you can fix this. I'll report back if it starts working again, but I've had this same error 3 times in a row.

    Update: heads-up.

    For some reason my pod got corrupted (maybe?), as when I tried launching a new one with this template it worked fine. Which is cool it still works, but minus my LoRAs, workflows, and generations that I'd been too lazy to download/back up. Bittersweet 😥😊

    DIhan (Author) · Aug 3, 2025

    Yeah, my volume has gotten corrupted a few times too and I had to start from scratch.

    If you're not already, use the T2 version, which I updated and which runs the dependencies better:

    https://console.runpod.io/deploy?template=6k2saccgx8&ref=0eayrc3z

    deepworld23 · Aug 31, 2025

    Hey, love the setup. Just curious: I installed everything onto my RunPod storage so I can keep it all for later. Will it only work with whatever GPU I used to install it? I tried using a more powerful GPU after installing to see if I could speed things up, but I kept getting a CLIP Vision Encode kernel error. When I went back to the GPU I used to install, everything worked fine.


    Details

    Downloads: 621
    Platform: CivitAI
    Platform Status: Available
    Created: 3/3/2025
    Updated: 5/13/2026
    Deleted: -

    Files

    runpodWAN21Img2video_cuda25T10.zip
