    WAN2.2 5B - Unlimited Long Video Generation Loop - v1.0

    Unlock the potential for infinite video narratives with this powerful ComfyUI workflow. Designed for the WAN2.2 5B text-and-image-to-video (TI2V) model, this setup automates the creation of long, coherent video sequences through an intelligent feedback loop. It doesn't just string clips together; it creates a visually consistent and dynamically evolving story.

    ✨ Key Features & Highlights:

    • AI-Powered Prompt Chaining: The core of this workflow. An Ollama multi-modal LLM (like Qwen2.5-VL) analyzes the last frame of each generated video clip and automatically creates a new, detailed prompt for the next segment, ensuring each new clip logically continues from the previous one (see the API sketch after this list).

    • Perfect for Long-Form Content: Generate multi-part scenes, evolving transformations, or endless walking cycles without manual intervention. The loop is configurable to run for any number of iterations.

    • Superior Visual Consistency: Incorporates a color matching node (easy imageColorMatch) to harmonize the colors and tones between segments, preventing jarring visual jumps and creating a seamless flow.

    • Built-In Quality Enhancement: Includes a RIFE VFI frame interpolation node that doubles the frame rate of the final assembled video, resulting in buttery-smooth motion.

    • Fully Automated Pipeline: From loading the initial image to rendering the final high-quality video, the process is hands-free after the initial setup.
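
    As a rough illustration of the prompt-chaining step, here is a minimal sketch of what a single Ollama vision call looks like when made by hand. The ComfyUI-Ollama node handles this inside the workflow; the model tag and prompt wording below are assumptions, not the node's exact request:

    base64 -w 0 last_frame.png > frame.b64   # encode the extracted last frame
    jq -n --rawfile img frame.b64 '{
          model: "qwen2.5-vl:7b",
          prompt: "Describe the motion in this frame and write a short, movement-focused prompt that continues the scene.",
          images: [($img | rtrimstr("\n"))],
          stream: false
        }' > payload.json
    curl -s http://localhost:11434/api/generate -d @payload.json | jq -r '.response'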

    πŸ› οΈ How It Works:

    1. Preparation: The workflow starts with your initial image, which is scaled and analyzed.

    2. Ollama Vision Analysis: The LLM examines the image and generates a dynamic, movement-focused prompt tailored for the WAN2.2 model.

    3. Video Generation: The WAN2.2 5B model generates a short video clip (~5 seconds) based on this AI-crafted prompt.

    4. Loop & Refine: The last frame is extracted, color-corrected, and fed back to Ollama to generate the next prompt. This loop repeats for your set number of iterations (a pseudocode sketch follows the steps).

    5. Final Assembly: All individual clips are combined into a single, smooth, long-form video file.
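
    To make the control flow concrete, here is a hedged pseudocode sketch of that loop in shell form. Every helper below (next_prompt_from_ollama, generate_clip, extract_last_frame, color_match, assemble_clips) is a hypothetical stand-in for the corresponding ComfyUI node group, not a real command:

    # Illustrative pseudocode only -- the real loop runs inside ComfyUI.
    frame="initial_image.png"
    prompt=$(next_prompt_from_ollama "$frame")            # step 2: vision analysis
    for i in $(seq 1 "$ITERATIONS"); do                   # configurable iteration count
        generate_clip "$prompt" "$frame" "clip_$i.mp4"    # step 3: WAN2.2 5B, ~5 s clip
        extract_last_frame "clip_$i.mp4" "$frame"         # step 4: take the final frame
        color_match "$frame" "initial_image.png"          # harmonize tones between segments
        prompt=$(next_prompt_from_ollama "$frame")        # feed it back for the next prompt
    done
    assemble_clips clip_*.mp4 final_video.mp4             # step 5: combine, then RIFE doubles fps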

    πŸ“¦ What's Included:

    • .json Workflow file for ComfyUI.

    • A detailed breakdown of the node groups and their functions.

    • Recommended settings for optimal results.

    βš™οΈ Recommended Models:

    • Text-Image-to-Video: wan2.2_ti2v_5B_fp16.safetensors

    • LoRA: Wan2_2_5B_FastWanFullAttn_lora_rank_128_bf16.safetensors (for faster generation)

    • VAE: wan2.2_vae.safetensors

    • LLM (for Ollama): A vision-capable model like qwen2.5-vl:7b or llava-1.6 (pull commands are sketched below)
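
    If Ollama is already installed, pulling a vision model is a one-liner. Exact tags depend on the Ollama model library, so treat the ones below as likely candidates and verify them on the library page:

    ollama pull qwen2.5vl:7b   # Qwen2.5-VL 7B; the library tag may omit the hyphen
    ollama pull llava:7b       # LLaVA (the current llava tag is the 1.6 series)
    ollama list                # confirm the model is visible to the ComfyUI-Ollama node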

    🎯 Ideal For:

    • Creating music videos with evolving visuals.

    • Generating long animations and story sequences.

    • Producing dynamic social media content loops.

    • Experimenting with AI-driven storytelling and scene progression.

    Disclaimer: This workflow requires a properly configured ComfyUI environment with the necessary custom nodes (ComfyUI-Easy-Use, Video-Helper-Suite, ComfyUI-Ollama, ComfyUI-Frame-Interpolation) and an Ollama server running with a vision model.
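
    If you install the custom nodes manually instead of through ComfyUI-Manager, cloning them into custom_nodes is usually enough. The repository URLs below are my best guesses for the four nodes named above; verify them before cloning:

    cd ComfyUI/custom_nodes
    git clone https://github.com/yolain/ComfyUI-Easy-Use
    git clone https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
    git clone https://github.com/stavsap/comfyui-ollama
    git clone https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
    # restart ComfyUI afterwards so the new nodes register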


    Comments (9)

    fox23vang226 · Aug 24, 2025 · 1 reaction

    I've tried running a few of these, and the problem with WAN2.2 currently is that after the 2nd loop the last frame starts to deteriorate significantly; after the 4th loop you may get something drastically different and much lower quality than the original image, something that no longer matches the overall composition.

    zardozai (Author) · Aug 24, 2025 · 1 reaction

    You can't reduce the image resolution below 0.85 megapixels; that's exactly what causes the problem you're experiencing. (For reference, the model's native 1280×704 is about 0.9 megapixels.)

    TurboCoomer · Aug 25, 2025

    That's kind of expected from a workflow that uses the last frame, with its degraded quality, and an LLM that writes a completely new prompt for each cycle πŸ˜‚

    zardozai (Author) · Aug 26, 2025

    @TurboCoomer Yes, you can select "keep context" in the Ollama node and edit the prompt directly for a more personalized storyline.

    iPiKo · Aug 25, 2025

    Very nice! Is there any node to use AI context, or to connect it to a studio LLM?

    zardozai (Author) · Aug 26, 2025

    Thanks πŸ™, and to answer your question: not that I know of.

    habibing · Sep 23, 2025

    Your VRAM management is great; thanks for showing examples of where and when to use the unload node in a native KSampler workflow.

    douglasrivitti789 · Sep 25, 2025

    How can I install those "qwen2.5-vl:7b or llava-1.6" models? And is NSFW viable?

    sdktertiaire2 · Oct 5, 2025 · 1 reaction

    Hello,

    You have to start Ollama explicitly if your models aren't stored where the Ollama front-end app expects them (the front-end app is different from the Ollama server). On Linux and Windows you can pull models from a Bash shell.

    On Windows you can also do it via the new front-end app.

    Here is a full Windows script (in Bash) to help with that. Open a Bash terminal, create a file (touch ollama_manager.sh), edit it (nano ollama_manager.sh), and paste:

    #!/usr/bin/env bash
    # ======================================================
    # 🧠 OLLAMA PROCESS MANAGER - Machine5 Pro v4.8
    # Author: GPT-5 (custom for Machine5)
    # ------------------------------------------------------
    # βœ… Auto-detect and set OLLAMA_MODELS (E:\modeles_ollama)
    # βœ… LAN mode via OLLAMA_HOST=0.0.0.0:11434
    # βœ… Q to quit logs (non-blocking)
    # βœ… Scan LAN + prompt over LAN ready
    # ======================================================

    USER_NAME=${USERNAME:-$(whoami)}
    OLLAMA_DIR="/c/Users/$USER_NAME/AppData/Local/Programs/Ollama"
    OLLAMA_EXE="$OLLAMA_DIR/ollama.exe"
    PORT=11434
    LOG_DIR="/c/Users/$USER_NAME/.ollama"
    LOG_FILE="$LOG_DIR/ollama_live.log"

    # ------------------------------------------------------
    # 🧩 Auto-detect OLLAMA_MODELS location
    detect_models_dir() {
        local default_dir="/c/Users/$USER_NAME/.ollama/models"
        local e_drive_dir="/e/modeles_ollama"
        if [ -d "$e_drive_dir" ]; then
            echo "$e_drive_dir"
        elif [ -d "$default_dir" ]; then
            echo "$default_dir"
        else
            echo "/c/Users/$USER_NAME/.ollama/models"
        fi
    }

    # ------------------------------------------------------
    header() {
        clear
        echo "======================================="
        echo " 🧠 OLLAMA PROCESS MANAGER v4.8"
        echo "======================================="
        echo "User : $USER_NAME"
        echo "Path : $OLLAMA_EXE"
        echo "Port : $PORT"
        echo "LAN : $(hostname)"
        echo "---------------------------------------"
    }

    detect_ip() {
        local ip
        ip=$(ipconfig | grep "IPv4" | grep -v "127.0.0.1" | awk '{print $NF}' | head -n 1)
        [[ -z "$ip" ]] && ip="Not detected"
        echo "$ip"
    }

    is_running() {
        tasklist | grep -Ei "ollama\.exe" >/dev/null 2>&1
    }

    # ------------------------------------------------------
    start_ollama() {
        if is_running; then
            echo "⚠️ Ollama is already running."
        else
            echo "πŸš€ Starting Ollama server (LAN 0.0.0.0)..."
            mkdir -p "$LOG_DIR"
            taskkill //IM "Ollama App.exe" //F >/dev/null 2>&1
            MODELS_DIR=$(detect_models_dir)
            export OLLAMA_MODELS="$MODELS_DIR"
            export OLLAMA_HOST="0.0.0.0:11434"
            echo "🌐 Environment set:"
            echo " OLLAMA_HOST=$OLLAMA_HOST"
            echo " OLLAMA_MODELS=$OLLAMA_MODELS"
            echo
            nohup "$OLLAMA_EXE" serve >"$LOG_FILE" 2>&1 &
            sleep 3
            if is_running; then
                echo "βœ… Ollama started successfully."
            else
                echo "❌ Failed to start Ollama."
            fi
        fi
        echo
    }

    # ------------------------------------------------------
    stop_ollama() {
        echo "πŸ›‘ Stopping Ollama..."
        taskkill //IM ollama.exe //F >/dev/null 2>&1
        taskkill //IM Ollama.exe //F >/dev/null 2>&1
        taskkill //IM "Ollama App.exe" //F >/dev/null 2>&1
        echo "βœ… Ollama stopped."
        echo
    }

    # ------------------------------------------------------
    check_status() {
        if is_running; then
            echo "βœ… Ollama process is RUNNING"
            if netstat -ano | grep -q "0\.0\.0\.0:$PORT"; then
                echo "🌐 LAN mode: βœ… ACTIVE (IPv4)"
            elif netstat -ano | grep -q "\[::\]:$PORT"; then
                echo "🌐 LAN mode: βœ… ACTIVE (IPv6)"
            else
                echo "πŸ”’ LAN mode: ❌ LOCAL ONLY"
            fi
            echo "πŸ“‚ Model directory: $(detect_models_dir)"
        else
            echo "πŸŸ₯ Ollama process is STOPPED"
        fi
        echo
    }

    # ------------------------------------------------------
    show_logs() {
        [[ ! -f "$LOG_FILE" ]] && echo "ℹ️ No log file found." && return
        echo "πŸ“œ Displaying live logs from: $LOG_FILE"
        echo "🟒 Press [Q] to quit log view."
        echo "---------------------------------------"
        tail -f "$LOG_FILE" &
        TAIL_PID=$!
        while true; do
            read -t 1 -n 1 key
            if [[ "$key" == "q" || "$key" == "Q" ]]; then
                kill $TAIL_PID 2>/dev/null; wait $TAIL_PID 2>/dev/null
                echo; echo "βœ… Log viewer closed."; echo
                break
            fi
        done
    }

    # ------------------------------------------------------
    lan_scan() {
        echo "🌐 Scanning LAN for active Ollama servers..."
        local ip
        ip=$(detect_ip)
        # Bail out before cutting the subnet if no IPv4 address was found
        [[ "$ip" == "Not detected" ]] && echo "❌ Unable to detect subnet." && return
        ip_base=$(echo "$ip" | cut -d'.' -f1-3)
        echo "Subnet detected: ${ip_base}.0/24"
        echo "---------------------------------------"
        for i in $(seq 1 254); do
            host="${ip_base}.${i}"
            (
                data=$(curl -s --max-time 1 "http://${host}:${PORT}/api/tags")
                if [[ -n "$data" ]]; then
                    version=$(curl -s --max-time 1 "http://${host}:${PORT}/api/version" | jq -r '.version' 2>/dev/null)
                    models=$(echo "$data" | jq -r '.models[]?.name' 2>/dev/null | tr '\n' ',' | sed 's/,$//')
                    echo "βœ… Ollama @ ${host}:${PORT} | Ver: ${version:-unknown} | Models: ${models:-none}"
                fi
            ) &
        done
        wait
        echo "---------------------------------------"
        echo "βœ… Scan complete."
        echo
    }

    # ------------------------------------------------------
    test_api() {
        local ip_local ip_lan
        ip_local="localhost"
        ip_lan=$(detect_ip)
        echo "🌐 Testing Ollama API..."
        echo "---------------------------------------"
        for target in "$ip_local" "$ip_lan"; do
            echo "β†’ Testing http://$target:$PORT/api/tags"
            response=$(curl -s --max-time 3 "http://$target:$PORT/api/tags")
            [[ -n "$response" ]] && echo "βœ… $target responded." || echo "⚠️ $target no response."
            echo
        done
    }

    # ------------------------------------------------------
    menu() {
        header
        check_status
        echo "1️⃣ Start Ollama"
        echo "2️⃣ Stop Ollama"
        echo "3️⃣ Show live logs (press Q to quit)"
        echo "4️⃣ Restart Ollama"
        echo "5️⃣ Scan LAN for Ollama servers"
        echo "6️⃣ Test API (local + LAN)"
        echo "0️⃣ Exit"
        echo
        read -p "Choose an option [0-6]: " choice
        case $choice in
            1) start_ollama ;;
            2) stop_ollama ;;
            3) show_logs ;;
            4) stop_ollama; start_ollama ;;
            5) lan_scan ;;
            6) test_api ;;
            0) echo "πŸ‘‹ Exiting..."; exit 0 ;;
            *) echo "❌ Invalid choice."; sleep 1 ;;
        esac
        read -p "Press Enter to return to menu..."
        menu
    }

    menu
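
    Save the file, then make it executable and launch it from the same Bash terminal:

    chmod +x ollama_manager.sh
    ./ollama_manager.sh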




    Done that way, you should be able to select the model you downloaded in the ComfyUI front end. Really simple.


    I think that for NSFW you mustn't use the lightning LoRA, which breaks NSFW use.

    SDKtertiaire2


    Workflows
    Wan Video 2.2 TI2V-5B

    Details

    Downloads: 1,803
    Platform: CivitAI
    Platform Status: Available
    Created: 8/24/2025
    Updated: 5/13/2026
    Deleted: -

    Files

    • wan225BUnlimitedLong_v10.zip

    Mirrors

    • CivitAI (1 mirror)