CivArchive
    【WAN2.2】TXT to VIDEO - v1.1

    WAN2.2 — Text to video — Simple Workflow

    A clean, all-in-one WAN text-to-video workflow built entirely with the UmeAiRT Toolkit for ComfyUI.
    Only 12 nodes. No spaghetti wires. Just load your model, write your prompt, and hit generate.


    ⚠️ IMPORTANT — Nodes 2.0 Required

    This workflow is built for the Nodes 2.0 (Vue) interface of ComfyUI. If you don't enable it, the workflow may have display problems.

    How to activate Nodes 2.0:

    1. Open ComfyUI

    2. Go to Settings (⚙️ icon, bottom-left)

    3. Find "Use Nodes V2 (Vue)" and toggle it ON

    4. Refresh the page

    5. Load the workflow

    If you prefer the classic interface, check out my Legacy version of this workflow instead (link).


    🎯 Features

    • Text-to-Video generation

    • Automatic model download (in the auto version)

    • Built-in SeedVR2 upscaler — high-quality tiled upscaling (toggleable on/off). Slower than a classic upscaler, but significantly better quality

    • Full metadata embedding — your images are saved with all generation parameters, ready for online publishing and remixing

    • 3 LoRA slots — each with an individual on/off toggle and strength control; you can chain additional LoRA modules together to use as many LoRAs as you want
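    To illustrate how the metadata-embedding feature works in general: generation parameters are stored in PNG text chunks that survive saving and reloading. This is a generic sketch using Pillow, not the Toolkit's actual code; the key name "workflow", the JSON payload, and the file name are illustrative assumptions.

```python
from PIL import Image, PngImagePlugin

# Generic sketch of PNG metadata embedding (not the Toolkit's actual code):
# the key name "workflow" and the JSON payload are illustrative assumptions.
meta = PngImagePlugin.PngInfo()
meta.add_text("workflow", '{"nodes": []}')

# Save a tiny placeholder image with the text chunk attached.
Image.new("RGB", (8, 8)).save("demo.png", pnginfo=meta)

# On reload, the embedded parameters are still readable, which is what
# lets sites parse generation settings for publishing and remixing.
reloaded = Image.open("demo.png")
print(reloaded.text["workflow"])
```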


    📦 Custom Node Required

    Only one custom node to install:

    👉 ComfyUI-UmeAiRT-Toolkit

    Install via ComfyUI Manager (search "UmeAiRT") or use the UmeAiRT Auto-Installer.
    The Toolkit packages everything internally — upscaler, face detailer, metadata saver. No other custom nodes needed.


    📂 Files you need (in manual version)

    For the base version
    T2V model: wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors and wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors
    in models/diffusion_models

    For the GGUF version
    T2V quant model: wan2.2_t2v_high_noise_14B_QX.gguf and wan2.2_t2v_low_noise_14B_QX.gguf
    in models/unet

    For the lightx2V version
    lightx2V LoRA: Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors
    in models/loras

    CLIP: umt5_xxl_fp8_e4m3fn_scaled.safetensors
    in models/clip

    VAE: wan_2.1_vae.safetensors
    in models/vae

    Any upscale model
    in models/upscale_models
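    The file list above maps onto ComfyUI's standard models subfolders. A minimal shell sketch of that layout follows; the ComfyUI root path is an assumption (adjust to your install), and models/loras for the lightx2V LoRA follows the usual ComfyUI convention.

```shell
# Sketch of the expected model folder layout for this workflow.
# COMFYUI_ROOT is an assumption; point it at your ComfyUI install.
COMFYUI_ROOT="${COMFYUI_ROOT:-$HOME/ComfyUI}"

mkdir -p "$COMFYUI_ROOT/models/diffusion_models"   # base fp8 T2V models
mkdir -p "$COMFYUI_ROOT/models/unet"               # GGUF quant models
mkdir -p "$COMFYUI_ROOT/models/loras"              # lightx2V LoRA
mkdir -p "$COMFYUI_ROOT/models/clip"               # umt5_xxl text encoder
mkdir -p "$COMFYUI_ROOT/models/vae"                # wan_2.1_vae
mkdir -p "$COMFYUI_ROOT/models/upscale_models"     # any upscale model
```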

    Description

    • New frame rate slider
    • Speed LoRA correction

    FAQ

    Comments (9)

    xuanwoa · Aug 3, 2025

    There's an issue with the T2V workflow (v1.1): the "Frame rate" node's value is not connected to the "calculFrames" node.

    UmeAiRT
    Author
    Aug 3, 2025

    Thanks, maybe a bad copy from the GGUF version; fixed now.

    xuanwoa · Aug 3, 2025

    After my testing, I found that the video quality from Kijai's nodes and models is better (referring to the default workflow in the Kijai repository). However, I really like the features of the workflow you provided. Could you create a version of your workflow using Kijai nodes?

    UmeAiRT
    Author
    Aug 3, 2025

    I would love to use Kijai's nodes, but they are not GGUF-compatible, and I don't have a graphics card powerful enough to use the fp16 model.

    xuanwoa · Aug 3, 2025

    UmeAiRT Thank you anyway, I really like your workflow.

    xuanwoa · Aug 3, 2025 · 1 reaction

    I've replicated a simplified Kijai version of the UmeAiRT workflow.
    https://civitai.com/images/92147686

    CyberAImania · Aug 4, 2025 · 2 reactions

    Something is mixed up in your workflow: TXT to VIDEO (gguf).json is actually a first-frame/last-frame workflow.

    Ayy, I think the wrong file was uploaded, at least for the base GGUF file.

    UmeAiRT
    Author
    Aug 4, 2025 · 2 reactions

    Maybe ComfyUI overwrote my files; I'll change it when I'm back home.

    Workflows
    Wan Video 14B t2v

    Details

    Downloads
    320
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/3/2025
    Updated
    5/16/2026
    Deleted
    -

    Files

    WAN22TXTToVIDEO_v11.zip

    Mirrors

    CivitAI (1 mirror)