
    ✨One-click Pod available on:✨

    🟡 VastAI ComfyUI 0.15.0 CUDA 13.0 for 5090

    🟡 VastAI ComfyUI 0.15.0 CUDA 13.0 for 4090

    🟣 Runpod ComfyUI 0.15.1 CUDA 12.8 for 5090

    🟣 Runpod ComfyUI 0.15.1 CUDA 12.4 for 4090

    Just click the link, choose a video card, and the template will install everything you need plus all my workflows.

    Wan 2.2 models are not included; you can install them using Civicomfy or ComfyUI-HuggingFace directly inside ComfyUI.

    IMPORTANT:

    If you install the RES4LYF node, it will break the MoEKSampler; to use RES4LYF, you have to use the KSampler included in that node.

    🔥 02/21/26 UPDATE 🔥

    • Added LoRA Trigger Words Text Box

    • Added Keep Last Prompt feature to keep the last generated prompt

    • Enhanced NSFW Support

    • NEW WF FOR I2V AND T2V STORY GENERATION

    ✨ NEW NODES ✨

    EASY MODEL DOWNLOAD FROM HUGGINGFACE

    cd ComfyUI/custom_nodes
    git clone https://github.com/huchukato/ComfyUI-HuggingFace.git

    RIFE INTERPOLATION WITH TENSORRT

    with Auto Install (CUDA 12/CUDA 13) and Auto Model Download

    cd ComfyUI/custom_nodes
    git clone https://github.com/huchukato/ComfyUI-RIFE-TensorRT-Auto.git

    ✨MY QWEN3-VL NODE IS FINALLY IN THE MANAGER✨

    Just search for "QwenVL-Mod" to install it,

    or download it from GitHub: QwenVL-Mod: Enhanced Vision-Language

    ComfyUI-QwenVL-Mod - Enhanced Vision-Language with WAN 2.2

    Version 2.2.2 (2026/02/19) - 🚀 Critical T2V/I2V Fixes + ComfyUI Optimizations


    🌟 What is ComfyUI-QwenVL-Mod?

    A powerful enhanced vision-language node for ComfyUI that combines Qwen3-VL models with professional WAN 2.2 video generation workflows. Features multilingual support, visual style detection, and NSFW capabilities for professional AI content creation.

    Think: "Your all-in-one solution for intelligent prompt enhancement and video generation with cutting-edge AI models!"


    🎬 Key Features

    🚀 WAN 2.2 Video Generation

    • Text-to-Video (T2V): Professional 5-second video generation

    • Image-to-Video (I2V): Advanced image animation with style detection

    • Story Generation: 20-second continuous videos with 4 narrative segments

    • Storyboard Workflows: Seamless storyboard-to-storyboard generation

    • Cinematic Video: Professional cinematography specifications

    ๐ŸŒ Enhanced Capabilities

    • Multilingual Support: Process prompts from any language (Italian, English, etc.)

    • Visual Style Detection: 12+ artistic styles (anime, 3D, pixel art, puppet animation, etc.)

    • Smart Prompt Caching: Performance optimization with Fixed Seed Mode

    • GGUF Backend: Efficient local model inference with quantization support

    • NSFW Support: Comprehensive content generation without restrictions

    🧠 Intelligent Features

    • Auto-Prompt Enhancement: Automatically enhance user prompts for optimal generation

    • Professional Cinematography: Built-in specifications for lighting, camera angles, shot types

    • Timeline Structure: Precise 5-second timeline with frame-by-frame descriptions

    • Keep Last Prompt: Generate once, preserve results while changing inputs
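    The "Keep Last Prompt" / fixed-seed caching idea can be sketched as a small memo table keyed on the prompt and seed. This is an illustrative sketch with hypothetical names, not the node's actual internals:

```python
# Illustrative sketch of fixed-seed prompt caching
# (hypothetical names, not the real ComfyUI-QwenVL-Mod code).
_prompt_cache = {}

def enhance_prompt(prompt, seed, enhance_fn):
    """Re-run the enhancer only when the (prompt, seed) pair changes."""
    key = (prompt, seed)
    if key not in _prompt_cache:
        _prompt_cache[key] = enhance_fn(prompt)  # the expensive LLM call
    return _prompt_cache[key]
```

    With a fixed seed and an unchanged prompt, the enhancer runs once and later calls reuse the cached result, so other inputs can change without regenerating the prompt.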


    🎯 What's New in v2.2.2

    🚀 Critical T2V/I2V Workflow Fixes

    • Batch Processing: Fixed critical T2V → GGUF issue with batch images

    • Frame Detection: Added automatic batch detection and individual frame processing

    • Video Support: Enhanced video frame processing with proper shape handling

    • Debug Enhanced: Comprehensive logging for batch processing troubleshooting
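    The batch-detection fix above amounts to distinguishing a single image from a batch of frames and processing frames individually. A hedged sketch, assuming ComfyUI's usual (B, H, W, C) array convention:

```python
import numpy as np

def iter_frames(images):
    """Yield individual (H, W, C) frames from either a single image
    or a (B, H, W, C) batch/video array; reject unexpected shapes."""
    if images.ndim == 3:        # single image
        yield images
    elif images.ndim == 4:      # batch of frames / video
        for frame in images:
            yield frame
    else:
        raise ValueError(f"unexpected image shape: {images.shape}")
```

    Iterating per frame is what keeps downstream nodes from receiving a 4-D array where a 3-D image is expected.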

    🔄 Same Model Reuse Fix

    • Conflict Resolution: Fixed crash when using same model between T2V and I2V nodes

    • Memory Management: Enhanced cleanup with CUDA synchronization and timing

    • Signature Mismatch: Resolved different signature patterns between nodes

    • Aggressive Cleanup: Forced complete VRAM cleanup before model reload

    🔧 keep_model_loaded Enhancement

    • Missing Parameter: Added keep_model_loaded to PromptEnhancer node

    • Consistent Behavior: Both GGUF and PromptEnhancer now have identical memory management

    • Conditional Cleanup: Proper cleanup based on keep_model_loaded setting

    • User Control: Full control over memory usage vs performance
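    The keep_model_loaded behavior described above boils down to a conditional cleanup after inference. A minimal sketch with hypothetical function names, not the node's real code:

```python
def run_inference(model, inputs, keep_model_loaded=False, unload_fn=None):
    """Run the model, then release it unless the user opted to keep it in VRAM."""
    result = model(inputs)
    if not keep_model_loaded and unload_fn is not None:
        unload_fn()  # in the real node: free weights + CUDA cache cleanup
    return result
```

    This is the "memory usage vs performance" trade-off: keeping the model loaded avoids a reload on the next run at the cost of VRAM.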


    🎬 WAN 2.2 Story Workflow - Revolutionary AI Storytelling

    📖 AI Story Generation

    • 4-Segment Videos: Automatic 20-second videos (4 × 5-second segments)

    • Narrative Continuity: Perfect story flow between segments

    • NSFW Support: Enhanced adult content generation

    • Timeline-Free: Natural storytelling without time markers

    🔄 Smart Auto-Split

    • Story Split Node: Intelligent prompt separation technology

    • Auto-Detection: Handles any separator format automatically

    • 4-Output Guarantee: Always produces exactly 4 prompts

    • Debug Mode: Built-in troubleshooting information
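    The "4-output guarantee" can be sketched as a splitter that tolerates loose separator formats, then merges overflow and pads shortfall until exactly four prompts remain. This is an illustrative reconstruction, not the Story Split node's actual code:

```python
import re

def split_story(text, n=4):
    """Split a story prompt into exactly n segments.
    Splits on blank lines or '---' lines, merges extra segments into
    the last slot, and pads by repeating the final segment."""
    parts = [p.strip()
             for p in re.split(r"\n\s*\n|^-{3,}\s*$", text, flags=re.M)
             if p.strip()]
    while len(parts) > n:                 # fold overflow into the last slot
        parts[n - 1] = parts[n - 1] + " " + parts.pop()
    while len(parts) < n:                 # pad shortfall
        parts.append(parts[-1] if parts else "")
    return parts
```

    Whatever the input looks like, downstream nodes always receive exactly four prompts, which is what the workflow's four video segments require.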


    📦 Installation

    Requirements

    • ComfyUI: v0.13.0+

    • GPU: 8GB+ VRAM (16GB+ recommended)

    • System: Windows/Linux/Mac

    • Python: 3.10+ (or use provided Docker environment)

    Quick Install

    1. Download ComfyUI-QwenVL-Mod v2.2.2

    2. Extract to ComfyUI/custom_nodes/ComfyUI-QwenVL-Mod

    3. Restart ComfyUI

    4. Load included workflows

    Docker/Cloud Ready

    • RunPod: Pre-configured templates available

    • VastAI: Optimized instances ready

    • Local: Docker support included


    🎮 Usage Examples

    Basic Text-to-Video

    1. Load WAN2.2-I2V-AutoPrompt.json

    2. Input your text prompt

    3. Select model (HF or GGUF)

    4. Generate enhanced video

    Image-to-Video with Style

    1. Load WAN2.2-I2V-AutoPrompt.json

    2. Upload your image

    3. Enable style detection

    4. Generate animated video

    AI Story Generation

    1. Load WAN2.2-I2V-AutoPrompt-Story.json

    2. Input your story idea

    3. Auto-split into 4 segments

    4. Generate 20-second story video


    🔧 Technical Specifications

    ⚡ Performance

    • Context: 65,536 tokens (8B models)

    • Memory: Optimized VRAM usage

    • Stability: Crash-free operation

    • Speed: Fast generation times

    🎨 Model Support

    • Qwen3-VL 4B: 7 GGUF variants (2.38GB-4.28GB)

    • Qwen3-VL 8B: 7 GGUF variants (4.8GB-8.71GB)

    • HF Models: Josiefed and official variants

    • Quantization: Q4_K_S, Q5_K_S for VRAM efficiency

    ๐ŸŒ Multilingual Capabilities

    • Input Languages: Any language supported

    • Auto-Translation: Automatic translation to optimized English

    • Style Detection: Works with multilingual prompts

    • Cultural Adaptation: Context-aware prompt enhancement


    🎯 Included Workflows

    🍿 WAN 2.2 Presets

    • Wan 2.2 I2V: Image-to-video with timeline structure

    • Wan 2.2 T2V: Text-to-video with professional specs

    • Wan Extended Storyboard: Multi-segment continuity

    • Wan Cinematic Video: Single scene with cinematography

    🔥 Advanced Features

    • NSFW Enhancement: Uncensored content generation

    • Professional Lighting: 8 light types + 9 qualities

    • Camera Control: 6 shot types + 5 compositions

    • Color Grading: 4 tone options


    🎨 Visual Style Detection

    Automatically detects and enhances:

    • Anime style - Japanese animation aesthetics

    • 3D cartoon - Computer-generated animation

    • Pixel art - Retro gaming graphics

    • Puppet animation - Stop-motion style

    • 3D game style - Video game graphics

    • Claymation - Clay animation

    • Watercolor - Painting style

    • Black and white animation - Monochrome

    • Oil painting style - Classical art

    • Felt style - Textile art

    • Tilt-shift photography - Miniature effect

    • Time-lapse photography - Speed photography


    🔥 NSFW Content Support

    Enhanced Generation

    • Explicit Content: Uncensored adult descriptions

    • Detailed Scenes: 8-12 sentences per segment

    • Natural Progression: Smooth story flow

    • Style Adaptation: Automatic visual style matching

    • Quality: Consistent characters & scenes

    Professional Applications

    • Adult Content: Industry-standard generation

    • Artistic Nudity: Classical art styles

    • Educational: Anatomy and artistic study

    • Creative: Artistic expression


    🚀 Why Choose ComfyUI-QwenVL-Mod?

    🎬 For Content Creators

    • Storytelling: Create compelling narratives

    • Efficiency: One prompt → complete video

    • Quality: Professional video output

    • Flexibility: Any genre, any style

    🔥 For NSFW Content

    • Explicit: Uncensored generation

    • Detailed: Rich scene descriptions

    • Continuous: Smooth story flow

    • Natural: Realistic progression

    ⚡ For Power Users

    • Customizable: Easy to modify

    • Extendable: Add more segments

    • Integrable: Works with existing setups

    • Optimized: Maximum performance


    📥 Download & Support

    Get Started

    1. Download: ComfyUI-QwenVL-Mod v2.2.2

    2. Install: Follow standard installation

    3. Load: Included workflows

    4. Create: Your first AI-enhanced video!

    Community & Support

    • GitHub: Repository

    • Issues: Report bugs and request features

    • Discord: Community support (coming soon)

    • Documentation: Complete guides and tutorials


    🌟 What Makes This Special?

    • First: Complete AI story system with vision enhancement

    • Smart: Intelligent prompt splitting and enhancement

    • Complete: End-to-end solution from text to video

    • Optimized: Performance-tuned for professional use

    • Ready: Works out-of-the-box with included workflows


    🎬 Create Amazing AI Videos Today!

    Transform your ideas into stunning videos with the power of Qwen3-VL vision enhancement and WAN 2.2 video generation.

    Perfect for creators, artists, and professionals looking for the ultimate AI video enhancement tool! 🌟


    Built with ❤️ for the ComfyUI community

    🔖 README

    The new version is out today, with a brand new anime style ✨

    WORKFLOWS TESTED ON:

    • ComfyUI 0.15.0

    • Python 3.12.12

    • Pytorch 2.9.1 + CUDA 13.0

    🟣 Credits

    These workflows are intended to be used with the models by taek75799, as they follow the Dynamic Prompts structure you can find under these models:

    WAN 2.2 Enhanced NSFW | SVI | camera prompt adherence (Lightning Edition) I2V and T2V fp8 GGUF

    🟣 Other tested models:

    Thanks to all the users who are commenting and helping me improve the workflows ❤️

    🔖 FULL T2V AUTOPROMPT GGUF

    🛑 Experimental WF 🛑

    Start with a T2V prompt and extend the generated video with I2V

    This workflow requires both T2V and I2V Wan 2.2 Models

    🔖 SVI I2V AUTOPROMPT 1.2

    Thanks to taek75799 for his models ❤️

    SVI LoRAs:

    LIGHTX2V LoRAs are not included in the model

    🔖 FULL I2V AUTOPROMPT 1.7

    Complete workflow that includes:

    • Long Video Generation [from 5 to 20 seconds]

    • Auto Prompting [Qwen3-VL]

    • Upscale [2xLexicaRRDBNet and TensorRT]

    • Frame Interpolation [30fps and 60fps for img2vid | 24fps and 50fps for MMAudio]

    • MMAudio [NSFW Unlocked]
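    For reference, the frame-interpolation arithmetic above follows the common convention that a ×N interpolator such as RIFE inserts N−1 synthetic frames between each consecutive pair (so ×2 doubles the frame rate while keeping duration constant; the workflow's exact fps targets may additionally involve resampling):

```python
def interpolate_counts(frames, fps, multiplier):
    """Frame count and fps after xN interpolation:
    (multiplier - 1) new frames go between each consecutive pair."""
    out_frames = frames + (frames - 1) * (multiplier - 1)
    return out_frames, fps * multiplier
```

    For example, ×2 interpolation of an 81-frame, 30 fps clip yields 161 frames at 60 fps.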

    ๐Ÿ“ AUTO PROMPT

    • Prompt Description Box [Multilanguage ITA ENG]: Just write your idea and the LLM will do the rest, formatting the prompt in the dynamic prompt format used in the Wan 2.2 models

    • Final Prompt Preview: shows the final prompt

    🔶 QWEN3-VL NODE FOR GGUF MODELS 🔶

    🔖 To use the Qwen3-VL GGUF quantized models you have to install llama-cpp-python

    If you are not comfortable with installing llama-cpp-python manually, just go with the normal version inside Full-I2V-LongVideo

    1. 🛑 STOP COMFYUI

    2. 📂 Activate the ComfyUI virtual environment

    In your ComfyUI root installation folder type:

    on Windows:

    Command Prompt:

    venv\Scripts\activate.bat

    or PowerShell:

    venv\Scripts\Activate.ps1

    on Linux:

    source venv/bin/activate

    If you use ComfyUI Desktop:

    Click on Console and then on Terminal

    3. ⬇️ Install llama-cpp-python

    pip install --upgrade --force-reinstall --no-cache-dir "llama-cpp-python @ git+https://github.com/JamePeng/llama-cpp-python.git"

    4. 🔄 Restart ComfyUI and enjoy

    ๐Ÿ“ SWITCH CLIP

    By default the WF uses the GGUF node to load quantized Clip, if you wanna switch to the NSFW Clip model, you have to bypass the GGUF node and connect the other Clip loader to the "Set_CLIP"

    ACCELERATION:

    Triton is disabled by default, you can enable it by opening the first Subgraph

    Inside the workflow you will find all the links to download the models you will need

    That's all, hope you enjoy ^^

    Description

    โฌ†๏ธ Updated on 02/19/26

    ๐ŸŽฌ WAN 2.2 T2V AutoPrompt Story Workflow - Release Notes

    ๐Ÿ“‹ Overview

    Introducing the WAN 2.2 T2V AutoPrompt Story workflow - a complete text-to-video story generation system that creates continuous 20-second videos with intelligent narrative progression. This workflow complements the existing I2V system and enables pure text-driven story creation.

    🎯 Key Features

    📝 Text-to-Video Story Generation

    • Pure Text Input: Generate complete video stories from text prompts only

    • 4-Segment Structure: Automatic creation of 4 continuous 5-second video segments

    • Narrative Intelligence: Smart prompt progression ensures story coherence

    • 20-Second Output: Complete story videos with seamless transitions

    🧠 Advanced AI Integration

    • QwenVL-Powered: Utilizes advanced vision-language models for prompt generation

    • WAN 2.2 Compatible: Optimized for WAN 2.2 T2V video generation

    • Context-Aware: Maintains story continuity across all segments

    • Style Detection: Automatic visual style adaptation (anime, 3D, photorealistic, etc.)

    🔄 Workflow Automation

    • One-Click Generation: Simple text input → complete story video

    • Auto-Split Technology: Intelligent prompt separation for 4 segments

    • Parameter Optimization: Pre-configured settings for best quality

    • Error Handling: Robust fallback mechanisms for reliable operation

    🎨 Use Cases

    📖 Story Creation

    • Creative Writing: Transform story ideas into visual narratives

    • Content Creation: Generate engaging video content from scripts

    • Concept Visualization: See your stories come to life instantly

    • Narrative Prototyping: Quick story testing and iteration

    🎬 Production Pipeline

    • Pre-visualization: Storyboard creation with actual video output

    • Content Planning: Visual planning for video projects

    • Creative Exploration: Experiment with different story directions

    • Rapid Prototyping: Fast concept development

    ⚡ Performance Features

    🚀 Optimized Settings

    • Token Efficiency: Balanced token usage for quality vs speed

    • Memory Management: Optimized for various hardware configurations

    • Context Optimization: Smart context length management

    • Model Compatibility: Works with multiple QwenVL model variants

    🎛️ Customization Options

    • Style Selection: Choose from multiple visual styles

    • Length Control: Adjustable segment duration and total length

    • Quality Settings: Balance between speed and visual quality

    • Model Selection: Support for different model sizes and types

    🔧 Technical Specifications

    📊 Input/Output

    • Input: Text prompt (any language)

    • Output: 4 × 5-second video segments (20s total)

    • Format: WAN 2.2 compatible video files

    • Resolution: Configurable based on model capabilities

    🎯 Model Requirements

    • Primary: QwenVL models (4B, 8B recommended)

    • Backend: GGUF or HuggingFace support

    • Memory: Minimum 8GB VRAM recommended

    • Storage: ~2GB for models + workflow files

    📦 Package Contents

    📁 Workflow Files

    • WAN2.2-T2V-AutoPrompt-Story.json: Complete T2V story workflow

    • Integration Ready: Direct import into ComfyUI

    • Pre-configured: All nodes and connections set up

    • Documentation: Built-in node descriptions and tooltips

    🔧 Dependencies

    • ComfyUI-QwenVL-Mod: Custom node integration

    • WAN 2.2 Nodes: Video generation backend

    • Story Split Node: Intelligent prompt processing

    • Utility Nodes: Text processing and formatting

    🚀 Installation & Setup

    📥 Quick Start

    1. Install ComfyUI-QwenVL-Mod custom node

    2. Download WAN 2.2 T2V models

    3. Import the workflow into ComfyUI

    4. Load your preferred QwenVL model

    5. Input your story text and generate

    ⚙️ Configuration

    • Model Selection: Choose appropriate QwenVL model

    • Quality Settings: Adjust based on hardware capabilities

    • Style Preferences: Configure visual style options

    • Output Settings: Set resolution and format preferences

    🎯 Best Practices

    📝 Prompt Writing

    • Clear Descriptions: Detailed scene and character descriptions

    • Progressive Narrative: Clear story progression across segments

    • Visual Details: Include specific visual elements and styles

    • Emotional Context: Add mood and atmosphere descriptions

    ⚡ Performance Tips

    • Model Selection: Use 4B models for faster generation

    • Context Management: Adjust context length for memory efficiency

    • Batch Processing: Generate multiple stories in sequence

    • Quality Balance: Find optimal settings for your hardware

    🔮 Future Updates

    🎬 Planned Features

    • Extended Length: Support for longer stories (8+ segments)

    • Style Templates: Pre-configured visual style presets

    • Batch Generation: Multiple story generation in parallel

    • Advanced Controls: Fine-grained parameter control

    🔄 Integration Plans

    • Cloud Support: Direct cloud deployment options

    • API Integration: Programmatic access to workflow

    • Mobile Support: Optimized for mobile deployment

    • Collaboration: Multi-user workflow sharing

    📞 Support & Community

    💬 Getting Help

    • Documentation: Complete setup and usage guides

    • Community Forum: User discussions and sharing

    • Bug Reports: Issue tracking and resolution

    • Feature Requests: Community-driven development

    🎨 Community Showcase

    • Gallery: User-generated story examples

    • Tutorials: Step-by-step workflow guides

    • Tips & Tricks: Advanced usage techniques

    • Collaboration: Community project opportunities

    🎉 Summary

    The WAN 2.2 T2V AutoPrompt Story workflow represents a breakthrough in automated video storytelling, combining advanced AI technology with intuitive workflow design. Whether you're a content creator, storyteller, or video producer, this workflow provides the tools you need to transform your ideas into compelling visual narratives.

    Key Benefits:

    • 🎬 Complete Stories: 20-second narratives with 4 segments

    • 🧠 AI-Powered: Advanced prompt generation and progression

    • 🎨 Visual Richness: Multiple style options and high-quality output

    • ⚡ Easy to Use: One-click generation from simple text input

    • 🔧 Flexible: Customizable for various needs and hardware

    Transform your stories into stunning videos with the power of AI - try the WAN 2.2 T2V AutoPrompt Story workflow today!

    Part of the ComfyUI-QwenVL-Mod ecosystem - Advanced AI tools for creative video generation.

    Workflows
    Wan Video 2.2 T2V-A14B

    Details

    Downloads
    355
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/15/2026
    Updated
    2/27/2026
    Deleted
    -

    Files

    WAN22NSFWI2VT2VWorkflowsAutoPrompt_oneclickT2VStory.zip

    WAN22NSFWI2VT2VWorkflowsAutoPrompt_OneclickT2VI2VStory.zip
