    Qwen Next Scene LoRa - Qwen Next Scene

    🎨 Complete Guide: Next Scene Generation from Images in ComfyUI

    This workflow enables the creation of sequential scenes (next scene generation) based on an input image while preserving composition, style, and atmosphere. Perfect for creating comics, storyboards, book illustrations, and visual storytelling. Developed on the Qwen Image Edit foundation using specialized LoRA adapters for ultra-fast generation in just 4 steps!

    📦 Required Components Before Starting

    1. Required Models (mandatory!)

    Download and place in the specified folders:

    📂 ComfyUI/
    └── 📂 models/
        ├── 📂 diffusion_models/
        │   └── qwen_image_edit_2509_fp8_e4m3fn.safetensors
        ├── 📂 loras/
        │   ├── Qwen-Image-Lightning-4steps-V1.0.safetensors
        │   └── next-scene_lora_v1-3000.safetensors
        ├── 📂 vae/
        │   └── qwen_image_vae.safetensors
        └── 📂 text_encoders/
            └── qwen_2.5_vl_7b_fp8_scaled.safetensors

    Download Links:

    2. Hardware Requirements

    • Minimum: GPU with 8 GB VRAM

    • Recommended: GPU with 12+ GB VRAM (RTX 3080/4080 or better)

    • Disk space: ~3.5 GB for all models

    • RAM: 16+ GB


    🛠️ Step-by-Step Setup and Usage Instructions

    Step 1: Import workflow into ComfyUI

    1. Save the JSON code to a file named next_scene_generation.json

    2. Open ComfyUI in your browser

    3. Right-click on empty space → Load → select the saved file

    4. The workflow will automatically load with correct settings

    Step 2: Upload Source Image

    1. Find the group "Step 2 - Upload image for editing" (left side of interface)

    2. Click on the LoadImage node (pink block)

    3. Press the 📁 Choose an image to upload button

    4. Select an image from your computer

      • Supported formats: PNG, JPG, JPEG, WEBP

      • Recommended sizes: from 512×512 to 1280×720 pixels

      • Tip: The higher the quality of the source image, the better the result
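    If you're unsure whether a source image fits the recommended range, a minimal Pillow sketch like the one below can check and downscale it before uploading (the file names are illustrative; the 1280×720 ceiling comes from the recommendation above):

      from PIL import Image

      MAX_W, MAX_H = 1280, 720  # upper bound of the recommended range

      img = Image.open("input.png")  # illustrative file name
      w, h = img.size
      if w > MAX_W or h > MAX_H:
          # Downscale proportionally so the image fits within the recommended range
          img.thumbnail((MAX_W, MAX_H), Image.LANCZOS)
          img.save("input_resized.png")
          print(f"Resized from {w}x{h} to {img.size[0]}x{img.size[1]}")
      else:
          print(f"{w}x{h} is already within the recommended range")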

    Step 3: Load Models

    1. Verify all models are correctly installed in ComfyUI folders

    2. In the "Step1 - Load models" group, ensure:

      • In the UNETLoader node, model qwen_image_edit_2509_fp8_e4m3fn.safetensors is selected

      • In the CLIPLoader node, select:

        • Model: qwen_2.5_vl_7b_fp8_scaled.safetensors

        • Clip Type: qwen_image

      • In the VAELoader node, model qwen_image_vae.safetensors is selected

    3. If any models aren't loaded, restart ComfyUI after installation
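    As a quick sanity check, a small script along these lines can confirm that every file from the folder tree above is in place (the ComfyUI root path is an assumption; adjust it to your installation):

      from pathlib import Path

      COMFY_ROOT = Path("ComfyUI")  # assumption: default installation folder

      REQUIRED = [
          "models/diffusion_models/qwen_image_edit_2509_fp8_e4m3fn.safetensors",
          "models/loras/Qwen-Image-Lightning-4steps-V1.0.safetensors",
          "models/loras/next-scene_lora_v1-3000.safetensors",
          "models/vae/qwen_image_vae.safetensors",
          "models/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors",
      ]

      for rel in REQUIRED:
          status = "OK     " if (COMFY_ROOT / rel).exists() else "MISSING"
          print(f"{status} {rel}")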

    Step 4: Configure LoRA Adapters (critically important!)

    1. In the first LoraLoaderModelOnly node:

      • Select Qwen-Image-Lightning-4steps-V1.0.safetensors

      • Strength Model: 1.0 (maximum influence)

    2. In the second LoraLoaderModelOnly node:

      • Select next-scene_lora_v1-3000.safetensors

      • Strength Model: 0.8 (recommended value for balance)

    3. Important: Do not change the order of LoRA adapter application! First Lightning, then next-scene.
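    For reference, this is roughly how the chained LoRA nodes appear in a ComfyUI API-format workflow export; the node ids are illustrative placeholders, but the order and strength values match the settings above:

      # Sketch of the two LoraLoaderModelOnly nodes in API-format JSON (as a Python dict).
      lora_chain = {
          "2": {  # first LoRA: Lightning speed-up at full strength
              "class_type": "LoraLoaderModelOnly",
              "inputs": {
                  "lora_name": "Qwen-Image-Lightning-4steps-V1.0.safetensors",
                  "strength_model": 1.0,
                  "model": ["1", 0],  # output of the UNETLoader node (id "1" is illustrative)
              },
          },
          "3": {  # second LoRA: next-scene behaviour at reduced strength
              "class_type": "LoraLoaderModelOnly",
              "inputs": {
                  "lora_name": "next-scene_lora_v1-3000.safetensors",
                  "strength_model": 0.8,
                  "model": ["2", 0],  # chained after the Lightning LoRA
              },
          },
      }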

    Step 5: Craft Your Prompt (key step!)

    1. Find the TextEncodeQwenImageEditPlus node (blue block)

    2. In the prompt input field, enter text starting with "Next Scene:"
      Example of a correct prompt:

      Next Scene: The camera holds a tight close-up on the man's face as he lies in bed, eyes open. His head rests near the lower right corner of the frame, with the pillow sketched in soft, curved lines. The rest of the canvas is empty, emphasizing the quiet of morning.


      Rules for effective prompts:

      • Always start with "Next Scene:" (mandatory requirement for LoRA)

      • Specify camera direction: "close-up", "wide shot", "medium shot", "low angle"

      • Describe lighting: "morning light", "dramatic shadows", "soft golden hour glow"

      • Indicate atmospheric changes: "light morning mist", "gentle breeze moving the curtains"

      • Maintain continuity: mention elements from the source image

      • Use specific details: character poses, object placement, emotions

    3. For negative prompt (in the CLIPTextEncode node):

      • Leave empty for basic version

      • Or add: deformed, blurry, low quality, distorted perspective, extra limbs

      • Recommended CFG value: 1.0 (very low due to LoRA specifics)
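    If you plan to generate many scenes, a tiny hypothetical helper like the one below keeps prompts consistent with the rules above (the function and its arguments are illustrative, not part of the workflow):

      def build_next_scene_prompt(camera: str, lighting: str, details: str) -> str:
          """Compose a prompt with the mandatory 'Next Scene:' prefix."""
          return f"Next Scene: {camera}, {lighting}. {details}"

      prompt = build_next_scene_prompt(
          camera="The camera holds a tight close-up on the man's face as he lies in bed",
          lighting="lit by quiet morning light",
          details="His head rests near the lower right corner of the frame, "
                  "the pillow sketched in soft, curved lines.",
      )
      print(prompt)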

    Step 6: Verify Image Dimensions

    1. The GetImageSize node will automatically determine dimensions of your source image

    2. The EmptySD3LatentImage node creates latent space of the same size

    3. Important: If you want to change the aspect ratio, disconnect the input links on the EmptySD3LatentImage node and set the width/height values manually

    Step 7: Configure Generation Parameters

    1. Find the KSampler node (orange block)

    2. Check key parameters:

      • Steps: 4 (do not change; this is the optimal value for this LoRA setup)

      • CFG: 1.0 (very low, but critical for quality)

      • Sampler: euler

      • Scheduler: simple

      • Noise Seed: randomize (or specify a number for reproducibility)

    3. Warning from workflow: Do not arbitrarily increase Steps and CFG values! This will disrupt LoRA adapter functionality.
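    For readers who script ComfyUI, the same settings would look roughly like this in an API-format KSampler node (the upstream node ids are illustrative placeholders):

      import random

      ksampler = {
          "class_type": "KSampler",
          "inputs": {
              "steps": 4,              # do not raise: tuned for the Lightning LoRA
              "cfg": 1.0,              # keep very low, as the workflow warns
              "sampler_name": "euler",
              "scheduler": "simple",
              "seed": random.randint(0, 2**32 - 1),  # fix a number for reproducibility
              "denoise": 1.0,
              "model": ["3", 0],         # illustrative upstream node ids
              "positive": ["4", 0],
              "negative": ["5", 0],
              "latent_image": ["6", 0],
          },
      }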

    Step 8: Start Generation

    1. Click the QUEUE PROMPT button (top right corner of interface)

    2. Monitor progress in the ComfyUI console:

      • First, the models load (~10-15 seconds)

      • Then generation runs (typically 5-15 seconds on an RTX 3090/4090)

    3. The finished image appears in the SaveImage node on the right side

    4. The result is automatically saved to the ComfyUI/output/ComfyUI/ folder
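    The same generation can also be queued without the browser by POSTing an API-format export of the workflow to the local ComfyUI server. The sketch below assumes the default address 127.0.0.1:8188 and a file exported through ComfyUI's "Save (API Format)" option (the file name is illustrative):

      import json
      import urllib.request

      with open("next_scene_generation_api.json", "r", encoding="utf-8") as f:
          workflow = json.load(f)

      req = urllib.request.Request(
          "http://127.0.0.1:8188/prompt",
          data=json.dumps({"prompt": workflow}).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(resp.read().decode())  # the server replies with the queued prompt id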


    🎯 Tips for Achieving Best Quality

    1. Prompting Rules for Next Scenes

    For maximum continuity between frames:

    • Start with camera description:
      "Next Scene: The camera slowly pulls back to reveal..."
      "Next Scene: Switching to a low angle shot, we see..."

    • Preserve key elements:
      "Next Scene: Maintaining the same warm sunset lighting, the character now stands near a window..."

    • Specify character movement:
      "Next Scene: The woman has turned her head slightly to the left, her expression now showing concern..."

    • Avoid radical changes: don't completely change setting, lighting style, or angle unnecessarily

    2. Hardware Optimization

    • For weak GPUs (8 GB VRAM):

      • Reduce source image resolution to 768×512

      • Disable preview in the SaveImage node

    • For powerful GPUs (24+ GB VRAM):

      • Can increase resolution to 1280×720

      • Use batch size=2 for parallel generation of variants

    3. Common Problems and Solutions

    • Problem: Blurry or fuzzy results
      Solution: Ensure you're using the correct VAE (qwen_image_vae.safetensors) and check the source image quality

    • Problem: No connection between source image and result
      Solution: Strengthen the description in the prompt by adding specific details from the source frame

    • Problem: Artifacts or distortions
      Solution: Reduce the strength of the second LoRA to 0.6-0.7

    • Problem: "CUDA out of memory"
      Solution: Reduce image resolution or restart ComfyUI to clear memory

    4. Advanced Techniques

    • Creating mini-animations (3-5 frames; see the loop sketch after this list):

      1. Generate first next scene

      2. Save the result and upload it as the new source image

      3. Modify the prompt, adding a movement description: "Next Scene: Continuing the movement, the character now..."

    • Style blending:

      • Add style instructions to the prompt: in the style of impressionist painting, with cinematic bokeh

      • Experiment with second LoRA strength (0.5-1.0) to control style influence

    • Creating panoramic scenes:

      • Use prompt: "Next Scene: The camera pans horizontally to reveal..."

      • Post-process generated image through panorama stitching tools
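    A hypothetical outline of the mini-animation loop from the first bullet above might look like this; generate_frame() is a placeholder for your own queue-and-wait call to ComfyUI and is not part of the workflow:

      def generate_frame(source_image: str, prompt: str, index: int) -> str:
          # Placeholder: replace with your own ComfyUI queueing/waiting code.
          out = f"frame_{index:03d}.png"
          print(f"would generate {out} from {source_image} with: {prompt}")
          return out

      source = "frame_000.png"  # the first generated next scene
      motions = [
          "Next Scene: Continuing the movement, the character turns toward the window.",
          "Next Scene: Continuing the movement, he reaches for the curtain.",
          "Next Scene: Continuing the movement, morning light floods the room.",
      ]

      for i, prompt in enumerate(motions, start=1):
          source = generate_frame(source, prompt, i)  # each result becomes the next source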

    🌟 Examples of Effective Prompts

    For angle change:

      Next Scene: The camera pulls back to a medium shot, revealing the character standing on a cliff edge overlooking a misty valley at dawn. The morning sun casts long shadows behind him, highlighting the texture of his coat. The composition maintains the same left-to-right balance as the previous scene.

    For continuing action:

      Next Scene: Following the character's movement, we now see him stepping through an ancient stone archway into a sunlit garden. The camera angle remains consistent, but the lighting shifts to warm afternoon tones. Butterflies flutter around blooming flowers in the foreground, maintaining the soft dreamlike atmosphere of the previous frame.

    For time of day change:

      Next Scene: The scene transitions to night time. The same character sits by a window, but now illuminated only by candlelight and moonbeams. Outside, stars are visible in the clear night sky. The camera maintains the same close-up framing, but the color palette shifts to deep blues and warm yellows, preserving the emotional tone of quiet contemplation.

    🚀 Conclusion and Best Practices

    This workflow represents a revolutionary approach to visual storytelling creation. Here are the key principles for success:

    1. Source image quality is critical – use clear, well-lit photos

    2. Prompting is an art – practice writing detailed, continuous descriptions

    3. Respect the technical limitations – 4 steps and CFG=1.0 aren't arbitrary; they're the optimum for the LoRA adapters

    4. Experiment with lighting and angles, not radical content changes

    5. Save intermediate results for creating sequential stories

    Pro tip: For creating full comics or storyboards:

    • Generate 3-5 sequential frames

    • Export them to software like Photoshop or Canva

    • Add speech bubbles, transitions, and color correction

    • Save as PDF or PNG sequence for animation
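    For the final PDF step, Pillow can stitch the saved frames into a single document (the frame file names below are illustrative):

      from PIL import Image

      # Collect the generated frames in story order
      frames = [Image.open(f"frame_{i:03d}.png").convert("RGB") for i in range(1, 6)]

      # Save the first frame as a PDF and append the rest as extra pages
      frames[0].save("storyboard.pdf", save_all=True, append_images=frames[1:])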


    🎉 Congratulations! You now possess a powerful tool for generating visual stories. Start with simple scenes and gradually increase complexity. Remember: best results come from deep understanding of workflow capabilities and meticulous prompt crafting.

    P.S. Don't forget to experiment! Try uploading screenshots from favorite movies and imagine what the next frame of this story would look like. Inspiration often comes during the creative process! ✨🎬


    Type: Workflows
    Base model: Qwen

    Details

    Downloads: 43
    Platform: CivitAI
    Platform Status: Available
    Created: 1/7/2026
    Updated: 1/9/2026

    Files

    qwenNextSceneLora_qwenNextScene.zip
