Wan 2.7 will become available on the Civitai Generator soon!
Original details from: https://mp.weixin.qq.com/s/Nyow0Ht8J0yyClYTwUCU7w?scene=1&click_id=8
Overview
Wan 2.7 Image is a next-generation text-to-image and instruction-based editing model from Alibaba’s Wan series, designed to address long-standing issues in AI image generation such as repetitive faces, unstable text rendering, and poor control over color and layout.
Rather than just improving visual quality, Wan 2.7 focuses on precision and reliability. It introduces stronger prompt understanding, more stable outputs, and fine-grained control over key elements like identity, typography, and composition.
Key Improvements
Breaks the “Same Face” Problem
A major focus of Wan 2.7 is improving character distinctiveness. Traditional models tend to generate similar-looking faces (“one face syndrome”), especially in stylized or anime outputs.
Wan 2.7 allows much deeper control over facial structure and features, down to details like eye shape and proportions. This makes it possible to create recognizable, varied characters instead of repeating a generic “AI face.”
Accurate Text Rendering (Even Long Content)
One of the biggest upgrades is text generation.
Wan 2.7 supports:
Long-form text (4000+ characters)
Multiple languages (including Chinese, English, Japanese, Korean, and more)
Structured content like tables and formulas
It specifically targets a long-standing issue where generated images contain garbled or nonsensical text, especially in corners or dense layouts. This makes it far more usable for posters, UI designs, and content with real typography.
True Color Control (Not Guesswork)
Wan 2.7 introduces direct color palette control, including:
Exact hex color input (e.g. #2C3E50)
Reference-based color extraction
Control over color proportions and balance
Previously, models would “approximate” colors. Wan 2.7 instead allows brand-accurate outputs, making it viable for design workflows where color consistency matters.
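Wan 2.7's actual API is not shown in this announcement, so as a generic illustration only, here is how a hex code like #2C3E50 maps to the RGB values a model could condition on, along with a hypothetical palette-with-proportions structure of the kind a brand-accurate workflow implies (the palette values and shares below are made up for the example):

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """Parse a CSS-style hex color such as '#2C3E50' into an (R, G, B) tuple."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# A brand palette with target proportions, as a designer might specify it.
# Keys are exact hex inputs; values are the desired share of the image.
palette = {
    "#2C3E50": 0.6,  # dominant dark blue
    "#E74C3C": 0.3,  # accent red
    "#ECF0F1": 0.1,  # light neutral
}
rgb_palette = {hex_to_rgb(c): share for c, share in palette.items()}
# hex_to_rgb("#2C3E50") → (44, 62, 80)
```

The point of exact hex input is that (44, 62, 80) is a single unambiguous target, whereas a prompt like "dark navy blue" leaves the model to guess.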
Advanced Editing Capabilities
Precise Region-Based Editing
Wan 2.7 supports localized editing, where you can target a specific area of an image and modify only that region.
For example:
Adjust the size or position of a feature
Add or remove elements
Change layout without affecting the rest of the image
This avoids the typical “regenerate everything and hope it works” loop.
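The announcement does not describe how region targeting is implemented, but the usual mechanism in mask-based editing is to blend the regenerated pixels back into the original only where a mask is set. A minimal NumPy sketch of that compositing step, assuming a binary mask (1 = editable region):

```python
import numpy as np

def composite_region(original, edited, mask):
    """Blend an edited image into the original, keeping everything
    outside the mask untouched. mask is 2D: 1 = edit region, 0 = keep."""
    m = mask[..., None].astype(float)  # broadcast the mask over RGB channels
    return (edited * m + original * (1 - m)).astype(original.dtype)

# Toy 2x2 RGB images: a black original and an all-white "edited" result
orig = np.zeros((2, 2, 3), dtype=np.uint8)
edit = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[1, 0], [0, 0]])  # only the top-left pixel may change

out = composite_region(orig, edit, mask)
# out[0, 0] is white; the other three pixels remain black
```

Whatever Wan 2.7 does internally, this is why localized edits avoid the "regenerate everything" loop: pixels outside the target region are carried over unchanged by construction.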
Instruction-Based Image Editing
You can edit images using natural language instructions such as:
“Add a logo in the top-right corner”
“Change text alignment to left”
“Add a hat to the character”
The model understands what to change and what to preserve, making iteration much faster and more predictable.
Multi-Image Consistency
Wan 2.7 supports generating up to 12 images in a single batch, with built-in consistency across:
characters
style
scene progression
This is particularly useful for:
storyboards and comics
multi-scene narratives
product variations
Instead of treating each image as independent, Wan 2.7 enables coherent visual sequences.