Design motion paths to animate still photos into videos.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
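Beyond the graphical UI, ComfyUI also exposes an HTTP API, so a workflow exported via "Save (API Format)" can be queued from a script (useful for the batch-scripting case mentioned above). A minimal sketch, assuming a default local server at `127.0.0.1:8188`; the file name `workflow_api.json` is a placeholder for your own export:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumption)

def build_payload(workflow):
    """Wrap an API-format workflow dict in the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_workflow(workflow):
    """POST the workflow to ComfyUI's /prompt endpoint; returns the queue response."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # includes a prompt_id you can look up in /history

# Usage (with a running local server):
#   workflow = json.load(open("workflow_api.json"))
#   queue_workflow(workflow)
```

This is a sketch, not part of the published workflow; for a short test run, edit the seed and resolution fields in the exported JSON before queueing.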
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
Overview
Wan 2.1 Trajectory Control is a specialized workflow within the Wan 2.1 Fun family that enables motion-controlled video generation from a single input image. By drawing or defining a trajectory path, users can guide the camera or subject movement across frames to produce cinematic effects, animations, or dynamic storytelling scenes. This method unlocks powerful control over motion planning while keeping the artistic direction intact. Built for the Wan 2.1 Fun model ecosystem, this workflow makes high-quality AI video synthesis accessible and intuitive.
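As a rough illustration only (the actual data format is defined by the Wan 2.1 Fun nodes, not shown here), a drawn trajectory can be thought of as a few hand-placed keypoints interpolated into one (x, y) position per output frame:

```python
def interpolate_trajectory(keypoints, num_frames):
    """Linearly interpolate ordered (x, y) keypoints into one point per frame.

    keypoints  -- control points of the drawn path, in order
    num_frames -- total frames in the output video
    """
    if len(keypoints) < 2:
        return [keypoints[0]] * num_frames
    segments = len(keypoints) - 1
    path = []
    for i in range(num_frames):
        t = i / (num_frames - 1) * segments   # position along the whole path
        seg = min(int(t), segments - 1)       # which segment we are in
        local = t - seg                       # 0..1 within that segment
        (x0, y0), (x1, y1) = keypoints[seg], keypoints[seg + 1]
        path.append((x0 + (x1 - x0) * local, y0 + (y1 - y0) * local))
    return path

# Example: a left-to-right pan that rises in the middle, over 16 frames
points = interpolate_trajectory([(0, 256), (256, 128), (512, 256)], 16)
```

In the workflow itself you draw this path interactively; the sketch only shows the underlying idea of sampling a motion path densely enough to guide every frame.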
Important nodes:
- Positive Prompt:
- Negative Prompt:
Notes
Wan 2.1 Fun Motion Control | AI Photo-to-Video Animation — see RunComfy page for the latest node requirements.
Description
Initial release — Wan2.1-Trajectory.
