One image in, blockbuster push-in shots out. Minimal setup required.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the downloadable zip contains the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph (or script the whole run, as sketched below).
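If you'd rather script runs than click through the UI (for the batch or offline use mentioned above), ComfyUI accepts jobs over its local HTTP API. A minimal sketch, assuming a default server on port 8188 and a graph exported with ComfyUI's "Save (API Format)" option; the workflow_api.json filename, the loader node id "3", and the input image name are placeholders to replace with values from your own export.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI endpoint

# Load the graph exported via "Save (API Format)" (enable dev mode in the
# ComfyUI settings to see that menu entry).
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Point the image loader at your input. Node id "3" is a placeholder; look up
# the real id of the marked loader node in your own export. The file must
# already sit in ComfyUI's input/ directory.
workflow["3"]["inputs"]["image"] = "establishing_shot.png"

# Queue the job; the server responds with a prompt_id you can poll later.
req = urllib.request.Request(
    COMFY_URL,
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```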
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
Overview
Transform your AI videos from static to cinematic with this Push-In Camera - A Motion LoRA for Wan 2.1 workflow. Trained on 100 real film clips across 40+ iterations, this LoRA adds professional push-in drone camera movement that breathes life into your generations. Load your image and hit run; the workflow handles the rest, delivering Hollywood-quality push-in shots that sweep from establishing views to dramatic close-ups. No complex prompting is needed, just pure cinematic motion.
Important nodes:
Push-in camera (the Wan 2.1 motion LoRA; a strength-tuning sketch follows below)
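Since the whole effect rides on one motion LoRA, its strength is the main dial worth tuning once the defaults work. A minimal sketch, assuming the API-format export applies the LoRA through a standard LoraLoaderModelOnly node; the Motion-Push-In-Cam.safetensors filename and the 0.8 strength are illustrative, so check both against your own graph.

```python
import json

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Find the LoRA loader node(s) and dial the motion strength. Lower values
# give a subtler push-in; higher values exaggerate the camera move.
for node in workflow.values():
    if node.get("class_type") == "LoraLoaderModelOnly":
        node["inputs"]["lora_name"] = "Motion-Push-In-Cam.safetensors"  # placeholder name
        node["inputs"]["strength_model"] = 0.8  # illustrative; tune per shot

with open("workflow_api.json", "w", encoding="utf-8") as f:
    json.dump(workflow, f, indent=2)
```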
Notes
Push-In Camera - A Motion LoRA for Wan 2.1 — see the RunComfy page for the latest node requirements.
Description
Initial release — Motion-Push-In-Cam.
