CivArchive

    Description

    MAGI-1: Autoregressive Video Generation at Scale

    We present MAGI-1, a world model that generates videos by autoregressively predicting a sequence of video chunks, defined as fixed-length segments of consecutive frames. Trained to denoise per-chunk noise that increases monotonically over time, MAGI-1 enables causal temporal modeling and naturally supports streaming generation. It achieves strong performance on image-to-video (I2V) tasks conditioned on text instructions, providing high temporal consistency and scalability, which are made possible by several algorithmic innovations and a dedicated infrastructure stack. MAGI-1 further supports controllable generation via chunk-wise prompting, enabling smooth scene transitions, long-horizon synthesis, and fine-grained text-driven control. We believe MAGI-1 offers a promising direction for unifying high-fidelity video generation with flexible instruction control and real-time deployment.
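    The pipelined, causal denoising described above can be sketched in a few lines. The function below is a hypothetical illustration, not MAGI-1's actual schedule: `steps_per_chunk` and `lag` are made-up parameters showing how, at any step, noise can be kept monotonically non-decreasing across chunk indices, so earlier chunks finish first and can be streamed out while later chunks are still denoising.

    ```python
    def chunk_noise_levels(num_chunks: int, step: int,
                           steps_per_chunk: int, lag: int) -> list[float]:
        """Hypothetical per-chunk noise levels for a pipelined causal denoiser.

        Chunk i starts denoising `lag` steps after chunk i-1, so at any
        global step the noise level is non-decreasing over chunk index:
        earlier chunks are cleaner, enabling streaming generation.
        """
        levels = []
        for i in range(num_chunks):
            progress = (step - i * lag) / steps_per_chunk
            progress = max(0.0, min(1.0, progress))  # clamp to [0, 1]
            levels.append(1.0 - progress)            # 1.0 = pure noise, 0.0 = clean
        return levels

    # After 10 steps, chunk 0 is clean while later chunks remain noisier:
    print(chunk_noise_levels(4, step=10, steps_per_chunk=10, lag=5))
    # → [0.0, 0.5, 1.0, 1.0]
    ```

    Because the noise ordering is preserved at every step, each chunk only ever conditions on cleaner (earlier) chunks, which is the causal structure that makes streaming output possible.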

    Details

    Downloads
    3
    Platform
    ShakkerAI
    Platform Status
    Available
    Created
    4/25/2025
    Updated
    5/7/2025
    Deleted
    -

    Files

    24B_distill.zip

    Mirrors

    ShakkerAI (1 mirror)