    Wan2.1-Fun-1.3B-InP i2v - fp16 and fp8e4m3fn

    This is a reupload of https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP, including an fp8 conversion for people who can't run the 1.3b model in 16-bit precision.

    Wan2.1-Fun-1.3B-InP is a 1.3-billion-parameter img2vid Wan model trained by Alibaba-PAI and initialized from the 1.3b t2v model. Its weights are similar to the 14b i2v models but at the size of the 1.3b model, making it an easy-to-run i2v model that still gives good quality. It was trained for start- and end-frame inpainting, so setting just a start frame lets it do i2v. It can be used with Wan 14b workflows.

    LoRA training

    If you want to use diffusion-pipe for LoRA training, you can use my fork; make sure you're on the patch-1 branch. There was also an open pull request to merge it into the main repository.

    git clone --recurse-submodules https://github.com/gitmylo/diffusion-pipe -b patch-1

    The PR has been merged, so regular diffusion-pipe can be used now:

    git clone --recurse-submodules https://github.com/tdrussell/diffusion-pipe

    Description

    The fp16 model is the original from Hugging Face (https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP/blob/main/diffusion_pytorch_model.safetensors), and the fp8e4m3fn model is a converted version of it. Make sure you download the one you want.

    Checkpoint
    Wan Video

    Details

    Downloads: 2,355
    Platform: CivitAI
    Platform Status: Available
    Created: 4/9/2025
    Updated: 10/6/2025
    Deleted: -