Read our Quickstart Guide to Mochi on the Civitai Education Hub!
If you don't want to run it locally, you can try it out now on the Civitai Generator! Read the Guide to Video Generation in the Civitai Generator!
Mochi 1 preview, by Genmo (https://www.genmo.ai), is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluations.
This model dramatically closes the gap between closed and open video generation systems.
The model is released under a permissive Apache 2.0 license.
To get started with ComfyUI:
- Update to the latest version of ComfyUI
- Download the Mochi model weights into the models/diffusion_models folder
- Make sure a text encoder [1][2] is in your models/clip folder
- Download the VAE to ComfyUI/models/vae
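The folder layout the steps above describe can be sketched with a few shell commands, assuming a standard ComfyUI install rooted at `ComfyUI/` (adjust the paths to match your setup; the downloaded files themselves come from the links above):

```shell
# Create the model folders Mochi expects inside a ComfyUI install
# (paths assumed relative to the ComfyUI root directory).
mkdir -p ComfyUI/models/diffusion_models   # Mochi model weights go here
mkdir -p ComfyUI/models/clip               # text encoder goes here
mkdir -p ComfyUI/models/vae                # Mochi VAE goes here
```

Once the weights, text encoder, and VAE are placed in those folders, the updated ComfyUI should pick them up automatically.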
Mochi has native ComfyUI support and will run on 12GB+ VRAM.
Github: https://github.com/genmoai/models
HuggingFace: https://huggingface.co/genmo/mochi-1-preview
Comments (11)
Are there any plans to include this in the onsite generator?
Edit: yes.
We'll see. Generation times are still pretty high, even on top-end cards. It's more likely we'll see other generation services first: Haiper, Kling, etc.
Yeah, I would like to see these sorts of things added to the site as well
Well, seems like they listened to your request, it has just been added to the generator!
moshi moshi doki doki
puni puni waku waku
Do they have plans for img2vid, or is it just txt2vid?
They do! And you already can if you're running it locally, but it's tricky to get working!
@theally any place where I can see the setup for img2video using this? This is amazing as is already btw!
@Nibot Keep an eye on the guide: https://education.civitai.com/civitais-quickstart-guide-to-mochi-1-text2video/ - I'll try to expand it over the next week to include these details!
@theally Thank you! I am having fun with this setup!
