Adjusted for 2D animation
"anime style"
sample: 432x768, 49 frames
Comments (43)
So it is a finetuned Wan checkpoint? Seems like it has a lot of potential!
Any chance of an I2V version?
Oh, is this related to AniSora, or did you finetune it yourself? Or perhaps it is a merge?
This is completely unrelated to AniSora.
That looks very promising!
I'm wondering if you could extract the finetune so we can use it as a LoRA?
GGUF please?
Is the 14B T2V 480p or 720p?
Does the model have any character data added to the training data?
Looking forward to quantized GGUF versions.
done
How did you do it? I thought finetuning wasn't possible on Wan; otherwise there should be way more finetunes by now, just like the XL or Illustrious models that come out every day.
Wan isn't impossible to finetune, since it isn't even a distilled model, and training code has existed from the very beginning: https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/wanvideo - the community has since added its own training tools too.
But it's much more demanding than SDXL and its anime variants, so of course SDXL has more community models.
It also says "checkpoint merge", though, which can also mean a merge with a LoRA.
Yes, there is a way to train LoRAs and then merge them into the checkpoint. It's a common way to finetune without the resources needed for full checkpoint tuning.
It's how most SDXL variations are made.
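For illustration, here is a minimal sketch of what "merge a LoRA into a checkpoint" means numerically: each adapted weight gets its low-rank delta folded in as W' = W + alpha * (B @ A). The file names and the `.lora_A`/`.lora_B` key layout below are assumptions for the sketch, not the actual layout of this checkpoint or of any particular trainer:

```python
import torch

def merge_lora_into_checkpoint(base_sd, lora_sd, alpha=1.0):
    """Fold low-rank LoRA deltas into base weights: W' = W + alpha * (B @ A)."""
    merged = dict(base_sd)
    for key, A in lora_sd.items():            # A: (rank, in_features)
        if not key.endswith(".lora_A"):
            continue
        name = key[: -len(".lora_A")]
        B = lora_sd[name + ".lora_B"]         # B: (out_features, rank)
        w_key = name + ".weight"
        if w_key in merged:
            delta = alpha * (B.float() @ A.float())
            merged[w_key] = (merged[w_key].float() + delta).to(merged[w_key].dtype)
    return merged

# Hypothetical file names, for illustration only.
base = torch.load("wan14b_base.pt", map_location="cpu")
lora = torch.load("anime_style_lora.pt", map_location="cpu")
torch.save(merge_lora_into_checkpoint(base, lora, alpha=0.8), "ani_wan_merged.pt")
```

The result is a single checkpoint that behaves like base-plus-LoRA at strength `alpha`, which is why a "checkpoint merge" and a full finetune can look identical from the outside.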
I2V 720p?
Excellent work! Can you fine-tune the I2V 720p model?
just today
GGUF please, I don't think my 3070 with 8GB VRAM can take this.
3060 12GB VRAM handles it
@guljaca You must be referring to the 1.3B version.
done
@xnapx Nope, 14B.
@hakoniwa Please do a GGUF of the 480p I2V version.
@hakoniwa Thank you, could you do an I2V GGUF too?
@guljaca Would you be able to provide a workflow that achieves that? I'm running out of VRAM even on minimal test setups.
@xnapx I can do a 5-second video in 4 minutes (24 fps and 28 steps, 480x832); I used the NSFW workflow here on this website.
@Yourmomd What is "the NSFW workflow here on this website"? You didn't put a link. Also, are you sure you're referring to a local setup with a 3060 12GB with those numbers? A generation like that usually takes significantly longer. Edit: Disregard my comment. I see that you have a 3070. Apologies for the confusion!
@xnapx Try running it with the '--lowvram' command line argument. It's likely that you're running out of video memory; after all, 12GB is not the same as 8GB. On an RTX 3060, generating a 2-second video at 20 steps takes 20 minutes. Surprisingly, the model even works on a CPU.
@guljaca I had completely forgotten about that option after switching to the 3060. Thanks for the reminder!
@xnapx It's even weirder because my 3070 only has 8GB VRAM but it's still fast. Here is the link to the workflow: WAN Video Workflow NSFW (TeaCache, SAGE, LoRas, Notes) - v1.1 | Wan Video Workflows | Civitai
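For reference, `--lowvram` is a ComfyUI launch flag (i.e. `python main.py --lowvram`) that pages model components out of VRAM at the cost of speed. The same trade-off exists outside ComfyUI via CPU offload in diffusers. A minimal sketch, assuming a diffusers-format release of a Wan checkpoint; the repo id below is a hypothetical placeholder:

```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical repo id, for illustration; substitute an actual
# diffusers-format Wan release.
pipe = DiffusionPipeline.from_pretrained(
    "someuser/ani-wan-t2v-14b", torch_dtype=torch.bfloat16
)

# Keep only the component currently in use on the GPU and page the
# rest to system RAM: slower, but fits cards in the 8-12GB range.
pipe.enable_model_cpu_offload()
```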
Works well with drawings, but the model is too animated. When the prompt includes 'dynamics', the character will act like they're on speed and twitch uncontrollably.
FLF version please, I want to make some looping videos as wallpapers.
How do you do looping videos?
@rodigo000333 Use the same start and end image. Plain I2V will have strange flickering; you have to use FLF2V.
@MTT0731 Oh, I see, that makes sense.
Do I do image-to-video with the same image for the start and end frame, get the video with the flickering, and then put it in FLF2V? Or do I use FLF2V right off the bat? What is FLF2V? Thank you! 🙏 😊
@rodigo000333 It's a Wan video checkpoint; the original name is Wan2.1-FLF2V-14B-720P.
@MTT0731 Oh, so the author would need to train this ani_wan dataset on a different base checkpoint, Wan2.1 FLF2V?
@rodigo000333 Yes! That's exactly what I want!
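For context: FLF2V (first/last-frame-to-video) conditions generation on both endpoint frames, so passing the same image as the first and last frame produces a clip whose ends match and can loop seamlessly. A minimal sketch of the idea, assuming diffusers' Wan image-to-video pipeline; the repo id and the `last_image` argument follow diffusers' published FLF2V example, but treat the exact API as an assumption and check the current docs:

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-FLF2V-14B-720P-diffusers", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for VRAM, as discussed above

frame = load_image("wallpaper_keyframe.png")  # hypothetical input image

# Same image as first AND last frame: the clip's endpoints match,
# so it can play on repeat without a visible seam.
video = pipe(
    image=frame,
    last_image=frame,
    prompt="anime style, gentle idle motion, hair swaying",
    num_frames=49,  # Wan-family models expect 4n+1 frame counts
).frames[0]
export_to_video(video, "loop.mp4", fps=16)
```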
Does it work well with non-"anime" animation?

