Description:
This workflow lets you generate a video from a base image and a text prompt.
You will find a step-by-step guide to using this workflow here: link
My other workflows for WAN: link
Resources you need:
📂Files :
For base version
I2V Model : 480p or 720p
In models/diffusion_models
For GGUF version
I2V Quant Model :
- 720p : Q8, Q5, Q3
- 480p : Q8, Q5, Q3
In models/unet
Common files :
CLIP: umt5_xxl_fp8_e4m3fn_scaled.safetensors
in models/clip
CLIP-VISION: clip_vision_h.safetensors
in models/clip_vision
VAE: wan_2.1_vae.safetensors
in models/vae
Speed LoRA: 480p, 720p
in models/loras
ANY upscale model:
Realistic : RealESRGAN_x4plus.pth
Anime : RealESRGAN_x4plus_anime_6B.pth
in models/upscale_models
📦Custom Nodes :

Hotfix :
- LoRA problem fixed
- CFG Zero Star fixed
FAQ
Comments (141)
CFGZeroStarAndInit not found? How do I fix this?
The CFGZeroStar node wasn't found for me until I updated my ComfyUI via the Manager and ran the ComfyUI dependencies .bat in the update folder.
CFGZeroStar is part of KJNodes, you have to update it.
@UmeAiRT I'm having this problem too, but only "CFGZeroStarAndInit" exists in the Kjnodes after updating
@UmeAiRT I did, I update Kjnodes but still missing CFGZeroStar.
I can confirm that in the latest 1.0.8 version the CFGZeroStarAndInit node is missing; you have to switch to the nightly version to get the node to work.
@tawaifactory8133780 I switched to nightly but, just like CyberAlmania, I'm now only missing CFGZeroStar.
@thisisarandomaccount2025 CFGZeroStar is a ComfyUI core node, so make sure your Comfy is up to date... as of today, v0.3.27.
@tawaifactory8133780 Using ComfyUI desktop and it's set to automatic updates. It should be updated. But I'll see if I can "force" it somehow.
No dice. The ComfyUI desktop version seems to be "behind" the portable install, and it seems there's no way to update it easily. Guess I'll just have to wait.
@thisisarandomaccount2025 Same issue, I'll have to wait too.
Might be a silly question, but is there any way to manually type in a custom resolution? The graph doesn't include the specific resolution I use.
Edit: Nvm figured it out, wow lol
You can double click and manually enter the numbers. Took me a minute too.
I'm still getting a flash in the video generation, can anyone help me figure out how to get rid of it?
I ran tests on three GPUs: L40S, 4090, and Mac M4 max.
I no longer have any issues with the first two, but the third, less powerful one still has this issue on certain renderings.
@UmeAiRT I'm running a 4090, would you be so kind as to tell me what I'm controlling with steps, CFG and shift, please?
I suggest adding the comfyui-multigpu custom node and using UnetLoaderGGUFDisTorchMultiGpu to load the model. It lets you set virtual_vram_gb; thanks to that, I am able to run i2v-14b-480p-Q8_0 on an RTX 4080 S with 16GB VRAM.
Works great! Are there issues pushing the video past the 5 second mark?
[SOLVED] Hi :D With the newer v1.8, I get the CFGZeroStar node missing even though KJNodes is fully updated to version 1.0.8. Any clue?
Change the version to nightly, then it will work.
@tawaifactory8133780 Already tried, but it still shows the missing node error :(
@Mario1964 Strange!! It worked for me. Did you restart Comfy and refresh the workflow screen?
@tawaifactory8133780 yup, restarted and refreshed too
@Mario1964 is the console showing any errors?
@tawaifactory8133780 Nope. I also just tried to install the nightly build manually, but I only saw "CFG Zero Star/Init" and "CFG Zero Star" is still missing.
@Mario1964 CFG Zero Star is a ComfyUI core node. Is your Comfy up to date?... as of today, v0.3.27.
@tawaifactory8133780 Yup, I think I might reinstall everything hahahah
@Mario1964 good luck
@tawaifactory8133780 Yup, a clean install always works :D
Yes, I also needed to reinstall everything to make it work.
+1 to the clean node reinstall to get CFGZeroStar working
I feel like my LoRAs aren't activating all the way; I'm only getting a little bit of movement in my videos and it's not reading the prompt correctly. I guess that means I should enable CFG Zero Star to get more movement and adherence, or turn the CFG up?
Yes, I was going to tell @UmeAiRT this yesterday. v1.8 has a different flow through the LoRA stack, maybe due to the CFG Zero Star change; my LoRAs were not really activating right either. Previously, the Apply Lora Stack went to the CFGGuider, but now the CFG Zero Star is connected there instead.
It seemed to me that the install scripts were not keeping up with your 1.7 version. What's the status on 1.8? Are all your tools aligned to this version? (Eternally grateful for your work! Just need to know when to install the new version. I apparently didn't have the skills myself to make 1.7 work; all the Florence models conflicted for me, etc.)
I have a conflict too, I just disable the entire group of nodes with Florence and write the prompt myself
I will try to update my scripts this evening but being on a business trip it is not easy
@UmeAiRT You are awesome! Simply awesome! <3
Since you have all these skills, could you maybe invent some kind of prompt scheduling for WAN? According to ChatGPT (who, as we all know, knows all there is to know in the universe), it is supposed to accept scheduling similar to AnimateDiff. If so, it would be amazing to get that kind of control over the narrative of the video...
@jay_rich To create a more complex story I use the "video extension" workflow and change the prompt for each sequence
@UmeAiRT Ok... yes... I wanted to look at that, and it is a cool feature... But still, it would be awesome to micro-manage the video. Like, "at this frame the wind blows, and at this frame it stops," etc. ChatGPT (who knows all) says that WAN should be able to understand prompt scheduling... but maybe it is just BS... :)
Thank you for all the workflows and guides. Your work is truly invaluable.
Is there a workflow specifically for interpolation and upscaling separately? This way, I could generate videos faster with interpolation and upscaling turned off and then apply these processes only to selected clips.
I don't know why I keep getting this warning:
'''
got prompt
WARNING: SystemNotification.IS_CHANGED() missing 1 required positional argument: 'self'
'''
Me too
When I was trying the CFGZero feature, I was getting flashes in my video generation, the output was trash, and LoRAs were not working. I think it's due to an issue in your latest workflow. I have changed the model flow in the workflow in the following way:
Apply Tea Cache -> skip layer -> CFGZeroStar -> CFG Zero init -> Apply Lora Stack
After these changes, everything worked fine. I even activated CFGZero and skip layer, and the result was good. I also noticed reduced generation time in my low-VRAM laptop.
Yes, I noticed this problem and I will publish a hotfix this evening.
Thanks for the workflow!
What does Sage Attention add to the generation? Speed? Consistency? Is it a trade off in some way?
It's a bit complicated to install but it improves the generation speed
@UmeAiRT can you help me with cfgzerostar install? idk what i'm doing wrong lolol :/, ilysm!
Triton is required for SageAttention to work properly. There are two options here:
1️⃣ Standard Installation:
Open a Command Prompt (Win + R → type cmd → hit Enter).
Navigate to your ComfyUI folder using the command:
cd path\to\ComfyUI_windows_portable
(Replace path\to\ComfyUI_windows_portable with your actual folder location.)
Run the following command to install Triton:
python_embeded\python.exe -m pip install -U triton-windows
Note: If the package “triton-windows” isn’t found on PyPI, download the appropriate wheel—e.g., triton-3.2.0-cp312-cp312-win_amd64.whl—from here, place it in your ComfyUI folder, and install it with:
python_embeded\python.exe -m pip install path\to\triton-3.2.0-cp312-cp312-win_amd64.whl
(Replace path\to\... with the actual file path.)
2️⃣ Pre-release Version (if needed):
If you need the pre-release version (Triton 3.3), use:
python_embeded\python.exe -m pip install -U --pre triton-windows
👨🏾💻Step 3: Install Sage Attention
Run this command in the same Command Prompt window:
python_embeded\python.exe -m pip install -U sageattention
Note: If you see a deprecation warning about loading an egg (e.g., sageattention-2.0.1-py3.12-win-amd64.egg), it’s recommended to first upgrade pip:
python_embeded\python.exe -m pip install --upgrade pip
Then force a reinstall if necessary:
python_embeded\python.exe -m pip install --force-reinstall sageattention
Hey dude, the workflow is not displaying previews for 'preview', 'output', 'interpolated output' and 'upscaled output'. Also, Comfy is now only showing the last frame saved in the queue; switching the queue to flat view shows GIF file names/times but no preview. I've never encountered this before with any of your other workflows, any help?
Hello, that's very strange. Which version of the workflow are you using?
@UmeAiRT The latest, and I have the same thing. Everything works fine except previews. Upd.: It works when upscale is enabled.
Right, it appears to have fixed itself. The issue was with v1.8.0. I updated to the newer v1.8.1, updated Comfy, and checked what was missing in the latest workflow; CFGZeroStar was once again 'missing'. I used your custom clean-install node script, then installed the missing parts not covered by the script, e.g. Florence2. The CFGZeroStar error is no longer present (I think the clean node script will need to be run every time I update Comfy), and previews load as normal without upscale and with upscale enabled. Previews in the Queue tab of Comfy are also back to normal.
Just worth noting: your workflow was the only one being used at the time of these errors, and the only custom nodes loaded were prerequisites of your workflow. I would bet that the issue is with nodes/updates out of your control; your workflows are great :D
PS: specs: 8GB 3050, i7 4790 @ 4GHz, 16GB DDR3; gen time: 1092s (24 frames). I'll make a 3-second one and submit it for your gallery. I really appreciate your work, thank you!
Just posted my first IMG 2 VID. I'm still struggling with the base image resolution and then what resolution to put within the workflow, but I have something at least, haha.
There's a beginning to everything and it's normal to have trouble at first. If you need help, don't hesitate to ask me or the rest of the community will try to help you.
@UmeAiRT I'm just unsure what I should have my base image resolution set at. My PC has an RTX 4080. I've also got 64GB RAM with an i9 10900K CPU, and I'm having to go really low in the workflow's size setting to get some kind of result lol
@djacid Don't hesitate to stay on the resolution that I set by default and use an upscaler afterwards to double it.
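The "default resolution, then double with the upscaler" advice works out to simple arithmetic. A quick sketch (480x832 is an assumed 480p-class base, not necessarily the workflow's exact default):

```shell
# Illustrative only: 480x832 is an assumption for the 480p default.
base_w=480; base_h=832
factor=2   # the suggested 2x upscale
echo "$((base_w * factor))x$((base_h * factor))"   # 960x1664
```

So you generate at the small size (fast, low VRAM) and only pay the upscaler's cost for the final resolution.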
Missing Node Types
When loading the graph, the following node types were not found
CFGZeroStar
You don't have the latest nightly version of ComfyUI.
@UmeAiRT where can I get it? I updated all from manager
@gastonarte I had to click the "update Comfy & Python dependencies" .bat in the update folder of my Comfy install to get it.
@rockwellsteven22988 ty!
In Manager, click to switch Comfy to nightly, update, then manually refresh your browser window when it's complete.
My first try at WAN. Workflow seems solid.
I have a 4090 FE with 32GB RAM.
If I try to go from 48 frames to like 96 frames, the upscale always fails. Even when I change it from 2x to 1.5x, I get some sort of out-of-memory warning.
Any ideas? Is it worth going to Sage/Triton?
I'll upload what I made so far. Think one or two are upscaled.
Have some buzz too mate! Cool workflow!
Mine uses the I2V Q5_K_M .gguf to lower VRAM usage. And don't load too many LoRAs; 480/720p at 96 frames is completely affordable on a 4090, even with CLIP, clip_vision_h, the VAE and Florence2 loaded on it.
For the best experience, skip upscaling inside Comfy and use Topaz Video AI for that instead.
I had the same issue; I was able to fix it by swapping the sequence so that upscaling happens first and interpolation after. It has nothing to do with VRAM; the upscale-images node is not RAM efficient.
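The saving from swapping the order is easy to see in frame counts: with 2x interpolation, interpolating first means the upscaler has to process twice as many frames. A rough sketch (48 frames and 2x interpolation are example values, not workflow defaults):

```shell
frames=48   # frames coming out of the sampler (example value)
interp=2    # 2x frame interpolation (example value)
# interpolate first: the upscaler must then process frames*interp images
echo "interpolate-then-upscale: $((frames * interp)) frames upscaled"
# upscale first: the upscaler only touches the original frames
echo "upscale-then-interpolate: $frames frames upscaled"
```

Since the upscale node is the memory-hungry step, halving the number of frames it sees roughly halves its peak memory use.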
@TopazStudio I created two mini workflows to do interpolation and upscaling in post-production. When I get back from my trip, I'll look for a more optimal upscaler.
@UmeAiRT awesome, I was thinking of doing that myself, so that I can prune out the bad/unwanted generations before sending them through an upscaler/interpolator
Much appreciated for the feedback guys!
Workflow is really nice.
@zxxz4531654 Ill give it a try. Noted.
I tried @rtveate011105194's suggestion and, not gonna lie, so far the results are really good and it takes seconds.
@UmeAiRT @TopazStudio Thanks for the feedback and look forward to your update mate. :)
@NyxxiNyx I changed the upscaler to avoid this problem in the new beta of this workflow: Development Workshop: WAN2.1 IMG to VIDEO | Civitai
@UmeAiRT Nice! Been away but back now. Will give it a go! Cheers!
How would I add that node to set the lora blocks?
like what would I connect it to?
LoRAs are not working. I think they need to be applied to CLIP?
Minor suggestion.
The Florence2Run seed is not tied to the workflow seed; this may cause confusion. Personally, I have been adding an rgthree seed node to the workflow, then connecting it to the noise seed and the Florence2Run seed. I recommend you try the rgthree seed node.
I fixed the Florence seed in the latest beta of the workflow: Development Workshop: WAN2.1 IMG to VIDEO | Civitai
I can't use v1.8 simple because it tells me a missing node that I can't find. here's the name; CFGZeroStar
Try updating your Python dependencies or changing KJNodes to its nightly version. That solved it for me.
For those who are getting the weird color shift/glitch at around 60 frames: swap the tiled VAE decoders out and use the "full decoder"... It literally took me hours to figure out the culprit.
Works great even on 12 GB VRAM. Thank you very much! Can you add last frame or loop option for this workflow?
How do you increase the length of the video generated?
Just set the "frames" slider to more frames, or use the simple version with a time selector.
FYI 80 frames = 5 Secs
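That matches WAN 2.1's stock 16 fps output (before any frame interpolation): duration is just frames divided by fps. A quick sanity check:

```shell
frames=80
fps=16   # WAN 2.1 renders at 16 fps before interpolation
echo "$((frames / fps)) seconds"   # 5 seconds
```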
Enabling CFGZeroStar and Skip Layer produces unwanted results. I tried the same image and prompt with them on and off; with them off it produces good outputs, while with them on the video is drastically different, with glitches and warping. Is there a different set of settings to use with CFGZeroStar selected? I know it's in a testing stage, just wondering what the current optimal settings are, or if it's intended to be used with the default settings of the workflow. I can see other commenters have found their own solutions to similar problems, but I can't seem to get any of those to work.
Also a side note, the Automatic Prompt node does not get updated with the Florence2 generation. I can see in the cmd window that Florence2 is working but it is not apparent in the workflow.
Thanks!
Am I the only one that is having this issue?
Failed to find the following ComfyRegistry list. The cache may be outdated, or the nodes may have been removed from ComfyRegistry.
comfy-core
comfyui-kjnodes
I decided to reinstall ComfyUI and I'm still having the same issue.
I’m having this issue on a lot of workflows recently for some reason
Same issue; it was fine a few days ago. Fresh install, same issue.
Actually, you cannot use the ComfyUI app from https://www.comfy.org/download . It is not updated (enough). You have to install ComfyUI via Git...
I have an RTX 4090 and 64GB of RAM. I don’t know why, but when using your Workflow 1.8.1, sometimes everything runs fine, and other times it's very slow — plus the whole computer starts lagging so much that even the sound stutters and there's a buzzing noise. It's as if the CPU and GPU are at 100%, and the whole 20-step generation process takes 10 times longer than it should.
I thought I was the only one, I still use 1.5 and 1.4 and the same thing happens. 😅 (4090 but only 32 gb ram)
@Eliz103 Does this also happen on 1.5?
This looks like a GPU memory overflow, which happened to me; I switched to the Q6 model.
When you use a model that is already at the limit of your VRAM, this happens after a few renders.
As an alternative to block swapping (which I haven't been able to get working so far), try using the UnetLoaderGGUFAdvancedDisTorchMultiGPU node from comfyui-multigpu to load the Q8_0 checkpoint, and set virtual VRAM to something like 4, 8, 10, 12, 14 or 16; mine is set to 16.5 on a 12GB 3080 Ti. It seems to help a lot and allows 16GB models on 12GB cards; it might help 4090 users too until everything is ironed out. This lets me run larger models on my small card with 64GB RAM, using RAM as a cache instead (typically around 40GB). It very rarely slows down, but it happens occasionally depending on how long the video is. I'm still keeping my eye out for a .safetensors loader with the same virtual VRAM option, which I would prefer over GGUF.
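The virtual-VRAM idea is basically overflow accounting: whatever part of the model doesn't fit in VRAM is cached in system RAM instead. A deliberately simplified sketch (the 16GB/12GB figures mirror the Q8_0 checkpoint and 3080 Ti from the comment above and are rough assumptions; the real node manages layer placement in a more fine-grained way):

```shell
model_gb=16   # rough size of the Q8_0 checkpoint (assumption)
vram_gb=12    # card VRAM, e.g. a 3080 Ti
# minimum overflow that ends up cached in system RAM
overflow=$(( model_gb > vram_gb ? model_gb - vram_gb : 0 ))
echo "${overflow}GB cached in system RAM"   # 4GB cached in system RAM
```

Setting virtual VRAM higher than this minimum (e.g. 16.5) just offloads more of the model, trading some speed for VRAM headroom.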
Something's not right with these workflows.
Freshly started computer and ComfyUI: the first video generation goes fine, the second probably too, but by the third or fourth everything starts falling apart.
It slows down drastically, the whole computer freezes, and there's a buzzing sound from the speakers.
It looks like the entire memory is getting clogged.
I'd really appreciate it if the author could look into this. Maybe memory needs to be cleared or released after each generation? Something like that?
1st generation: 21.75s/it
3rd generation 127.20s/it
I really don’t want to restart everything from scratch after each generation.
Is it just me experiencing this?
RTX 4090, 64GB RAM, AMD 7950X
I also get a freeze of several minutes during the last operation of the upscale step. Not only on version 1.8; on every version.
RTX 5090, 64 Gb, AMD 9950x3D
@Allerias I have made a new upscaler available on the beta version of the workflow: Development Workshop: WAN2.1 IMG to VIDEO | Civitai
@UmeAiRT Just testing, so far good.
Great workflows! Could you please add Start/End frame functionality? (https://github.com/raindrop313/ComfyUI-WanVideoStartEndFrames)
Insane workflow!! works great on a 4070 ti super (16GB)
Thanks for your feedback !
Can't wait to try out the latest version. The only suggestion I have would be to replace the guidance node with the new CFGZeroStar node.
The CFGZeroStar node is in the last version, just not enabled by default.
https://github.com/Flow-two/ComfyUI-WanStartEndFramesNative
These custom nodes allow setting the last frame as input and work with GGUF models.
I changed the workflow a bit with these custom nodes.
Here is example https://civitai.com/posts/15113939
Drag and drop video from this post into ComfyUI to import workflow.
are you using inpainting and controlnet to change the pose of the character while keeping the rest of the image somewhat the same?
I am using https://github.com/yankooliveira/sd-webui-photopea-embed + inpaint. Input images generated in Forge WebUI
@Discocat Great, thanks for the reply!
It works great!! But I can't manage to get a perfect loop using identical start/end pics. The motion itself does loop correctly, but for some reason, as the frames progress, the entire scene subtly darkens (or the image gets brighter). You hardly notice it until it loops back to frame 1 and lightens up again to how it started. I've tried it with and without TeaCache and Sage, and used GGUF 480 and 720, but nothing's fixing it.
Although well, I suppose it is not designed for loop, I will have to wait for more versions
@EechiZero Try workflow from this post https://civitai.com/posts/15192134
You will also need new model for it https://huggingface.co/city96/Wan2.1-Fun-14B-InP-gguf/tree/main
For friends with limited memory, I suggest changing the order of insertion and enlargement in the workflow: enlarge first, then insert. After testing, it can significantly reduce memory usage.
By the way, the speed is also significantly increased.
what are you referring to?
To be honest I don't understand what you mean.
@UmeAiRT I guess he is referring to frame interpolation (insertion) and upscaling (enlargement).
@NoFuckingWay Yes, swapping their execution order would be much better.
@bengbeng A new upscaler and lots of new features are in development; you can take a look here: Development Workshop: WAN2.1 IMG to VIDEO | Civitai
@UmeAiRT thanks
Is it just me, or are the LoRAs I put in not even effective? This was on 1.8.1.
EDIT: I forgot that CFG Zero makes it output really weird movements somehow.
I had the same issue of loras not working, using 1.8.0.
I switched to 1.8.1 and now loras are working again. Maybe it's the new version, or maybe I just needed to start the workflow from scratch again
@GushingOverPixels I think I did see the LoRAs work, but I'm still struggling with CFG Zero; or probably I was using really low steps (8-14 steps).
@monkpostor CFG Zero is very experimental; I get very inconsistent results, but I leave the option for those who want to use it.
This is the BEST workflow for Wan I2V. Really fast, easy to understand and has absolutely everything you need. Thank you for your work.
Sometimes body parts in my videos turn orange or red when using certain LoRAs. Is that a weight problem, or something like CFG, shift, or TeaCache/Sage?
I notice the same. Don't know what the problem is.
Any plans on making a workflow that combines I2V with video extension? Seems like the next logical step to make videos of unlimited length in a single workflow
can someone explain to me what the automatic prompt feature is, and how it should be used?
If you activate it in addition to what you put in the "Positive" node, a continuation will be automatically created from your image
A ComfyUI update broke mxToolkit; it was a very good workflow while it worked, hah.
Change the mxToolkit version from nightly to the latest numbered release.
After changing mxToolkit I'm able to move the sliders again, but I'm getting an error from a custom script about an image reference.
I keep getting 'UnetloaderGGUF' 'unknown model architecture' even though I installed the recommended model. I tried updating it and that didn't work either.
I am currently finishing the new version of the workflow with a new model loader.
@UmeAiRT thanks it works now!
I am facing the same issue; it comes up in the "tools/convert" Python file at line number 98. Did you find the solution, and did the GGUF workflow work for you?
@AISlooty Did the GGUF workflow work for you, or the normal one? As I have only 6GB VRAM, I must use the GGUF version.

