v2.4 - The subgraph specification in the model input area has been discontinued, and the area has reverted to the v2 specification. The layout has also been changed.
We received several reports from the community that models such as CLIP and VAE were not functioning correctly due to the subgraph, and also received feedback that the model placement was unclear. Therefore, we decided to revert to the v2 specification.
v2.3 - Layout has been adjusted.
v2.2 - The Seed node has been changed from "CR Seed" to "Seed (rgthree)". We've received reports of issues with the CR Seed implementation, so we've made it consistent with the custom node commonly used for this workflow.
v2.1 - The GGUF model loader is now available as standard. The input method for the model area has also been changed to allow bulk input using subgraph nodes.
The number of frames per second was previously fixed at 16 fps but can now be changed arbitrarily. Accordingly, the RIFE-VFI node's multiplier can now be changed in the input area (see the sketch just below this changelog).
v2.0 - The issue where the second section would not generate has been fixed.
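A note on the v2.1 fps change above: the RIFE-VFI multiplier roughly multiplies the frame count, so the output fps has to scale by the same factor to keep the clip length unchanged. A minimal sketch of that arithmetic in Python (the helper function and the numbers are illustrative assumptions, not values taken from the workflow):

    def output_settings(num_frames, base_fps, rife_multiplier):
        # RIFE-VFI roughly multiplies the frame count by the multiplier
        out_frames = num_frames * rife_multiplier
        # raise the output fps by the same factor so the duration stays the same
        out_fps = base_fps * rife_multiplier
        return out_frames, out_fps, out_frames / out_fps

    # e.g. 81 generated frames at 16 fps with a x2 multiplier:
    print(output_settings(81, 16, 2))  # -> (162, 32, 5.0625): the same ~5 s clip, but smoother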

The explanations in this workflow are solely my personal opinions. Please be aware that I do not possess expertise in AI generation, and therefore some information may be inaccurate.
The main purpose of this workflow is to keep operation compact when generating repeatedly. It minimizes screen scrolling during operations such as prompt input, input-image selection, and specifying the duration, step count, resolution, and, most importantly, LoRA selection. Furthermore, all nodes are pinned to prevent unintended movement, improving usability.
Links to the basic models for Wan2.2
CLIP:
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders
VAE:
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/vae
Links to the LoRAs and nodes used in this workflow
(PainterI2V)
https://github.com/princepainter/ComfyUI-PainterI2V
(PainterI2V Advanced)
https://github.com/princepainter/ComfyUI-PainterI2Vadvanced
(FFGO)
https://github.com/zli12321/FFGO-Video-Customization
https://huggingface.co/Video-Customization/FFGO-Lora-Adapter/tree/main/merged_lora
PainterI2V is advertised as a fix for slow motion, but it also enhances camera work; using it should improve camera movement. Since I intentionally incorporate camera movement into my videos, this node is essential to my workflow.
FFGO is a LoRA designed to maintain consistency with the input image. Maintaining facial consistency is crucial, especially when dealing with female characters, and I believe FFGO helps in this regard. I have set the weight to 0.3, but feel free to adjust it; be aware, however, that too high a weight may affect the video's movement.
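For intuition on what that 0.3 controls: a LoRA weight scales the low-rank update added to each patched weight matrix, so 0.3 blends in 30% of the FFGO delta. A toy sketch of that blend (the matrix sizes and names are illustrative, not FFGO's actual dimensions):

    import numpy as np

    W = np.random.randn(8, 8)         # toy base weight matrix
    A = np.random.randn(4, 8)         # LoRA down-projection
    B = np.random.randn(8, 4)         # LoRA up-projection
    weight = 0.3                      # the strength set in this workflow
    W_patched = W + weight * (B @ A)  # a higher weight pulls the model harder toward the LoRA

This is also why an overly high weight can hurt motion: the stronger the delta, the more it can override the base model's learned dynamics.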
You can also generate and combine two videos; in that case, the final frame of the first generation becomes the starting image of the second (a sketch of that hand-off follows below). Unlike SVI, there is no 5-frame overlap, and the content of the first input image is not carried over. Therefore, depending on the final frame, the face may change significantly or the movement may appear unnatural.
For example, if you look closely at the beach video of a woman I uploaded, you'll notice that a person who wasn't in the first 5 seconds appears in the background, and the movement at the transition is clearly unnatural. These unnatural artifacts are a major drawback. However, if the woman's face is clearly visible in the first generation, it won't become a different person in the second generation.
So why combine the two videos using this workflow instead of SVI? Because trying to reproduce the same movement with SVI results in extremely unnatural movement and doesn't work well. This is one of the challenges of video generation with SVI. I believe this workflow is suitable for achieving dynamic yet natural movement.
Technically, it's possible to loop the second video or combine a third or subsequent video, but in my experience, the resulting video becomes unusable. I consider two videos to be the practical limit.
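For anyone who wants to reproduce that frame hand-off outside ComfyUI, here is a minimal sketch of grabbing the first clip's final frame for use as the second generation's input image (assumes OpenCV is installed; the file names are placeholders):

    import cv2

    cap = cv2.VideoCapture("first_generation.mp4")
    last_index = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - 1
    cap.set(cv2.CAP_PROP_POS_FRAMES, last_index)  # seek to the final frame
    ok, frame = cap.read()                        # seeking can be unreliable with some codecs;
    cap.release()                                 # reading frames sequentially is a safer fallback
    if ok:
        cv2.imwrite("second_start_image.png", frame)  # feed this in as the second I2V start image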
I hope this workflow helps make video creation using PainterI2V and FFGO more enjoyable.
Description
v2.3 Alt - This is an alternative version of v2.3 that does not use a subgraph in the model input area. We received reports from the community that the subgraph in the model area was causing problems, so we decided to release a version with the subgraph removed.
Please try this version if you are experiencing errors with the regular v2.3. Also, please be sure to update the ComfyUI core and your custom nodes to the latest versions; numerous bugs have been reported in older versions.
Comments
Subgraphs and the problems with them are still there =(
Hi, I'm very new to using AI. I want to try out your workflow, but I can't find the models you used.
Can you provide a link to those models?
- lightx2v
- FFGO LoRA
- Safetensors models
Sorry for my lack of knowledge. Your work is awesome.
Safetensors model
https://civitai.com/models/1817671?modelVersionId=2057465
lightx2v
https://civitai.com/models/1585622?modelVersionId=2361379
FFGO
https://huggingface.co/Video-Customization/FFGO-Lora-Adapter/tree/main/merged_lora
However, since all of this information is available on this page, if you're asking questions like this, you won't be able to master this workflow.
I understand how you feel, but you should first gain experience by generating repeatedly with the official workflow before coming back here.
Also, I don't provide lectures for beginners. I'm dedicating this time specifically to intermediate users who expect better results with this workflow.
Please try researching and experimenting on your own first. I learned by absorbing information from many people myself.
@kenpechi Thank you for the link and advice. I certainly will start from basic workflows.
Really cool workflow, nice work
I wasn't aware of the FFGO LoRAs, and they work pretty well in the few tries I've made. I've added a watermark node just for my own use, and MMAudio at the end. Sorry I didn't clean everything up, so it's a bit messy if you look at the video I've posted.
The only thing I'd change is the Width and Height nodes; they're pretty annoying to deal with when you want to change the resolution. If you want, I can provide you with a template for a subgraph with a resolution toggle and automatic image scaling.
Can you publish related workflows on the Runninghub platform? I'm looking forward to seeing your release link. Also, I've used a lot of workflows on that platform, but none of them seem to have the same effect as yours.
Understood, I'll consider it. However, I wasn't familiar with Runninghub until now, so please give me some time to check it out.
I tried it, but I don't think subgraphs are supported on Runninghub; it seems different from local ComfyUI. Without subgraphs, I can't maintain this quality, and besides, I avoid anything that involves revenue, like that site.
Therefore, I'm sorry, but I'll have to postpone implementation.
@kenpechi It's okay, my friend. I'm just getting started with workflows, so I want to learn from your experience. However, due to device limitations, I can only edit and deploy on the Runninghub cloud at present.
I keep getting the error
ModelSamplingSD3
AttributeError: 'NoneType' object has no attribute 'clone'
Anyone know what this is caused by?
It appears that data is not being received correctly by "ModelSamplingSD3". Is "Set Model High or Low" in the "Other Models" group on the left bypassed? Or is "Get Model High or Low" in the "ModelSamplingSD3" group bypassed?
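To make the failure mode concrete (an illustrative plain-Python sketch, not actual ComfyUI code): when the upstream Set/Get pair is bypassed, the downstream node receives None instead of a model, and model patching of this kind begins by cloning its input:

    model = None             # what a bypassed upstream Set/Get pair effectively delivers
    patched = model.clone()  # AttributeError: 'NoneType' object has no attribute 'clone'

Un-bypassing the Set and Get nodes restores a real model object, and the clone succeeds.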
Just learning how I2V works (so far I've only done image generation), and your workflow (v2.4) helps a lot. It worked right out of the gate (I only had to deactivate Sage Attention since I don't have that installed yet). On 32 GB RAM and an RTX 4080 with 16 GB VRAM, btw.
All my generated videos are coming out blurry. How do I fix this? Can you help me?
It's likely that one of the models is not configured correctly. Please check that the Lightx2v LoRA, SVI LoRA, Text Encoder, VAE, etc. are configured correctly, including their weights and High/Low placement.
Could you please tell me which models you used and where each one goes? I'm using the ones from the links you posted, but I wanted to know which model goes in LOAD DIFFUSION, LIGHT LORA HIGH AND LOW, FFGO LORA, LOAD CLIP, and LOAD VAE.
@junioralex3030640 It would be faster if you downloaded my video and dragged and dropped it into ComfyUI. Try comparing them then.
Hello, I can now run this workflow smoothly. However, as the video progresses, it causes facial distortions. How can this be resolved? Also, I am unsure about the choice of LoRA.
It's somewhat normal for faces to become distorted.
Video generation AI will inevitably distort faces over time.
Especially when generating two sections in this workflow, if not done correctly, faces will not only become distorted but will transform into completely different people.
So, how can you reduce face distortion? You can increase the video size, choose reference images with larger faces, and so on.
In short, you'll just have to experiment.
Good luck!