CivArchive
    Wan2.2 I2V PainterI2V Workflow Kenpechi - v2.4
    NSFW

    v2.4 - The subgraph specification in the model input area has been discontinued and reverted to the v2 specification. The layout has also been changed.

    We received several reports from the community that models such as CLIP and VAE were not functioning correctly due to the subgraph, and also received feedback that the model placement was unclear. Therefore, we decided to revert to the v2 specification.

    v2.3 - Layout has been adjusted.

    v2.2 - The Seed node has been changed from "CR Seed" to "Seed (rgthree)". We've received reports of issues with the CR Seed implementation, so we've made it consistent with the custom node commonly used for this workflow.

    v2.1 - The GGUF model loader is now available as standard. The input method for the model area has also been changed, allowing bulk input using subgraph nodes.

    The number of frames per second was previously fixed at 16 fps, but can now be changed arbitrarily. Accordingly, the RIFE-VFI node's multiplier can now be changed in the input area.
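As a rough sketch of how the generation fps and a frame-interpolation multiplier relate to the final clip (illustrative only; the function name and the "(frames − 1) × multiplier + 1" convention are assumptions, not the RIFE-VFI node's actual parameters):

```python
# Illustrative only: how a frame-interpolation multiplier scales a clip.
# A common convention is that interpolation inserts frames between each
# consecutive pair, so n source frames become (n - 1) * multiplier + 1.
def interpolated_output(base_fps: int, multiplier: int, seconds: int):
    src_frames = seconds * base_fps + 1        # Wan2.2: seconds x fps + 1
    out_frames = (src_frames - 1) * multiplier + 1
    return out_frames, base_fps * multiplier   # frames, playback fps

frames, fps = interpolated_output(16, 2, 5)    # 81 source frames
# -> 161 frames played back at 32 fps
```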

    v2.0 - The issue where the second section would not generate has been fixed.

    The explanations in this workflow are solely my personal opinions. Please be aware that I do not possess expertise in AI generation, and therefore some information may be inaccurate.

    The main purpose of this workflow is to keep operation compact during repeated generation. It minimizes screen scrolling during operations such as prompt input, input-image selection, and setting the duration, step count, resolution, and, most importantly, LoRA selection. Furthermore, all nodes are pinned to prevent unintended movement, improving usability.

    Links to the basic Wan2.2 models

    CLIP:

    https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders

    VAE:

    https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/vae

    Links to the LoRAs and nodes used in this workflow

    (PainterI2V)

    https://github.com/princepainter/ComfyUI-PainterI2V

    (PainterI2V Advanced)

    https://github.com/princepainter/ComfyUI-PainterI2Vadvanced

    (FFGO)

    https://github.com/zli12321/FFGO-Video-Customization

    https://huggingface.co/Video-Customization/FFGO-Lora-Adapter/tree/main/merged_lora

    PainterI2V is advertised as improving slow motion, but it also enhances camera work, so using it should improve camera movement. Since I intentionally incorporate camera movement into my videos, this node is essential to my workflow.

    FFGO is a LoRA designed to maintain consistency with the input image. Maintaining facial consistency is crucial, especially when dealing with female characters, and I believe FFGO helps in this regard. I have set the weight to 0.3, but feel free to adjust it. However, be aware that too high a weight may affect the video's movement.

    You can also generate and combine two videos, in which case the final frame of the first generation becomes the starting image of the second generation. Unlike SVI, there is no 5-frame overlap, and the content of the first input image is not preserved. Therefore, depending on the final frame, the face may change significantly, or the movement may appear unnatural.
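The two-section chaining described above can be sketched in plain Python (a conceptual sketch only; `generate_video` is a hypothetical stand-in for one I2V sampling pass, not the actual ComfyUI graph, and dropping the seam frame is an assumption):

```python
# Conceptual sketch: the last frame of section 1 seeds section 2.
def generate_video(start_image, seconds, fps=16, tag="a"):
    n = seconds * fps + 1                      # seconds x fps + 1 frames
    # Frame 0 is the input image itself; the rest are generated frames.
    return [start_image] + [f"{tag}_{i}" for i in range(1, n)]

first = generate_video("input.png", seconds=7, tag="sec1")
second = generate_video(first[-1], seconds=5, tag="sec2")  # seeded by last frame
combined = first + second[1:]   # drop the shared seam frame (an assumption)
```

Note that only that single frame carries over: nothing from the original input image constrains the second section, which is why faces can drift at the seam.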

    For example, if you look closely at the beach video of a woman I uploaded, you'll notice that a person who wasn't in the first 5 seconds appears in the background, and the movement at the transition is clearly unnatural. These unnatural artifacts are a major drawback. However, if the woman's face is clearly visible in the first generation, it won't become a different person in the second generation.

    So why combine the two videos using this workflow instead of SVI? Because trying to reproduce the same movement with SVI results in extremely unnatural movement and doesn't work well. This is one of the challenges of video generation with SVI. I believe this workflow is suitable for achieving dynamic yet natural movement.

    Technically, it's possible to loop the second video or combine a third or subsequent video, but in my experience, the resulting video becomes unusable. I consider two videos to be the practical limit.

    I hope this workflow helps make video creation using PainterI2V and FFGO more enjoyable.



    Comments (24)

    dindonndon · Feb 28, 2026

    I don't quite understand how to install FFGO lora

    kenpechi (Author) · Feb 28, 2026 · 2 reactions

    There's a link on the GitHub page, but I'll just put the link here.

    https://huggingface.co/Video-Customization/FFGO-Lora-Adapter/tree/main/merged_lora

    dindonndon · Mar 2, 2026

    @kenpechi Thank you so much!

    jasonzwp120645 · Mar 1, 2026

    I'm encountering a problem with facial consistency; even on the first generation, the face changes.

    kenpechi (Author) · Mar 1, 2026

    There are various factors that can cause facial consistency issues.

    It depends on the model and LoRA you're using, the video size, and the resolution of the original image. Of course, it's also possible that the FFGO LoRA hasn't been installed correctly.

    By the way, I used the official model, generated from a 1024x1536 source image at a video size of 720x1072, with 3 steps on HIGH and 4 steps on LOW, and the euler/simple sampler.

    For reference, please tell me about your situation.

    jasonzwp120645 · Mar 1, 2026

    @kenpechi Thanks for your reply. I used your workflow and the same LoRA model, and the situation improved somewhat at high resolution. The current problem is insufficient penis detail. Should I include the penis in the reference image?

    kenpechi (Author) · Mar 1, 2026

    @jasonzwp120645 By the way, what model is it? Smooth Mix?

    jasonzwp120645 · Mar 1, 2026

    @kenpechi wan2.2_i2v_high_noise_14B_fp8_scaled

    kenpechi (Author) · Mar 1, 2026

    @jasonzwp120645 OK, so you want a penis to appear when there is no penis in the reference image?

    jasonzwp120645 · Mar 1, 2026

    @kenpechi haha yes

    kenpechi (Author) · Mar 1, 2026

    @jasonzwp120645 I understand. This model does not contain any NSFW content, so you'll have to rely on a LoRA for penis rendering.

    Penises are still okay, but pussies are really terrible...

    For example, the "POV insertion" LoRA is specifically designed for penises, so it renders them beautifully, but other features may be difficult.

    Why not try the following LoRA?

    https://civitai.com/models/1387077/wan-21-erect-penis-cock-dick-lora-i2v?modelVersionId=1567538

    This is for Wan2.1, but it works. However, it may not always work, so you may need to try a few times. I think breasts will also render beautifully if you add a breast-related LoRA. Pussy is the most difficult.

    Give it a try! Have fun!

    jasonzwp120645 · Mar 1, 2026

    @kenpechi Thank you so much, this was a great help. Have a wonderful day!

    BretChampagne · Mar 5, 2026

    Hi @kenpechi, thanks a lot for sharing your very well commented workflow :-)
    Just took the PainterI2VAdvanced node (was already using PainterI2V, but I'm curious about the "color protect" feature) + FFGO (very curious about it, still have to do some testing...)
    Congrats on your lovely clips ;-)
    Cheers!

    kenpechi (Author) · Mar 5, 2026 · 1 reaction

    Thank you. I hope I can be of some help to you.

    BretChampagne · Mar 6, 2026 · 1 reaction

    @kenpechi no help needed, tried an SVI workflow for long vids today with no luck... The more I try new workflows, the more I like my old one...
    But I adopted this "PainterI2VAdvanced" node... Even if it may be "overkill" combined with this Wan version I use very often: https://civitai.com/models/2053259?modelVersionId=2372875
    (makes the girls look a little bit too excited, but the prompt adherence is really good)
    Anyway, I love your clips and I recently "stole" a few pieces of your prompt & LoRA mixings:
    https://civitai.com/images/123262687
    Or this one
    https://civitai.com/images/123262776
    (I cheated by inverting the timeline while editing and it made a strange "breasts-coming-back-into-sweater" effect... that I liked after all XD)
    Didn't know we could be so precise about camera moves with Wan...
    I'm not a "copycat" person, so don't worry, I won't copy/paste your prompts anymore (but they are very instructive!)
    Thanks again for sharing :-)

    babesgarden968 · Mar 16, 2026

    Any chance to save your videos with metadata?

    kenpechi (Author) · Mar 16, 2026

    Almost all of my videos contain metadata.

    johhnnymann1 · Mar 24, 2026

    Do you use the base Wan2.2 high/low models on all your videos? I had trouble with them when I first started two months ago, so I quickly ditched them for others, but your outputs are very good, so I figure I need to go back and give it another shot. Alternatively, do you always use the high lightx2v LoRA? I heard running only the low can be quite good for motion sometimes.

    kenpechi (Author) · Mar 24, 2026

    I use both High and Low for all my LoRA samples.

    The Lightx2v LoRA is very difficult to work with.

    As you mentioned, you could avoid using it on High, or use three samplers to run None, High, and Low.

    Honestly, I don't know what the best approach is, but this is the workflow I've arrived at.

    GameAlan83 · Mar 25, 2026

    Hi, may I know what's the difference between 1st_second and 2nd_second in the workflow?

    I have read the note on the left and saw it:

    "....They are finally added together."

    But no matter how I change the time length in 2nd_second, it doesn't seem to change the total time at all, which confuses me.

    My settings are 1st_second = 7s, 2nd_second = 5s.

    In the end it generates a 7s video instead of a 12s video.

    Can anyone tell me how to make a long video (20s plus, without loopback)? (4090 user) THX!!

    kenpechi (Author) · Mar 25, 2026

    Are the 2nd section and 2nd video disabled?

    Please enable them using the Fast Bypass node.

    GameAlan83 · Mar 27, 2026

    @kenpechi Thank you!!!

    cozmonatu · Mar 25, 2026

    New to all this, but how do you adjust the number of video frames (or the length of the video) generated? I can't find it anywhere. Thanks.

    kenpechi (Author) · Mar 25, 2026 · 1 reaction

    Enter the time in seconds into the 1st & 2nd second nodes. For fps, enter 16 as the default. For example, 5 seconds will generate a video with 81 frames (5 × 16 + 1).
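The frame-count arithmetic can be written out as follows (`frame_count` is just an illustrative helper, not a node in the workflow):

```python
# seconds x fps + 1: the extra frame is the initial input image.
def frame_count(seconds: int, fps: int = 16) -> int:
    return seconds * fps + 1

print(frame_count(5))   # 81
print(frame_count(7))   # 113
```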

    Note: Beginners are advised to use the official workflow before using this one. Gaining experience is essential. Order is important in everything.

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    5,491
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/28/2026
    Updated
    5/1/2026
    Deleted
    -

    Files

    wan22I2VPainteri2v_v23.zip

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)

    wan22I2VPainteri2v_v20.zip

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)