    WAN2.2 I2V GGUF NSFW (8GB VRAM / 32GB RAM) WORKFLOW - v1.3 (SageAttention)

    GOONING WORKFLOW FOR THE VRAM POOR!

    If you are VRAM poor just like me, this workflow is for you! You can generate NSFW videos with just 8GB VRAM and 32GB RAM, maybe even with lower specs if you use lower GGUF models. Everything is written as notes in the ComfyUI workflow, but I will write them again here.

    IMPORTANT:

    The TastySin Q8 GGUF version requires SageAttention and a nightly PyTorch build to be installed. Unfortunately, I cannot provide tech support for those. Please spend some of your time and install them. They are worth it!

    Always check "About this version" on the right side to see the differences between workflows.


    STEP 1 - MODELS

    WAN2.2 I2V A14B GGUF:

    Put WAN GGUF, SMOOTH MIX GGUF, or TASTYSIN GGUF models under unet folder.

    I recommend Q6 for 8GB VRAM; you can download smaller versions if you have less VRAM or bigger versions if you have more.

    Text encoder GGUF:

    I recommend Q5_K_M for 8GB VRAM; you can download smaller versions if you have less VRAM or bigger versions if you have more.

    VAE:


    STEP 2 - LORAs

    LoRAs:

    You need this LORA if you want to produce videos with 4 steps only.

    If you want to add more LORAs, just add them~!


    STEP 3 - IMAGE AND PROMPT

    START IMAGE:

    The image proportions should be the same as the video generation proportions. For example, if you are putting in a 16:9 image, your generation proportion should be 16:9. Otherwise, weird things might happen. I always recommend using a higher-resolution image; it will be automatically downscaled to the generation resolution.

    PROMPTS:

    Check CIVITAI generations for more prompts and keywords for the LORAs you are using. As for the negative prompt, I have no idea what's best. Chinese? English? Less keywords? More keywords? No idea.


    STEP 4 - WAN PROCESS

    STEPS ( ! IMPORTANT ! ):

    I have not seen much improvement when increasing the steps from 4 to 6. Leave them at 4 or experiment, up to you.

    VIDEO SIZE & LENGTH:

    The output is much cleaner and crisper if the INPUT image has the same aspect ratio as the output. Use an image with a higher resolution and it will be scaled down automatically. Just make sure the ratio (e.g. 16:9) is the same or similar.

    Dimensions must be divisible by 16! For quick generations (testing), use these dimensions:

    - 512 x 512 (SQUARE)

    - 432 x 768 (9:16)

    - 768 x 432 (16:9)

    - 640 x 480 (4:3)

    - 480 x 640 (3:4)

    For final render, this is what my machine was capable of (8GB VRAM / 32GB RAM):

    - 864 x 864 (SQUARE)

    - 544 x 960 (9:16)

    - 960 x 544 (16:9)

    - 896 x 672 (4:3)

    - 672 x 896 (3:4)

    LENGTH:

    81 frames for 5 seconds. I do not recommend trying a longer or shorter duration with this workflow. It increases the generation time by a lot and the quality degrades. But if you must, the frame count must be a multiple of 16, plus 1 (e.g. 81 = 16 x 5 + 1).
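
    A minimal Python sketch of both sizing rules above (the helper names are mine, purely illustrative):

        # Both constraints from above: width/height divisible by 16,
        # and frame count = 16 * seconds + 1 (81 frames for 5 seconds).
        def snap_to_16(width: int, height: int) -> tuple[int, int]:
            # Round each side down to the nearest multiple of 16.
            return (width // 16) * 16, (height // 16) * 16

        def frame_count(seconds: int) -> int:
            return 16 * seconds + 1

        print(snap_to_16(550, 960))  # -> (544, 960), the 9:16 final-render preset
        print(frame_count(5))        # -> 81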

    KSampler:

    Change noise_seed generation from "randomize" to "fixed" if you are happy with the testing result but want a higher resolution. Otherwise, leave everything else as it is if you do not know what you are doing.

    Motion Amplitude:

    Fixes WAN's no-motion problem (e.g. camera rotation); this PainterI2V node is magic!

    • 1.0 (original) > No difference from the original WAN node

    • 1.15 (default) > General use

    • 1.3 > Sports action

    • 1.5 > Extreme motion


    STEP 5.1 & 6 - Upscale and Frame Interpolation

    DISABLE THESE WHILE TESTING OUTPUT (CTRL+B)

    UPSCALE:

    - 2x AnimeSharpV4

    This model works great with anime images/videos. Feel free to try other models.

    FRAME INTERPOLATION:

    Free FPS increase! If you do not like the results, you can disable the frame interpolation and just save the upscaled video.

    Description

    SageAttention GGUF Q8 Workflow for the GPU poor

    This workflow is specifically designed for SageAttention + GGUF, for PCs with at least 8GB VRAM and 32GB RAM with sage-attention and PyTorch 2.7.0 nightly installed. Check my earlier workflow if you want to use other models or do not meet the requirements: https://civitai.com/models/2272369/wan22-i2v-gguf-nsfw-8gb-vram-32gb-ram-workflow

    Unfortunately, I cannot provide tech support on how to install sage-attention and this pytorch version. Please spend some of your time and do it. It is worth it!

    Changes:

    • Replaced the first frame / last frame node with PainterI2V. This node is magical! It makes actions like camera rotation, running, etc. much better.

    • Added Sage Attention + Model Patch Torch nodes. This speeds up generation by 60%!

    • Removed the clip vision node (realized this was not needed, sorry)


    Comments (37)

    Jeannette_Taguel · Jan 25, 2026 · 4 reactions

    Some recommended upscalers:

    RealESRGAN_x2plus.pth (3x faster, better details / less 'grainy' compared to 2x-AnimeSharpV4_RCAN)

    2x-AnimeSharpV2_MoSR_Sharp.pth (2.5x faster)

    2x-AnimeSharpV2_MoSR_Soft.pth (2.5x faster)

    BSRGANx2.pth (a bit slower, but gives better details than RealESRGAN_x2plus, in case you need a very detailed video and are not in a hurry)

    There are some other interesting upscalers around (like 2x_Text2HD_v.1-RealPLKSR.pth and 2xVHS2HD-RealPLKSR.pth), but these are very specialized upscalers (text and VHS restoration only).

    Thank you for the nice workflow :)

    durachell · Feb 1, 2026

    Cannot execute because a node is missing the class_type property.: Node ID '#190'

    ModFrenzy
    Author
    Feb 1, 2026

    I need more information to help you with that. Can you open the Nodes Map (Shift + M), open every folder, and see if any node is highlighted in red text? This error can mean many things; I need to know the specific node that's giving this error.

    Also, which version are you using?

    durachell · Feb 2, 2026

    @ModFrenzy Thanks for replying. I'm on the latest comfyui, what's yours? I'll just follow your version.

    ModFrenzy
    Author
    Feb 2, 2026

    @durachell If you are using the latest version of my workflow, node #190 seems to be the PainterI2V node; please make sure you have installed that CUSTOM NODE properly. And make sure you are using dimensions (resolution) that are divisible by 16. Otherwise, it will throw an error.

    durachell · Feb 2, 2026

    I mean, what version of comfyui are you using? Thank you.

    durachell · Feb 2, 2026

    @ModFrenzy may I know the version of comfyui you use?

    ModFrenzy
    Author
    Feb 2, 2026

    @durachell ComfyUI version: 0.9.2 // ComfyUI frontend version: 1.36.14

    durachell · Feb 2, 2026

    @ModFrenzy Thank you

    RexAe14 · Feb 4, 2026

    Any suggestions on what to do if I want to run this on 8GB Rtx4060 and 16 GB 5600mhz ram?

    rashield710 · Feb 17, 2026 · 1 reaction

    I loved it, thank you so much!

    Do you think there's a way to add a feature to the workflow to increase the video generation time? At least by 10 seconds?

    ModFrenzy
    Author
    Feb 17, 2026 · 1 reaction

    Besides SAGEATTENTION? No, not really :(

    rashield710 · Feb 17, 2026

    @ModFrenzy Oh, sorry! I worded my question poorly. I meant increasing the video length to 10 seconds. Do you think that's possible?

    ModFrenzy
    Author
    Feb 17, 2026

    @rashield710 LENGTH:

    81 frames for 5 seconds. I do not recommend trying a longer or shorter duration with this workflow. It increases the generation time by a lot and the quality degrades. But if you must, just increase the frames by 16 per 1 second.

    Fluxcapacitor99 · Feb 19, 2026

    To keep it simple, just make another workflow in comfyui to combine 2 clips. 4 nodes total:

    2x Load Video (VHS)

    1x Merge Images (VHS)

    1x Video Combine (VHS)
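
    For reference, the same concatenation can be sketched outside ComfyUI with ffmpeg's concat demuxer (a minimal sketch assuming ffmpeg is on PATH and both clips share resolution/codec/fps; the file names are placeholders):

        # Concatenate two clips without re-encoding, via ffmpeg's concat demuxer.
        import os
        import subprocess
        import tempfile

        def concat_clips(clip_a: str, clip_b: str, out_path: str) -> None:
            # The concat demuxer reads a text file listing the input clips.
            with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
                f.write(f"file '{os.path.abspath(clip_a)}'\n")
                f.write(f"file '{os.path.abspath(clip_b)}'\n")
                list_file = f.name
            try:
                subprocess.run(
                    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_file,
                     "-c", "copy", out_path],
                    check=True,
                )
            finally:
                os.remove(list_file)

        concat_clips("clip1.mp4", "clip2.mp4", "combined.mp4")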

    StevenArmstrong · Feb 17, 2026 · 1 reaction

    That's a good one! With an 8GB rtx3070 and 16GB ram it took only 11 minutes from start to an upscaled and interpolated 544x960 video. The only downside is the 59GB swap file it required.

    aaasss · Mar 3, 2026

    KSamplerAdvanced: [WinError 2] The system cannot find the file specified. What's going on?


    aaasss · Mar 3, 2026
    The problem seems to lie in nodes 199 and 200.

    ModFrenzy
    Author
    Mar 5, 2026

    Nodes 199 and 200 require SAGEATTENTION and PATCH TORCH. Please make sure those are installed properly. If you do not have SAGEATTENTION and PATCH TORCH, just use another version of the workflow, thank you!

    misstuko638 · Mar 19, 2026

    ModuleNotFoundError: No module named 'sageattention'
    Please help me solve this.

    ModFrenzy
    Author
    Mar 19, 2026

    There are some Reddit posts on how to install Sageattention:

    https://www.reddit.com/r/StableDiffusion/comments/1j6kqtd/how_to_install_sageattention_easy_way_i_found/

    You can also ask ChatGPT to help you install it. Otherwise, use the previous workflows which do not use sageattention.

    AliusNext · Apr 1, 2026 · 1 reaction

    PainterI2V is missing and idk what the hell it is

    ModFrenzy
    Author
    Apr 2, 2026 · 1 reaction
    AliusNext · Apr 4, 2026

    @ModFrenzy Thank you, but the result is very blurry, what am I doing wrong?

    katanawang57478 · Apr 6, 2026

    I'm facing this problem, does anyone know how to solve it? plz help me......

    KSamplerAdvanced

    'utf-8' codec can't decode byte 0xc7 in position 3: invalid continuation byte

    ModFrenzy
    Author
    Apr 6, 2026

    That error:

    'utf-8' codec can't decode byte 0xc7 in position 3: invalid continuation byte

    means Python (and therefore ComfyUI) is trying to read a file as UTF-8 text, but the file is actually encoded differently (or is partially corrupted).

    What's causing it (in ComfyUI specifically)

    In ComfyUI, this usually happens when:

    • A workflow JSON is saved in a non-UTF-8 encoding

    • A custom node file (.py) contains non-UTF-8 characters

    • A text file (prompt, config, etc.) was edited in something like Windows Notepad with ANSI encoding

    • A model or metadata file is being incorrectly read as text

    • A bad copy/paste introduced invalid characters

    Quick fixes (try in order)

    1. Re-save the file as UTF-8

    If you know which file triggered the error: open it in VS Code or Notepad++, convert the encoding to UTF-8, and save.
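
    A minimal Python sketch of the same re-save step (it assumes you know or can guess the source encoding; the helper name and the cp1252 default are illustrative):

        # Re-save a suspect file as UTF-8. Assumes the original encoding is
        # something like cp1252 or GBK; errors="replace" tolerates bad bytes.
        from pathlib import Path

        def resave_as_utf8(path: str, source_encoding: str = "cp1252") -> None:
            raw = Path(path).read_bytes()
            text = raw.decode(source_encoding, errors="replace")
            Path(path).write_text(text, encoding="utf-8")

        resave_as_utf8("workflow.json")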

    2. Check custom nodes

    If you recently installed nodes: go to ComfyUI/custom_nodes/, look for recently added folders, open the .py files and ensure they're UTF-8, or temporarily remove the new node and restart.

    3. Workflow JSON issue

    If it happens when loading a workflow: open the .json file in a code editor and re-save it as UTF-8, or re-download the workflow.

    4. Terminal encoding (Windows-specific)

    If you're on Windows:

    Run before launching:

    chcp 65001

    This forces UTF-8 in the terminal.

    5. Find the exact file (important)

    Run ComfyUI from terminal and look for a fuller traceback like:

    File "...", line X, in ...

    That line usually tells you which file is breaking.

    If it still fails, tell me:

    • When the error appears (startup? loading workflow? running prompt?)

    • Your OS (Windows / Mac / Linux)

    • Whether you installed new custom nodes

    I can pinpoint the exact cause much faster with that 👍

    katanawang57478 · Apr 6, 2026

    @ModFrenzy thx OP, I figured it out, my comfyui path contained Chinese characters...... One more question: I saw a pair of lora nodes in the workflow, high and low, but lots of lora models don't come as a pair. So may I ask, can that kind of single lora model be used in OP's workflow too? Which lora node, high or low, should it be applied to, or both?

    ModFrenzy
    Author
    Apr 6, 2026

    @katanawang57478 You need to ask the creators of those LORAs, it's impossible for me to answer that question~

    katanawang57478 · Apr 7, 2026 · 1 reaction

    @ModFrenzy OK, thx for the patient answer! I already made some wonderful videos, your workflow is awesome!

    A_White_Seal · Apr 17, 2026

    Hi everyone, I've already installed all the dependencies mentioned by TastySin. When I disable frame interpolation and upscaling, I can run the workflow with three LoRAs loaded without any issues. However, once I enable interpolation and upscaling – using the recommended settings and two LoRAs (High+Low) – I run out of memory at the second KSampler (OOM, about 800MB short according to the log). My suspicion is that ComfyUI loads the interpolation and upscaling models at the same time as the rest of the pipeline and never releases them during video generation; if those models were loaded only after the initial generation finished, it might work correctly. I also noticed a custom node – I think it's called "PainterI2V" (not listed in the Custom Nodes Manager) – that crashes every second time I use it, forcing me to restart ComfyUI and regenerate. The author and some commenters mention that they run the workflow smoothly on 8GB VRAM + 16GB RAM, and I honestly don't understand how they tuned it.

    First of all, thank you for this amazing workflow – it makes high‑quality, slightly higher‑resolution results with Wan2.2 possible on 8GB VRAM. I really appreciate the extreme parameter tuning and structural optimizations. I'm not nearly as skilled at debugging as you; my only VRAM management trick is setting "Lowvram" in ComfyUI's server config. According to your README and user feedback, my hardware (8GB VRAM + 16GB RAM) should be able to complete a full generation using your recommended 4:3 resolution (not the test one – sorry, I'm away from my PC right now so I don't recall the exact numbers), with two LoRAs (High+Low), and with interpolation + upscaling enabled. But at the second KSampler I hit OOM. Could anyone share your ComfyUI settings (like --lowvram, --reserve-vram, etc.) and any VRAM‑saving details (e.g., node ordering, model unloading tricks, custom node settings)? I'd be extremely grateful for any help.

    I've asked AI before, but the solutions I got were all very generic and didn't really help. So I'm really hoping the author or someone who can run this smoothly could give me some specific advice or hints – I would truly appreciate it. By the way, my English isn't very good, and I'm afraid of making mistakes or misunderstandings in technical terms, so I used a translator. Thanks in advance.

    ModFrenzy
    Author
    Apr 18, 2026

    Hey, thanks for the detailed feedback!
    1. If PainterI2V is giving you problems, here are a couple of things to try. Update everything (ComfyUI, other custom nodes, etc.). Disable PainterI2V and use the default WanImageToVideo.
    2. You can disable frame interpolation and upscaling while you are generating the video. After you have generated the video, you can just run the workflow again with those nodes enabled. It will just do the upscaling and frame interpolation (make sure the NOISE SEED is NOT randomized).
    3. I'm not really an expert on optimization. I have created something that worked for me and shared it with you guys :D I'm sorry I couldn't help you more.

    A_White_Seal · Apr 18, 2026 · 1 reaction

    @ModFrenzy Hi,

    I'm surprised by how quickly you replied – it truly gave me some comfort when I was feeling overwhelmed. Since I won't have access to my computer for a while, I'll try updating all the components later (I might end up just tolerating PainterI2V's crashes and constant restarts, because I saw on that custom node's GitHub README that it seems to improve the slow-motion issue caused by acceleration when generating with Wan2.2). After all, your workflow is meticulously tuned – especially the motion_amplitude and SHIFT parameters. Even though I understand from your comments what these parameters do and how to adjust them (honestly, I'm just lazy [facepalm]).

    Regarding frame interpolation and upscaling: I've heard elsewhere that this two‑step method is possible, but I was confused about how to actually do it. Now I fully understand, and I'll follow your advice. Thank you very much for the further guidance and reminders.

    Finally – though this might sound like flattery – I had previously tried to build (or rather, "reference and hack") the official Wan2.2 workflow template. As I mentioned before, I know almost nothing about VRAM optimization and management compared to you. I used Q4KM quantization (and heavily quantized everything else), and generating a 640x480 video took 40 minutes, with terrible results. Your approach is fully laid out in what you call your "share" – not only do you let people who don't like to tinker use your solution and get better results, but you also show those who do like to tinker a new direction (like SageAttention). That's a very selfless and constructive way to share.

    Anyway, thank you again for your guidance!

    A_White_Seal · Apr 21, 2026 · 1 reaction

    (If you're running into similar issues, maybe you'll find a solution here.)

    Thank you very much for this workflow. After several days of tweaking, I finally got it running stably on my machine. I'd like to share some settings, adjustments, and suggestions for getting this workflow to run reliably on a PC (I'm using a laptop) with 8GB VRAM and 16GB RAM.

    1. ComfyUI version & environment setup

    I recommend using the portable version of ComfyUI, but don't start with the very latest release – the PyTorch and Python versions are too new, and I couldn't find compatible versions of sageattention and triton. I downloaded an older build with Torch 2.9.1 + cu130 and Python 3.13.9.

    • First install triton (<3.6).

    • Find a precompiled wheel for sageattention (version 2.2.0, compatible with cu130 and torch 2.9.0).

    • Also get the triton-windows package, download the matching version, extract it, and copy the include and libs folders into your ComfyUI environment directory, overwriting when prompted. This step is very important – otherwise the KSampler will crash. (I'm not sure if newer versions already include these libraries; my version was missing them, so I did this manually.) Note that you may need MSVC installed on your PC – I saw some triton compilation errors in the terminal.

    Once the environment is set up, you can update ComfyUI (or you'll have to tolerate various low-version warnings and hints).

    2. Launch arguments

    I recommend adding the following arguments:

    set PYTORCH_ALLOC_CONF=expandable_segments:True,max_split_size_mb:256 (this helps prevent OOM at the second KSampler)

    --reserve-vram 0.5

    --lowvram (required if you use the Q8 Unet)

    3. Model choices

    Unet: Q8 quantization works great.

    CLIP: Q5KM.

    For other models, follow the author's recommendations.

    Loading two LoRAs (High+Low) is the upper limit – at least on my system.

    4. Workflow modifications – removing "junk nodes"

    In the workflow, go to the two Sage+Torch+NAG+Shift nodes and delete the SD3 sampler – this node is problematic, and the shift value is unnecessary. It affects the text condition vector and produces bad results (e.g., text written on the image + wavy noise). To compensate for the author's point about shift values reducing slow motion, you can adjust the PainterI2V parameters instead.

    Also delete the get nodes hidden behind the WIDTH and HEIGHT nodes connected under PainterI2V – those two are completely redundant.

    Since this node runs very fast (both TastySin and Dasiwa's Unets already have baked lightning + sageattention acceleration), I recommend using the unipc_bh2 simple sampler for KSampler; no other parameter changes are needed.

    Replace the VAE decoder with a tiled VAE decoder with parameters: 512, 128, 32, 8.

    5. Two‑step generation and VRAM cleaning

    As the author suggested, first generate a test‑resolution video. Then on the second run, switch to 896×672 (or another resolution) – this way ComfyUI doesn't need to reload CLIP, LoRAs, etc., making the process more stable. (Generating directly at high resolution also works fine.)

    For frame interpolation and upscaling, follow the author's two‑step method, otherwise OOM occurs. From my testing, you also need to add a VRAM Clean node before both interpolation and upscaling to prevent OOM.

    Final thoughts

    Overall, this workflow is excellent. The sageattention mechanism in particular enables fast and stable generation of high‑resolution, low‑quantization (i.e., high‑quality) videos on 8GB VRAM – something I never thought possible given my limited knowledge.

    That said, some of the steps above might be redundant – my debugging process was a bit too convoluted (laughs). So take this only as a reference (but if you're running into similar issues, maybe you'll find a solution here). The general approach is:

    Install dependencies according to your versions → If KSampler crashes, manually copy the missing environment libraries → Adjust launch arguments → Tweak the workflow moderately.

    Thanks again!

    A_White_Seal · Apr 21, 2026

    Oh, by the way – the PyTorch 2.7.0 nightly that the author requested might not be necessary anymore. The new stable version of torch may have already incorporated the features that were tested in the older nightly builds and made them available in the stable release. At least I haven't encountered any compatibility issues or error messages related to torch.

    ModFrenzy
    Author
    Apr 21, 2026

    Amazing detailed feedback! This would definitely help many people with similar PC specs to yours. I'm planning on updating this workflow soon as well. I also realized some of the redundancies you pointed out.

    As for the SHIFT value, I was not able to produce some NSFW actions (e.g. footjob) without raising the SHIFT to 8 or 10. So, I'm really not sure if it's redundant. I need to do further testing.

    As for frame interpolation and upscaling, I also recommend just moving them into a separate workflow. So, you can do that step after you generated your videos. It does not have to be in the same workflow especially if you have less than 32GB ram.

    A_White_Seal · Apr 22, 2026

    @ModFrenzy Oh, sorry. I've done some more testing and found that what affects the conditioning is neither the SD3 sampler (though I still don't understand why the SD3 sampler node is connected in the Wan workflow – even if the SHIFT value does help with motion) nor NAG. It's actually PainterI2V. I also saw the same issue in the Issues section on its GitHub page – someone pointed out that setting the parameter too high may cause the model to ignore the prompt's background descriptions and image colors. In my case, the model was writing text in the video. However, PainterI2V is very effective, so for normal generation, don't set its parameter above 1.15 (when you need large motion, you can set it higher, because camera movement and subject movement will override PainterI2V's 'rendering bias'). What I mean is: if you're generating SFW content, such as using a Live2D LoRA or scenes without much change, don't set this parameter above 1.15 – otherwise the model will completely ignore the prompt and produce random results. As for the parameters that enhance motion, it's better to adjust the SHIFT value of the SD3 sampler instead...

    A_White_Seal · Apr 22, 2026

    Also, I still don't fully trust the NAG and SD3 nodes – after all, they are connected to conditioning – so I simplified my negative prompt, removing redundant descriptions and some unnecessary ones ;w;

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,496
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/22/2026
    Updated
    4/30/2026
    Deleted
    -

    Files

    wan22I2VGGUFNSFW8GBVRAM_tastysinQ8GGUF.zip

    wan22I2VGGUFNSFW8GBVRAM_v13Sageattention.zip