    Wan POV Blowjob - v1.2 T2V
    NSFW

    Version 1.2 Update: Higher quality and an expanded dataset. It can now do cumshots where the woman keeps sucking and you can see the cum dripping out of her mouth. The I2V version can handle starting images that don't already contain a penis.

    Note that all of the previews were created using the lightx2v (self-forcing) lora. You may get better motion without it, but I didn't do much testing that way. The workflows I used can be downloaded by getting the "Training Data".

    T2V tips: It seems to maintain motion very well even at 0.7 strength, and you'll get more facial variety that way, but I seem to get the best results at 0.95 strength for whatever reason. You can add a penis lora to help it out.

    I2V tips: If the penis does not exist in the starting image, you probably want to add a penis lora to your workflow so it has a better shape. You can trigger it with something like:

    A penis appears from the bottom of the frame centered at the bottom and pointed straight up.

    If you want cum to appear:

    White cum shoots out of the penis.

    This was trained on 4 different angles, here is how to trigger them:

    A woman is lying on her stomach between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.
    An overhead view of a woman kneeling between the legs of the viewer and performing oral sex on a man. She moves her head back and forth as she sucks the penis.
    A woman is leaning over a man positioned in between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.
    A woman is kneeling in between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.

    Description

    Higher quality training data and expanded dataset

    FAQ

    Comments (49)

    makiaeveliJul 18, 2025· 6 reactions

    these look wan-derful

    crombobularJul 18, 2025· 7 reactions

    the sequel we've all been waiting for

    asdfghjkl051824927Jul 18, 2025

    When I use it, it changes the appearance of the penis and the woman; it seems the strength can't be set too high, and it can't do deepthroat.

    2410747553660Jul 30, 2025

    Same here, is there any solution?

    2410747553660 For now I just lower the weight and add other loras. There's no other way; I hope the author can make more fixes.

    seanhhhJul 19, 2025

    It's hard to get the detail of the woman's cheeks hollowing when she sucks. Am I forgetting something?

    dtwr434
    Author
    Jul 19, 2025

    Hmm, not sure. It always just kind of happened automatically for me. Did you try my example prompts? Are you using T2V or I2V?

    HomeBoxOfficeJul 24, 2025· 3 reactions

    Is there a specific command/trigger for the cum-while-sucking effect? Thanks, looks great.

    dtwr434
    Author
    Jul 24, 2025· 2 reactions

    I think any mention of "white cum" will do it. In hindsight, I maybe shouldn't have mentioned the color in the captioning, because it doesn't seem to work very well if you just say cum by itself.

    aiv_creatorAug 1, 2025· 20 reactions

    IMPORTANT: INFORM ALL WAN USERS! I discovered how to use NSFW/SFW loras in WAN 2.2 T2V, getting the full potential of "high noise" and "low noise". The results were impressive!! I ran it on an RTX 3060 6GB with 32GB RAM, at 480x480, length: 41, generation time: 5 minutes per video. The quality is fantastic! For those of you with more powerful computers, the results will be even more incredible. Model used: WAN 2.2 - Q4_1.gguf

    Follow the steps below:

    1. Don't use acceleration/enhancement loras (such as Lightx2v) on "high noise"; use them only on "low noise" at strength 1. (I'm using this version of the Lightx2v lora: lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.)

    2. Use NSFW/SFW loras on both the "high noise" and "low noise" models: strength 2 on "high noise" and half of that, strength 1, on "low noise". You can vary the strengths as you like, but always keep the "low noise" strength at half the "high noise" strength.

    3. Use these settings in the KSamplers:
    "high noise": steps: 9 / cfg: 3.5 / euler/simple / start_at_step: 0 / end_at_step: 1
    "low noise": steps: 9 / cfg: 1.0 / euler/simple / start_at_step: 1 / end_at_step: 10000

    Note: to improve speed, I set Windows to "best performance", disable unnecessary background programs and applications, and lower the desktop resolution. This significantly speeds up video generation. I hope this helps.
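    The numbers from the comment above can be collected into a small config sketch. These are plain Python dicts for reference, not actual ComfyUI node code, and "nsfw_lora" is a placeholder name for whatever NSFW/SFW lora you use:

```python
# Illustrative summary of the two-KSampler WAN 2.2 setup described above.
# Plain dicts, not real ComfyUI nodes; "nsfw_lora" is a placeholder name.

HIGH_NOISE = {
    # no acceleration loras (e.g. Lightx2v) on the high-noise model
    "loras": {"nsfw_lora": 2.0},
    "steps": 9, "cfg": 3.5,
    "sampler": "euler", "scheduler": "simple",
    "start_at_step": 0, "end_at_step": 1,
}

LOW_NOISE = {
    "loras": {
        "nsfw_lora": 1.0,  # always half the high-noise strength
        "lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16": 1.0,
    },
    "steps": 9, "cfg": 1.0,
    "sampler": "euler", "scheduler": "simple",
    "start_at_step": 1, "end_at_step": 10000,
}

# The rule of thumb from the comment: low-noise strength = high-noise / 2,
# and the low-noise sampler picks up where the high-noise sampler stops.
assert LOW_NOISE["loras"]["nsfw_lora"] == HIGH_NOISE["loras"]["nsfw_lora"] / 2
assert LOW_NOISE["start_at_step"] == HIGH_NOISE["end_at_step"]
```

    The handoff point (end_at_step on "high noise" equal to start_at_step on "low noise") is what splits the denoising between the two models; the follow-up comment suggests moving it to 3 or more steps can improve results.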

    aiv_creatorAug 3, 2025

    Doing more tests here, I realized the results are even better using 3 or more steps in 'high noise'.

    A_rdatyaksh_IAug 3, 2025

    this should theoretically work on i2v, yes?
    🥺 pls reply or I'll forget to post my report 🥺😔. I'll test for you: 16GB VRAM / 32GB RAM

    Jesus__SkywalkerAug 7, 2025

    aiv_creator Yeah man, these settings really are great. I'm gonna try 5 high and 8 low and see how that goes.

    A_rdatyaksh_IAug 8, 2025

    Jesus__Skywalker 
    heres my report.
    16gb vram 4060ti oc 32gb ram ryzen 5 4.4 ghz

    480p 2.5-4 minutes 81 frames 16fps (5 second video)

    480p 7-8 minutes 129 frames 24fps (5 second video)

    480p 10-12 minutes 193 frames 24fps (8 seconds video)

    720p 15 minutes 81 frames 16fps (5 second video)
    this goes across all models i tried.
    so far: i got 2.2 aio (not lightspeed) to work, both t2v and i2v (i'm not even interested in t2v, because i like to make videos of my own image creations and i can't control what comes out of t2v).
    when i found that to work, i tried 2.1: all worked, 480, 720, aio's, lightspeed. besides vace, i don't like vace. the reason i bring this up is that ever since video ai came out with cogxvideo, i tried cogxvideo, hunyuan, and ltx, and nothing worked properly. i struggled installing sageattn. i struggled with triton. none of the installations people said worked actually did. cogxvideo worked for exactly one video before the workflow broke, and that video took almost 1 hour 25 minutes on my 3060, back when it came out last year. in comparison, waiting 15 minutes for a 720 is like seconds passing.

    your method really did work, with native comfy nodes even. i hate custom nodes, they always give problems; i hate comfyui in general, but i'm just happy it worked.

    thanks be to jesus christ for salvation
    thanks be to jesus_skywalker for ball knowledge

    WalentyAug 9, 2025· 4 reactions

    Probably the video model I use the most, Super effective and easy.

    riddlezone98871Aug 10, 2025· 3 reactions

    Please, someone help me. It's the first time I'm using img2vid. I'm using a 480p model, but when it gets to the KSampler node it crashes and tells me the GPU ran out of memory. I have an NVIDIA GeForce RTX 4050, 16GB RAM, 5.9GB VRAM... does anyone know what parameters I should change?

    Me too, first time, using the same workflow as the creator. I don't know what's wrong 🥲

    dtwr434
    Author
    Aug 11, 2025

    I've got a 3090 with 24GB of VRAM, which probably explains why it works for me. I don't know everything that's needed to run with less VRAM, but it will probably involve using a GGUF instead of the fp8_scaled model I was using, and possibly lowering the resolution. I'd look for workflows on civitai that advertise working with low VRAM and try one of them.

    RustRevolutionUSAug 12, 2025

    Idk if it's the same issue you're having, but make sure you didn't accidentally set the batch (in the KSampler node) to a high number. Also, don't try to generate with a width over 832px and a height over 480px (you might be able to go higher, but this is the safe zone for 480p I2V). And if you're starting with a large-resolution image, you may need an image resize node before the KSampler to shrink it down to the width/height you have set in your KSampler.
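    As a rough sketch of what that resize step should do, here is a hypothetical helper (not from the thread) that fits a source image into the 832x480 safe zone mentioned above, keeps the aspect ratio, never upscales, and snaps dimensions down to a multiple of 16 (a common requirement for video models; check your model's docs):

```python
# Hypothetical helper: fit a source image into an 832x480 "safe zone",
# preserving aspect ratio, never upscaling, and rounding dimensions down
# to a multiple of 16.

def fit_resolution(w, h, max_w=832, max_h=480, multiple=16):
    # integer math avoids float rounding surprises
    if w * max_h <= max_w * h:            # height is the binding constraint
        new_w, new_h = w * max_h // h, max_h
    else:                                 # width is the binding constraint
        new_w, new_h = max_w, h * max_w // w
    if new_w >= w and new_h >= h:         # never upscale the source
        new_w, new_h = w, h
    # snap down to the nearest multiple of 16
    new_w = max(multiple, new_w // multiple * multiple)
    new_h = max(multiple, new_h // multiple * multiple)
    return new_w, new_h

print(fit_resolution(1920, 1080))  # -> (832, 464)
```

    Feed the resulting width/height into both the resize node and the KSampler's latent size so they agree.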

    RustRevolutionUS the batch is 1, I tried resizing the image… I tried wan 2.2 but nothing…

    kindanonAug 25, 2025

    Can you try launching comfy with the --cache-none flag?

    2775442127Sep 8, 2025

    Change the resolution.

    mi922Sep 29, 2025

    Maybe I'm wrong, but I'd guess it's the size of the model: you can't load a model bigger than your VRAM.

    xephhyAug 20, 2025· 10 reactions

    looks like the latest ComfyUI update broke the interpolation in your workflow? Not sure..
    "RIFE VFI VFI model RIFE requires at least 2 frames to work with, only found 1. Please check the frame input using PreviewImage."

    margus4477Aug 26, 2025

    same, no clue how to fix it

    HoneySandhuSep 19, 2025

    same here

    yanmeng1413307Oct 7, 2025

    @Rocks97 Please tell me, I can't fix it

    fluxxesOct 12, 2025

    can you share that wf please?

    haenlesn937Aug 21, 2025

    Somehow this lora does not work quite well with Wan2.2 + Lightx2v LoRa...
    The woman does suck, but she only takes in the glans and does not go any deeper no matter the prompt...

    dtwr434
    Author
    Aug 21, 2025

    Are you using the lightx2v with the high noise steps? Wan 2.2 really works so much better without it for the high noise.

    haenlesn937Aug 21, 2025

    @dtwr434 Yes I do have both high and low noise steps. Interesting, I'll try that

    FemBroAug 25, 2025

    Sadly it seems to completely freeze at the KSampler, stuck at 0/0 even though the it/s reading isn't 0. Tried twice and it sat there for an hour without budging. Turned it off and everything went back to normal, so not sure what's up.

    AbsoluteBussinAug 26, 2025

    Very clean and high quality movement. The only problem I stumbled upon is the tip of the penis disappearing after she pulls it out. Any plans on making another version?

    dtwr434
    Author
    Aug 27, 2025· 1 reaction

    Yeah, that's kind of why I suggest adding in a separate penis lora (doesn't have to be at full weight) to help with that kind of thing. Not sure if I'll do updates at this time. I'd make a Wan 2.2 version, but it's just so slow to generate and the speed loras ruin it completely, so it's hard to justify even using it.

    EndlessDreamOnceHumanOct 6, 2025

    Awesome. I'd been trying to get a difficult blow working, and this is the only LoRA that worked. Works for all kinds of BJ, not just POV. I used 14B 750

    poisas69220Dec 8, 2025

    Could anyone give me a lead on how they make those "slow motion" videos?

    dtwr434
    Author
    Dec 9, 2025· 1 reaction

    Not sure exactly what you mean, but depending on the version of lightning lora you're using, it slows motion down a lot. Most people see that as a bad thing, but maybe that's what you want?

    poisas69220Dec 9, 2025

    @dtwr434 I do want to make slow-motion videos; got any tips on where I should look? I'm new, I spent a week learning and generated like 100 gigs of video material to test, but I never ran into the slow-motion issue :D If anything I get videos that are too fast and want to slow down. I'm on wan 2.2 i2v.

    dtwr434
    Author
    Dec 10, 2025· 1 reaction

    @poisas69220 I don't know of any proper way to get slow motion, but most likely for the preview videos, I was using a lightning lora, which is supposed to make things generate much faster, but also had the unintended side effect of slowing everything down. You can probably find the appropriate lora from here somewhere. https://huggingface.co/lightx2v. The older it is, the more it probably slows motion down, as I think they've been trying to fix that over time.

    ValuedRenderDec 10, 2025· 1 reaction

    @dtwr434 The clean base safetensors model does slow motion by prompt ("slow motion"). If you need a clean, good result, try the base model with a single lora applied instead of using remixed versions with tons of merged loras; the lightning distill lora version works best.

    poisas69220Dec 12, 2025

    @ValuedRender Could you also share some info: is civitai all about videos and images, or is there sound too?

    ValuedRenderDec 13, 2025

    @poisas69220 Sorry, what exact information are you looking for: video, images, sound?

    ValuedRenderDec 13, 2025

    One more hint for making slow motion:
    when you have a 16 fps video, interpolate it (with RIFE, I guess) to 64 fps, then combine the video at 32 fps, so it plays slowed down at 2x the duration.
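    The arithmetic behind that trick can be sketched in a few lines (a plain calculation, not RIFE or ComfyUI code; the 4x multiplier and fps values are the ones from the hint above):

```python
# Frame-interpolation slow motion: generate extra in-between frames,
# then encode at a playback fps lower than the interpolated frame rate.

def slowdown_factor(src_fps, interp_multiplier, out_fps):
    """How many times longer the output plays than the source."""
    return src_fps * interp_multiplier / out_fps

# 16 fps source, 4x RIFE interpolation (16 -> 64 fps worth of frames),
# combined/encoded at 32 fps playback:
print(slowdown_factor(16, 4, 32))  # -> 2.0 (twice the duration)
```

    Encoding the same 64 frames-per-source-second at 64 fps instead would give a factor of 1.0, i.e. normal speed but smoother motion.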

    poisas69220Dec 13, 2025

    @ValuedRender My question is: does civitai.com have stuff (models, loras, workflows, etc.) that isn't images and videos? For example audio creation, music creation, text-to-audio stuff, etc.?

    ValuedRenderDec 13, 2025· 1 reaction

    @poisas69220 I understand what you mean. CivitAI offers a paid service for generating images and videos through its website. The models and loras may be base open-source models developed by different teams (if you're looking for the sources of the models, they're all on the huggingface site), and users also upload their own models, which you can use for generation on this site. It also offers an "easy" way to train a lora for videos or images.

    As for text and audio generation, the website doesn't offer this feature directly. However, in the articles section you can find a lot of information about how users generate audio or text using services or programs like ComfyUI. Workflows are in most cases made by users for the ComfyUI software, and ComfyUI provides workflow templates for everything. So for text it's GPT in most cases, and for audio it's the huggingface site: huggingface provides a playground where you can try 80% of the stuff. It's limited by a daily quota, and you can buy more GPU time for playing with things like audio (voice cloning, voice replacement, lip sync, etc.).

    poisas69220Dec 14, 2025

    @ValuedRender Thanks for explaining. I'm running models on my own pc, a 5070ti with 16GB. This is my second week with ComfyUI, still many things to learn. I found this website very helpful for learning basic t2i and i2v generation. But the recent ComfyUI update messed up the node manager: it won't install nodes, and basically all the workflows shared here aren't compatible anymore. Those nodes can't be installed because of compatibility issues, so I need to downgrade a version of something I haven't learned yet :D

    ValuedRenderDec 14, 2025

    @poisas69220 Yeah, could be. The better option is to use the latest version of ComfyUI; even if some old workflow is broken, you can replace the "old" nodes with current analogs. Good luck.

    poisas69220Dec 28, 2025· 2 reactions

    @dtwr434 It's been a couple of weeks since I got into ComfyUI, and I'm learning many things every day. Yes, the wan 2.2 base model + 4-step loras indeed gives the slow-motion effect I was looking for. I also used RIFE interpolation to generate extra frames and rebuilt the video at a lower fps for an even slower speed :) Thanks for the help, it guided me a lot. Now I'm into more advanced workflows. I've mastered the ReActor node: face-swap videos, face-swap images, upscaling, deblurring images. Now I'm playing a bit with qwen image edit, trying a few models, and I'm really impressed by the quality of the output. Good luck and happy new year :D Damn, in a couple of days I lost 800GB of space :D but still 500GB to go :D

    huuphuocmar283Jan 27, 2026· 2 reactions

    "RIFE VFI: VFI model RIFE requires at least 2 frames to work with, only found 1. Please check the frame input using PreviewImage." How to fix?