CivArchive
    Wan POV Blowjob - v1.0
    NSFW

    Version 1.2 Update: Higher quality and an expanded dataset. It can now do cumshots where the woman keeps sucking and you can see the cum dripping out of her mouth. The I2V version can now handle starting images that do not contain a penis.

    Note that all of the previews were created using the lightx2v (self-forcing) lora. You may get better movement without it, but I didn't do much testing without it. The workflows I used can be downloaded by getting the "Training Data".

    T2V tips: It still maintains motion very well at 0.7 strength, and you'll get more facial variety that way, but I seem to get the best results at 0.95 strength for whatever reason. You can add a penis lora to help it out.
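
    If it helps, this is roughly where that strength value lives in an API-format ComfyUI prompt. A hand-written sketch assuming the stock LoraLoaderModelOnly node; the node id and lora filename are placeholders, not pulled from my actual workflow:

        # Sketch: the LoRA strength knob in a ComfyUI API-format prompt.
        lora_node = {
            "class_type": "LoraLoaderModelOnly",  # stock ComfyUI model-only LoRA loader
            "inputs": {
                "model": ["1", 0],  # "1" = whichever node loads the Wan checkpoint
                "lora_name": "wan_pov_blowjob.safetensors",  # placeholder filename
                "strength_model": 0.95,  # drop to ~0.7 for more facial variety
            },
        }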

    I2V tips: If the penis does not exist in the starting image, you probably want to add a penis lora to your workflow so it has a better shape. You can trigger it with something like:

    A penis appears from the bottom of the frame centered at the bottom and pointed straight up.

    If you want cum to appear:

    White cum shoots out of the penis.

    This was trained on 4 different angles; here is how to trigger them (a small prompt-rotation sketch follows the list):

    A woman is lying on her stomach between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.
    An overhead view of a woman kneeling between the legs of the viewer and performing oral sex on a man. She moves her head back and forth as she sucks the penis.
    A woman is leaning over a man positioned in between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.
    A woman is kneeling in between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.
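
    To rotate through those angles automatically, here is a minimal Python sketch; the four strings are the triggers above verbatim, and the cum line is the optional extra from the I2V tips:

        import random

        ANGLES = [
            "A woman is lying on her stomach between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.",
            "An overhead view of a woman kneeling between the legs of the viewer and performing oral sex on a man. She moves her head back and forth as she sucks the penis.",
            "A woman is leaning over a man positioned in between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.",
            "A woman is kneeling in between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.",
        ]

        def build_prompt(cum: bool = False) -> str:
            prompt = random.choice(ANGLES)  # pick one of the four trained angles
            if cum:
                prompt += " White cum shoots out of the penis."
            return prompt

        print(build_prompt(cum=True))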

    Comments (59)

    dream_AI · Mar 8, 2025
    CivitAI

    Model looks great! I can't get your workflow working in ComfyUI tho, having trouble installing the Wan custom nodes :(

    dtwr434
    Author
    Mar 8, 2025

    Hmm, I'm using https://github.com/kijai/ComfyUI-WanVideoWrapper, which I just git clone into the custom_nodes directory. There's a native workflow as well, though, and I couldn't tell you which is the better one. Honestly, I would probably look into other workflows that people have uploaded if you're looking for something fancy. I tend to just use the bare-bones versions with minimal changes.
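
    (If "git clone into custom_nodes" is the part giving you trouble, it just means something like this; a sketch, and the ComfyUI path is an assumption about your install. Restart ComfyUI afterwards.)

        # Sketch: install the wrapper node pack into ComfyUI's custom_nodes directory.
        import subprocess

        subprocess.run(
            ["git", "clone", "https://github.com/kijai/ComfyUI-WanVideoWrapper"],
            cwd="ComfyUI/custom_nodes",  # assumed location of your ComfyUI install
            check=True,
        )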

    I also use some Impact Pack thing to enable dynamic prompts, but that's entirely optional, and I wasn't even using dynamic prompts for these samples.

    azeli · Mar 8, 2025

    @dtwr434 My 2 cents is that the native one is far better now. Not sure what updates they've made, but I'm easily running the 720p model with only 20GB VRAM being used, no messing with block swap etc. The wrapper massively lacks stability: crashes, system freezes etc. Comfy seems to manage resources much better.

    720p fp8 model, t2v gen at 720 x 1248, 81 frames, 30 steps currently takes 25min on a 3090, with teacache + sage.

    A 240 x 416 gen only takes 1m30s, so I'm messing with upscales currently.
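
    (That gap is mostly raw pixel count; a quick back-of-the-envelope check, using the resolutions from the comment above:)

        # Per-frame pixel counts at the two resolutions mentioned above.
        hi_res = 720 * 1248  # 898,560 pixels per frame
        lo_res = 240 * 416   #  99,840 pixels per frame
        print(hi_res / lo_res)  # 9.0x the pixels; attention cost grows faster than
                                # linearly with tokens, so 25min vs 1m30s is plausible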

    dtwr434
    Author
    Mar 8, 2025

    @azeli Good to know, thanks. I'll check it out as stability issues have been a big problem for me just making these preview images.

    azeli · Mar 8, 2025

    @dtwr434 Would be interested to know your gen times for a full 720 x 1280 generation seeing as we have the same card

    dtwr434
    Author
    Mar 8, 2025

    @azeli How much system RAM do you have? I feel like mine would just die trying something like that with 48GB of system RAM. As it is, I had to install something that kills processes taking up too much memory to stop my system from freezing up after a few 640x480 generations.

    azeli · Mar 8, 2025

    @dtwr434 64GB, but as I say, the native one is currently much better. The wrapper was crashing my system all the time; as you say, it doesn't seem to ever clear RAM, resulting in lots of crashes. I had to restart Comfy every time I did a new gen. None of those issues with native.

    It may not be enough, but it's worth a try! A 720p gen with a 2x model upscale after is insanely better. The v2v doesn't seem as good as Hunyuan for some reason with my workflow, with lots of quality loss. It is a big jump though, so maybe I need to increase the denoise.

    makiaeveli · Mar 8, 2025

    Not like y'all need this, but I rarely see people link to the "default" workflow many seem to reference: https://comfyanonymous.github.io/ComfyUI_examples/wan/

    It just uses the built-in KSampler; I'm not sure there are any Wan-specific nodes at all in either the I2V or T2V workflows provided. Of course you need to update ComfyUI so it can read the CLIP properly, but that was it.

    I still haven't tried, but I'd also assume bolting on any video-upscale procedure would work fine, like many people have already built for Hunyuan. With my 64GB of RAM and 16GB of VRAM, I've always struggled to upscale. In Hunyuan I was waiting 30 mins for entire upscaled videos. I don't really wanna wait like 1 hour in Wan lol

    azeli · Mar 8, 2025

    @makiaevelio543 Yes, this is what I'm referring to; my workflow is just a massively extended version of this with a 2/3-pass latent upscale. I prefer 1min gens to check the movement etc. is correct before going for a bigger resolution.

    Also, if you have the VRAM, Comfy-Org just posted FP16 models; the quality is super, but I can't gen a large resolution on my 3090. Good job I have 2 x 5090s coming next week! Really wish the Comfy guys would work on true multi-GPU inference.

    edit: even if you don't have the VRAM for the FP16, there are new FP8_scaled weights, which are better than the e4m3fn ones

    makiaeveli · Mar 8, 2025

    @azeli Sounds like I should maybe reconsider which model I'm using then; maybe I'll get improvements in speed/accuracy. I am using the official quantized 480p_Q6_K for the most part. Maybe I'll try the one theAlly posted earlier.

    gambikules858 · Mar 8, 2025 · 1 reaction

    Don't use custom nodes. Use the official workflow; everything works fine and gives nice results.

    fronyax · Mar 8, 2025

    @dtwr434 can't use GGUF on that wrapper node, I use the native one instead :)

    ToxicBot · Mar 8, 2025 · 3 reactions
    CivitAI

    Works incredibly well for I2V. Without the lora, the penis in the input image would disappear or glitch out; with the lora, it can be on the other side of the image and with just the single-word prompt "blowjob" the subject will find it :) Really good, opens up a lot of doors. Now, a request please from a poor 12GB GPU: a cumming lora.

    xG00N3Rx · Mar 8, 2025 · 1 reaction
    CivitAI

    Seeing her feet wiggle around in the gen of the girl with the white beanie on is convincing me to try Wan over Hunyuan 😅

    shateivai455 · Mar 8, 2025
    CivitAI

    Work with img2video?

    makiaeveli · Mar 8, 2025

    comments say so

    BBBAAA2 · Mar 8, 2025

    absolutely!

    bla · Mar 8, 2025 · 4 reactions
    CivitAI

    You kidding? That's one hell of a showcase; it looks better than Hunyuan!

    makiaeveli · Mar 8, 2025

    In my tests so far, even without the proposed negative prompt, I haven't seen the "infinite penis tip" nearly as often as in Hunyuan.

    gambikules858 · Mar 8, 2025
    CivitAI

    Excellent lora. Works better than similar ones on Hunyuan.

    fronyax · Mar 8, 2025
    CivitAI

    man it's better than the hun lora wtf 😂

    gambikules858 · Mar 8, 2025
    CivitAI

    God, a perfectly trained lora. Works very well with I2V!

    I have generated 10 vids, 10/10 perfect.

    Generated at 544p, 49 frames, 20 steps. On my 3060 12GB: 5 min.

    Kyper921 · Mar 9, 2025

    mind sharing your workflow?

    gambikules858 · Mar 9, 2025

    @Kyper921 The official workflow; just connect the lora.

    iodrg244 · Mar 8, 2025
    CivitAI

    Whatever settings you're using to train these loras, they're great. In WAN i2v (480p) it maintains the original subject's face/features while using your lora's animations, even at 1.0 strength. Nice job!

    winifredslack61733 · Mar 8, 2025 · 3 reactions
    CivitAI

    Thank you. Any chance of a handjob lora?

    aipinups69 · Mar 8, 2025
    CivitAI

    Very good! Could I ask a bit about the training data? I'm looking to train some loras myself and so far have only used images:

    How many videos did you use?
    How long were they?
    What resolution were they?

    Thanks!

    dtwr434
    Author
    Mar 8, 2025 · 2 reactions

    Download the training data from my POV missionary lora; I used the same method and settings. I use diffusion-pipe on a 24GB card. https://civitai.com/models/1331682/wan-pov-missionary

    The video files themselves are 3 seconds long at 480p resolution, but I specify 244 as the resolution in the config file, and 32 frames. For this one, I used 25 video clips and 5 high-resolution images.
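
    From memory, the dataset side of the config looks roughly like this. A sketch only; double-check the key names against the example dataset.toml files bundled with diffusion-pipe:

        # dataset.toml (sketch; verify against diffusion-pipe's example configs)
        resolutions = [244]      # the training resolution mentioned above
        frame_buckets = [1, 32]  # bucket 1 catches the 5 stills, 32 the video clips

        [[directory]]
        path = '/path/to/dataset'  # placeholder; clips, images, and captions go here
        num_repeats = 1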

    aipinups69 · Mar 8, 2025

    @dtwr434 Thanks!

    rowmoz497 · Mar 8, 2025
    CivitAI

    Amazing lora. One of the best ever for videos. Pls let us know your settings and training tools?

    dtwr434
    Author
    Mar 8, 2025 · 2 reactions

    Download the training data from my POV missionary lora; I used the same method and settings. I use diffusion-pipe on a 24GB card. https://civitai.com/models/1331682/wan-pov-missionary

    js45819 · Mar 8, 2025
    CivitAI

    What are my options for using this lora if I have a 12 GB 3080TI?

    js45819 · Mar 8, 2025

    To clarify, I'm currently using the 1.3B model and SwarmUI says it's not compatible. Is there another way I can run the 14B model or another way to use the lora?

    dtwr434
    Author
    Mar 8, 2025

    I think it's possible to do this on 12 GB, but I have no experience with it, so hopefully someone else can give a more specific answer. You definitely need to use a 14B version of the model, but I'm guessing it involves using one of the GGUFs people have made. Also, don't try to use my included workflow as it won't work with GGUFs.

    js45819 · Mar 8, 2025

    @dtwr434 Thanks, good to know. So loras for the 14B base model should work with the 14B GGUF models?

    dtwr434
    Author
    Mar 8, 2025

    I've heard mixed things, but some people seem to get it working. I haven't tried myself.

    js45819 · Mar 8, 2025

    @dtwr434 Okay, I'll have to do some experimenting. Thanks for the information!

    hiben40387 · Mar 8, 2025

    @js45819 I am doing it with 12GB. I use the GGUF quants; Q4 or Q5 is good for 12GB with pretty good quality. Make sure not to have torch compile on if you are using it; I think LoRAs break with it on GGUF.

    js45819 · Mar 8, 2025

    @hiben40387 Yes, I was also just able to get it to work with a GGUF Q4 quant. I'm not sure what torch is, but I guess that means I'm not using it. I'm still learning a lot so I appreciate the information!

    Melty1989 · Mar 9, 2025

    @js45819 I have a 3080 (10GB) and this works just fine with the 480p i2v model. I’m not using the quantised version

    Artificer_ · Mar 9, 2025
    CivitAI

    This works amazingly. Thank you for creating and sharing this OP!! I'm going to continue testing, but from what you've seen is it best to start with an image of the penis out of the mouth or already in the mouth?

    mweldonsd594 · Mar 10, 2025 · 12 reactions
    CivitAI

    We've done it, people! This is it! No need to go any further. With this Lora, human civilization has finally reached the highest attainable goal.

    Seriously, though, this is really good.

    logenninefingers888 · Mar 18, 2025 · 2 reactions

    They promised us hoverboards, AGI and self-driving cars. We got this instead, so I think we did ok ;)

    Artificer_ · Mar 18, 2025 · 4 reactions

    100% in agreement

    Mekichan666 · May 11, 2025

    @logenninefingers888 yes, the world's at war, but we have this, so it's aight

    co773c710n5 · Mar 10, 2025
    CivitAI

    How did you train it? Could you link the repo please, or documentation, code scripts etc.? Would love to try it on my own, thanks!

    dtwr434
    Author
    Mar 10, 2025

    I use diffusion-pipe: https://github.com/tdrussell/diffusion-pipe

    If you download the training images from my pov missionary lora, it has my method and training scripts: https://civitai.com/models/1331682/wan-pov-missionary

    tacocat · Mar 10, 2025 · 3 reactions
    CivitAI

    Anyone get these loras to work with deepbeepmeep/pinokio WAN2.1? I can load them with the i2v 14B model, but the results are odd. (she usually bites the head off. lol)

    and I don't like Comfy, which is why I use the pinokio implementation of WAN2.1

    hishiryo · Mar 13, 2025

    I don't think this is for 14B.

    pivofin991709 · Mar 12, 2025 · 1 reaction
    CivitAI

    The model is amazing, but I am having problems with face modification in I2V.

    Would it be possible to have an I2V model trained with the faces somehow cut off or blurred?

    (the bouncing boobs guy did it and it works amazingly with faces)

    dtwr434
    Author
    Mar 12, 2025 · 1 reaction

    You can't really mask or blur faces with this because faces are so critical to the movement itself. Have you tried lowering the lora strength? I'm not sure how low you can go, but try to lower it as much as possible without ruining the motion.

    WhatTheGuy · Mar 13, 2025
    CivitAI

    OMG, I totally missed that there is a new WAN filter now. I wondered why nothing has been done with WAN in the last few days xD Looks really good! Too bad WAN is so slow compared to Hunyuan >_<

    WhatTheGuy · Mar 15, 2025
    CivitAI

    Somehow, when using your workflow, 16 fps seems to be the right fps instead of 24 fps (with the standard ComfyUI workflow), which should be the standard for the 14B model. But I can't find a specific setting for that in your workflow. Maybe teacache or Enhance A Video mess it up?

    dtwr434
    Author
    Mar 15, 2025

    I'm not sure what you're asking exactly, but 16 fps is correct, and you can see that in the Video Combine node at the end of the workflow. There are ways of increasing the FPS afterward, but I don't include that in my workflow because loading the model to do that after each video causes a memory leak and things end up dying on me.

    If you want to know the current workflow I'm using, it's included as training images in the cowgirl lora I recently uploaded. It includes sage attn and upscaling and things like that.

    My plan is to take the videos it creates and load a separate workflow to double the FPS afterward, but I haven't put that together yet.

    WhatTheGuy · Mar 15, 2025

    @dtwr434 Ah ok, I will clarify what I meant: Yes, I saw the Video Combine in your workflow. It is at 16 fps, and the video plays at normal speed; 24 fps in the Video Combine would look sped up. But when I'm using the ComfyUI workflow (also the same 14B model and your lora), 24 fps in the Video Combine looks like the correct speed and 16 fps looks slowed down. So the ComfyUI workflow produced a video that is meant to be played at 24 fps, and your workflow produces a video that is meant to be played at 16 fps.

    dtwr434
    Author
    Mar 15, 2025

    Hmm, Wan should always be at 16 fps, so that's pretty surprising. Can you link the comfy workflow you're talking about?

    WhatTheGuy · Mar 15, 2025

    @dtwr434 It's this one: https://comfyanonymous.github.io/ComfyUI_examples/wan/ . I just realised it is also set to 16 fps there. I copied my Video Combine from my Hunyuan workflows, so I didn't notice the change in fps. Hmm... I've only used WAN with your Cumshot lora. Can it be that there is a lot of slow-motion content, so the slowmo + my 24 fps just resulted in normal-speed videos again =D ? I converted the old cumshot videos to 16 fps and looked at them side by side; 24 fps really feels a bit too fast in comparison now with most of them. I guess I just got tricked by some slomo outputs that looked right at 24 fps and then didn't spot the actual too-fast ones... Ah, I also managed to get outputs that looked right at 16 fps with the ComfyUI workflow. So in the end it seems it was just a bit of confusion on my side ;) Maybe part of the problem is also that some people train their loras on 24 (or 30) fps videos, training slow motion into their lora, which then looks normal speed at 24 again.

    dtwr434
    Author
    Mar 15, 2025

    @WhatTheGuy Alright. Just so you know, I don't use the workflow I uploaded with this lora anymore. I'm using something based on this one: https://civitai.com/models/1295981/wan-video-t2v-upscaling-and-frame-interpolation. It includes a way of boosting the frame rate, so you're not stuck with 16fps, but it requires loading a whole separate model. I removed that part because loading the models over and over results in a memory leak on my system. My plan is to just generate a bunch of 16fps videos and then, as a separate workflow, batch them into the thing being used there to make them higher fps.
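
    (For anyone confused by the fps discussion, the arithmetic is simple. A quick sketch, using the 81-frame clip length mentioned earlier in the thread:)

        # Wan outputs 16 fps; frame interpolation raises fps without changing duration.
        frames, fps = 81, 16
        duration = (frames - 1) / fps  # 80 frame intervals / 16 fps = 5.0 s
        frames_2x = 2 * frames - 1  # one new frame per interval -> 161 frames
        duration_2x = (frames_2x - 1) / (2 * fps)  # 160 / 32 = 5.0 s, same length
        print(duration, duration_2x)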

    SD_AI_2025 · Mar 30, 2025

    Wan IS 16fps. Nothing else. Read the release papers of models before looking for something that does not exist. There are frame interpolation nodes in ComfyUI. Or use Topaz Video AI, etc.

    LORA
    Wan Video

    Details

    Downloads: 8,126
    Platform: CivitAI
    Platform Status: Deleted
    Created: 3/8/2025
    Updated: 4/21/2026
    Deleted: 4/13/2026

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.