Version 1.2 Update: Higher quality and an expanded dataset. It can now do cumshots where the woman keeps sucking and you can see the cum dripping out of her mouth. The I2V version can handle images where the penis is not in the starting image.
Note that all of the previews were created using the lightx2v (self-forcing) lora. You may get better movement without it, but I didn't do much testing that way. The workflows I used can be downloaded by getting the "Training Data".
T2V tips: It seems to maintain motion very well even at 0.7 strength, and you'll get more facial variety that way. I get the best results at 0.95 strength, for whatever reason. You can add a penis lora to help it out.
I2V tips: If the penis does not exist in the starting image, you probably want to add a penis lora to your workflow so it has a better shape. You can trigger it with something like:
A penis appears from the bottom of the frame centered at the bottom and pointed straight up.

If you want cum to appear:
White cum shoots out of the penis.

This was trained on 4 different angles; here is how to trigger them:
A woman is lying on her stomach between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.

An overhead view of a woman kneeling between the legs of the viewer and performing oral sex on a man. She moves her head back and forth as she sucks the penis.

A woman is leaning over a man positioned in between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.

A woman is kneeling in between the legs of the viewer and performing oral sex on a man. Her head moves up and down as she sucks the penis.

Description
Higher quality training data and expanded dataset
Comments (49)
these look wan-derful
the sequel we've all been waiting for
When I use it, it changes the look of the penis and the woman. It seems the strength can't be set too high, and it can't do deepthroat.
Same here. Is there any fix for this?
2410747553660 For now I just lower the weight and add other loras. There's no other way; I hope the author can make more fixes.
It's hard to get the detail of the woman's cheeks denting in when she sucks. Am I forgetting something?
Hmm, not sure. It always just kind of happened automatically for me. Did you try my example prompts? Are you using T2V or I2V?
Is there a specific command/trigger for getting the cum-while-sucking effect? Thanks, this looks great.
I think any mention of "white cum" will do it. In hindsight, I maybe shouldn't have mentioned the color in the captioning, because it doesn't seem to work very well if you just say cum by itself.
IMPORTANT: Inform all WAN users! I discovered how to use NSFW/SFW loras in WAN 2.2 T2V, getting the full potential of both the high-noise and low-noise models. The results were impressive!! I ran it on an RTX 3060 6GB with 32GB RAM, at 480x480, length 41, generation time 5 minutes per video. The quality is fantastic! For those of you with more powerful computers, the results will be even more incredible. I used the models: WAN 2.2 - Q4_1.gguf

Follow the steps below:

1: Don't use acceleration/enhancement loras (such as Lightx2v or others) on high noise; use them only on low noise with strength 1. (I'm using this version of the Lightx2v lora: lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank64_bf16.)

2: Use NSFW/SFW loras on both the high-noise and low-noise models. On high noise, use strength 2, and on low noise use half of that, strength 1. You can vary the strengths as you like, but always keep the low-noise strength at half the high-noise strength.

3: Use these settings in the KSamplers:
high noise: steps: 9 / cfg: 3.5 / euler/simple / start_at_step: 0 / end_at_step: 1
low noise: steps: 9 / cfg: 1.0 / euler/simple / start_at_step: 1 / end_at_step: 10000

Note: To improve speed, I set Windows to "best performance," disable unnecessary background programs and applications, and lower the desktop resolution. This significantly speeds up video generation. I hope this helps.
Doing more tests here, I realized the results are even better using 3 or more steps on high noise.
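The relationships described in the steps above can be sketched as plain data; this is only an illustration of the commenter's rule of thumb (the dict names and `lora_strengths` helper are mine, not a real ComfyUI API):

```python
# Sketch of the commenter's settings: low-noise lora strength is always
# half the high-noise strength, and the KSampler step range is split so
# high noise handles only the first step.

def lora_strengths(high_noise_strength: float) -> dict:
    """Rule of thumb from the comment above: low noise = high noise / 2."""
    return {
        "high_noise": high_noise_strength,
        "low_noise": high_noise_strength / 2,
    }

# KSampler (Advanced) settings as described: high noise runs step 0 -> 1,
# low noise takes over from step 1 onward.
HIGH_NOISE_SAMPLER = {"steps": 9, "cfg": 3.5, "sampler": "euler", "scheduler": "simple",
                      "start_at_step": 0, "end_at_step": 1}
LOW_NOISE_SAMPLER = {"steps": 9, "cfg": 1.0, "sampler": "euler", "scheduler": "simple",
                     "start_at_step": 1, "end_at_step": 10000}

print(lora_strengths(2.0))  # {'high_noise': 2.0, 'low_noise': 1.0}
```

Per the follow-up comment, raising the high-noise `end_at_step` to 3 or more may give better results.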
this should theoretically work on i2v yes?
🥺 Please reply or I'll forget to post my report 🥺😔. I'll test it for you: 16GB VRAM / 32GB RAM.
aiv_creator Yeah man, these settings really are great. I'm going to try 5 high and 8 low and see how that is.
Jesus__Skywalker
Here's my report.
16GB VRAM 4060 Ti OC, 32GB RAM, Ryzen 5 @ 4.4 GHz
480p 2.5-4 minutes 81 frames 16fps (5 second video)
480p 7-8 minutes 129 frames 24fps (5 second video)
480p 10-12 minutes 193 frames 24fps (8 seconds video)
720p 15 minutes 81 frames 16fps (5 second video)
This holds across all the models I tried.
So far I got 2.2 AIO (not lightspeed) to work, both T2V and I2V. (I'm not even that interested in T2V, because I like to make videos from my own image creations and I can't control what comes out of T2V.)
Once I found that to work, I tried 2.1, and everything worked: 480, 720, the AIOs, lightspeed. Everything besides VACE, and I don't like VACE anyway. The reason I bring this up is that ever since video AI came out with CogVideoX, I tried CogVideoX, Hunyuan, and LTX, and nothing worked properly. I struggled installing sageattn, I struggled with triton, and none of the installations people said worked actually did. CogVideoX worked for exactly one video before the workflow broke, and that video took almost 1 hour and 25 minutes on my 3060 when it came out last year. In comparison, waiting 15 minutes for a 720p video feels like seconds.
Your method really did work, with native Comfy nodes even. I hate custom nodes, they always cause problems. I hate ComfyUI in general, but I'm just happy it worked.
thanks be to jesus christ for salvation
thanks be to jesus_skywalker for ball knowledge
Probably the video model I use the most, Super effective and easy.
Please, someone help me. It's my first time using img2vid. I'm using a 480p model, but when it gets to the KSampler node it crashes and tells me the GPU ran out of memory. I have an NVIDIA GeForce RTX 4050 with 16GB RAM and 5.9GB VRAM. Does anyone know what parameters I should change?
Me too, first time. I'm using the creator's workflow and I don't know what's wrong 🥲
I've got a 3090 with 24GB of VRAM, which probably explains why it works for me. I don't know everything that's needed to run with less VRAM, but it will probably involve using a GGUF instead of the fp8_scaled model I was using, and possibly decreasing the resolution. I'd look for workflows on civitai that advertise working with low VRAM and try one of them.
I don't know if it's the same issue you're having, but make sure you didn't accidentally set the batch (in the KSampler node) to a high number. Also, don't try to generate with a width over 832px and a height over 480px (you might be able to go higher, but that's the safe zone for 480p I2V). And if you're starting with a large-resolution image, you may need an image-resize node before the KSampler to shrink it down to the width/height you have set in your KSampler.
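The "safe zone" advice above amounts to scaling the input image down so neither side exceeds 832x480. A minimal sketch of that arithmetic, assuming dimensions should be rounded down to multiples of 16 (a common latent-size constraint; the 16 and the `fit_resolution` helper are my assumptions, not from the comment):

```python
# Shrink (w, h) to fit within the 832x480 safe zone for 480p I2V,
# preserving aspect ratio and never upscaling.

def fit_resolution(w: int, h: int, max_w: int = 832, max_h: int = 480) -> tuple:
    scale = min(max_w / w, max_h / h, 1.0)   # 1.0 cap: never enlarge
    new_w = int(w * scale) // 16 * 16        # round down to a multiple of 16
    new_h = int(h * scale) // 16 * 16
    return new_w, new_h

print(fit_resolution(1920, 1080))  # -> (832, 464)
print(fit_resolution(640, 360))   # -> (640, 352)
```

An image-resize node in the workflow would then be set to the returned width/height before the KSampler.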
RustRevolutionUS The batch is 1. I tried resizing the image and tried Wan 2.2, but nothing worked.
Can you try launching comfy with the --cache-none flag?
Change the resolution.
Maybe I'm wrong, but I'd guess it's the size of the model: you can't load a model bigger than your VRAM on your rig.
Looks like the latest ComfyUI update broke the interpolation in your workflow? Not sure.
"RIFE VFI VFI model RIFE requires at least 2 frames to work with, only found 1. Please check the frame input using PreviewImage."
same, no clue how to fix it
same here
@Rocks97 Please tell me, I can't fix it.
Can you share that workflow, please?
Somehow this lora doesn't work very well with Wan 2.2 + the Lightx2v lora...
The woman does suck, but she only takes in the glans and does not go any deeper no matter the prompt...
Are you using the lightx2v on the high-noise steps? Wan 2.2 really works so much better without it for high noise.
@dtwr434 Yes I do have both high and low noise steps. Interesting, I'll try that
Sadly it seems to completely freeze at the KSampler at 0/0, with no it/s shown. I tried twice and it sat there for an hour without budging. I turned it off and everything went back to normal, so I'm not sure what's up.
Very clean and high quality movement. The only problem I stumbled upon is the tip of the penis disappearing after she pulls it out. Any plans on making another version?
Yeah, that's kind of why I suggest adding in a separate penis lora (doesn't have to be at full weight) to help with that kind of thing. Not sure if I'll do updates at this time. I'd make a Wan 2.2 version, but it's just so slow to generate and the speed loras ruin it completely, so it's hard to justify even using it.
Awesome. I'd been trying to get a difficult blowjob scene working, and this is the only lora that worked. It works for all kinds of BJs, not just POV. I used 14B 750.
Could anyone give me a lead on how to make those "slow motion" videos?
Not sure exactly what you mean, but depending on the version of lightning lora you're using, it slows motion down a lot. Most people see that as a bad thing, but maybe that's what you want?
@dtwr434 I do want to make slow-motion-style videos. Any tips on where I should look? I'm new; I've spent a week learning and generated about 100GB of video material to test, but I've never run into the slow-motion issue :D If anything I get videos that are too fast and that I want to slow down. I'm on Wan 2.2 I2V.
@poisas69220 I don't know of any proper way to get slow motion, but most likely for the preview videos I was using a lightning lora, which is supposed to make things generate much faster but also had the unintended side effect of slowing everything down. You can probably find the appropriate lora somewhere here: https://huggingface.co/lightx2v. The older it is, the more it probably slows motion down, as I think they've been trying to fix that over time.
@dtwr434 The clean model safetensors does slow motion via the prompt ("slow motion"). If you need clean, good results, try the base model with a single lora applied instead of the remixed versions with tons of merged loras; the lightning distill lora version works best.
@ValuedRender Could you also share some info: is civitai all about videos and images, or is there sound too?
@poisas69220 Sorry, what exact information are you looking for about video, images, or sound?
One more hint for making slow motion: when you have a 16 fps video, interpolate it (with RIFE, I guess) to 64 fps, then combine the video at 32 fps, so it will be slow with 2x the duration.
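The arithmetic behind that hint can be sketched in a few lines; the `slowdown_factor` helper is purely illustrative (the comment itself just describes doing this with RIFE and a video-combine node):

```python
# Interpolating a 16 fps clip 4x yields frames equivalent to 64 fps;
# encoding those frames at 32 fps plays them back twice as slowly,
# doubling the clip's duration.

def slowdown_factor(src_fps: int, interp_mult: int, out_fps: int) -> float:
    interp_fps = src_fps * interp_mult  # effective frame rate after interpolation
    return interp_fps / out_fps         # > 1.0 means slower-than-real-time playback

print(slowdown_factor(16, 4, 32))  # -> 2.0 (2x duration, 2x slower)
```

The same formula shows why encoding the interpolated frames at the full 64 fps would give smooth normal-speed video instead (factor 1.0).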
@ValuedRender My question is: does civitai.com have stuff (models, loras, workflows, etc.) that isn't images and videos? For example, audio creation, music creation, text-to-audio, and so on.
@poisas69220 I understand what you mean. CivitAI offers a paid service for generating images and videos through its website. The models and loras can be base open-source models developed by different teams (if you're looking for the sources of the models, they're all on the huggingface site), and users also upload their own models, which you can use for generation on the site. It also offers an "easy" way to train a lora for videos or images. As for text and audio generation, the website doesn't offer those features directly. However, in the articles section you can find a lot of information about how users generate audio or text using services or programs like ComfyUI. Workflows are made by users, in most cases for the ComfyUI software, and ComfyUI provides workflow templates for everything. So for text it's mostly GPT, and for audio it's the huggingface site: huggingface provides a playground where you can try 80% of the stuff. It's limited by a daily quota, and you can buy more GPU time for playing with things like audio (cloning, replacing voices, lip sync, etc.).
@ValuedRender Thanks for explaining. I'm running models on my own PC, a 5070 Ti with 16GB. This is my second week on ComfyUI, and there are still many things to learn. I found this website very helpful for learning basic t2i and i2v generation. But a recent ComfyUI update messed up the node manager: it won't install nodes, basically all the workflows shared here are no longer compatible, and those nodes can't be installed because of compatibility issues, so I need to downgrade versions of something I haven't learned about yet :D
@poisas69220 Yeah, could be. The better option is to use the latest version of ComfyUI; even if some old workflow is broken, you can replace the "old" nodes with their current analogs. Good luck.
@dtwr434 It's been a couple of weeks since I got into ComfyUI, and I'm learning many things every day. Yes, the Wan 2.2 base model + the 4-step loras indeed give the slow-motion effect I was looking for. I also used RIFE interpolation to generate extra frames and rebuilt the video at a low fps for an even slower speed. :) Thanks for the help, it guided me a lot. Now I'm into more advanced workflows. I've mastered the ReActor node: face-swap videos, face-swap images, upscaling, deblurring images. Now I'm playing a bit with Qwen image edit and a few models, and I'm really impressed by the output quality. Good luck and happy new year :D In a couple of days I lost 800GB of space :D but still 500GB to go :D
"RIFE VFI VFI model RIFE requires at least 2 frames to work with, only found 1. Please check the frame input using PreviewImage." How do I fix this?