Hello everyone! I have borrowed this workflow from another user. The original workflow could not be found, but at the request of some users I am posting it on my page. I have used GGUF models and added some lore for each stage of the workflow. It works great on my 3060 Ti (8GB VRAM) with 32GB RAM, and takes approximately 30 minutes to generate a 17-second clip. Name of the original workflow: "The Complete Blowjob Story Workflow From Undressing to Facial"
NSFW-22-H-e8, NSFW-22-L-e8 <-- where can I download these?
@Zavrik Thx.
The connection between video 2 and video 3 feels unnatural, and video 3 is repeated twice, but the connection between repetition 1 and repetition 2 is also unnatural. It feels like the frame breaks and connects three times in total. Is there a way to fix this?
@aimodelfree I also noticed this effect. It happens because when a clip ends, an image of the last frame is generated and used to create the next video. As for the third clip repeating twice, that is intentional, as I understand it, to extend the length of the scene.
I'm currently figuring out how to fix the smoothness between clips. One thing we can try is to use the same base prompt everywhere and add details and actions in each subsequent clip.
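The idea above — one shared base prompt with per-clip actions appended — can be sketched in a few lines of Python. The prompt strings here are illustrative placeholders, not taken from the workflow:

```python
# One shared base description keeps the subject and scene stable;
# each clip only adds its own action on top of it.
BASE_PROMPT = "a woman in a dimly lit bedroom, photorealistic, consistent face"

STAGE_ACTIONS = [
    "she smiles and looks at the camera",
    "she slowly undresses",
    "she kneels in front of the man",
]

def build_prompts(base, actions):
    """Return the full positive prompt for each clip in order."""
    return [f"{base}, {action}" for action in actions]

for i, prompt in enumerate(build_prompts(BASE_PROMPT, STAGE_ACTIONS), 1):
    print(f"clip {i}: {prompt}")
```

You would then paste each generated string into the positive-prompt node of the corresponding stage.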
When I try to use this workflow the scheduler on the KSamplers turns red. Any solutions?
can you send a screenshot?
@stempskibartosz340 A version conflict or node incompatibility has occurred in your workflow (most likely in ComfyUI). One node (probably a custom or updated one) offers a new scheduler value called 'beta57', while the other, older KSamplerAdvanced nodes are unaware of its existence and expect only the old set of values. (To diagnose errors I use DeepSeek.)
Hey, I had the same issue and found the solution: disconnect all links coming from the scheduler (in the "Sampler Scheduler Settings" node on the left part of the workflow); those are connected to the KSampler nodes. Right-click on the green dot near "simple" and click "disconnect links". Make sure "simple" is chosen in all KSampler nodes. That should make it work.
@arseur Thanks, I saw the same issue today and the workaround worked.
Hi, I'm a beginner who only knows the basics, but this works great on my RTX 3060 12GB with 16GB RAM. It takes around 27 minutes for a 20-second video (it sometimes crashes because of low RAM, too). Sometimes it changes the face a little, maybe because of the low-VRAM models. It would be top notch with no face alteration and top-quality output, but I guess this is what I get on this GPU.
mat1 and mat2 shapes cannot be multiplied (154x768 and 4096x5120). any solutions?
I switched Load CLIP to nsfw_wan_umt5-xxl_fp8_scaled.safetensors and it resolved that issue for me.
@duncanhines428 you're the best
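The "mat1 and mat2" error above is a plain matrix-shape mismatch: the text encoder is emitting 768-dimensional embeddings while the Wan model's first linear layer expects 4096-dimensional input (which the suggested UMT5-XXL CLIP provides). A minimal sketch of the compatibility rule, using the exact numbers from the error message:

```python
def matmul_ok(a_shape, b_shape):
    """Matrix multiplication (a @ b) requires a's column count
    to equal b's row count."""
    return a_shape[1] == b_shape[0]

# The failing case from the error: 154 tokens of 768-dim embeddings
# fed into a weight matrix expecting 4096-dim input.
print(matmul_ok((154, 768), (4096, 5120)))   # False -> shape error
# With a text encoder that outputs 4096-dim embeddings:
print(matmul_ok((154, 4096), (4096, 5120)))  # True
```

So whenever you see this error, the first suspect is a wrong CLIP/text-encoder model, not the diffusion model itself.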
How do I make the face stay consistent? I see there's a Seed Everywhere node and it's connected; all settings are default. Do I manually put a number into the Seed Everywhere node, the KSamplers, and the nodes that Seed Everywhere is connected to, and make them all the same number?
did that help?
The seed should be the same number in all KSamplers. You can also use a Wan character LoRA for your subject (you will have to find one, or train it yourself); this helps keep the face consistent throughout the generations.
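In the UI the Seed Everywhere node handles this, but if you run the workflow as API-format JSON, the same-seed rule can be applied with a small script. This is a sketch under the assumption of ComfyUI's API workflow format (a dict of node ids, each with "class_type" and "inputs"); KSampler variants name the field either "seed" or "noise_seed":

```python
def sync_seeds(workflow, seed):
    """Set one seed on every KSampler-type node in an API-format
    ComfyUI workflow dict. Handles both 'seed' and 'noise_seed'."""
    for node in workflow.values():
        if node.get("class_type", "").startswith("KSampler"):
            for key in ("seed", "noise_seed"):
                if key in node.get("inputs", {}):
                    node["inputs"][key] = seed
    return workflow

# Toy example with two samplers and one unrelated node:
wf = {
    "3": {"class_type": "KSamplerAdvanced", "inputs": {"noise_seed": 0}},
    "4": {"class_type": "KSampler", "inputs": {"seed": 111}},
    "5": {"class_type": "LoadImage", "inputs": {"image": "face.png"}},
}
sync_seeds(wf, 123456)
```

Save the patched dict back to JSON before queueing it.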
Hello, I get the error:
KSamplerAdvanced
mat1 and mat2 shapes cannot be multiplied (154x768 and 4096x5120)
It seems like a mismatch of models, but I use exactly the ones indicated in your workflow
I used Wan2.2-I2V-A14B-HighNoise-Q3_K_M.gguf as the high-noise model, because the link https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF/tree/main/HighNoise is missing the one written in your workflow.
I switched Load CLIP to nsfw_wan_umt5-xxl_fp8_scaled.safetensors and it resolved that issue for me.
@warriors666 Hi, i'm new to this Site and trying this one out, but i couldn't find the CLIP you mentioned. Can you provide me the link for the CLIP?
@warriors666 Nevermind kind sir. I found the link.
https://huggingface.co/NSFW-API/NSFW-Wan-UMT5-XXL/blob/main/nsfw_wan_umt5-xxl_fp8_scaled.safetensors
This is for other users. Have fun!
If you want the face to stay consistent, click on the two Lightx LoRAs on the left and press Ctrl-B to disable them. For whatever reason that LoRA occasionally likes to mess with the clip generation.
There are four preview areas in the workflow. The video in the first preview area is the shortest and the action is the simplest. I spent a long time running through the four preview areas, but the video I got in the end was from the first preview area, which is short and has less content. I couldn't get the video from the fourth preview area, so I can only watch it. How can I choose to get the video from the fourth preview area?
The AV_FaceDetailer node is missing and is not available in ComfyUI Manager. How can we install it?
Fantastic workflow, although is there a way to only start the generations from an individual step in the process? Don’t want to have to start the whole clip series from the beginning if there’s a particular segment I’m not happy with and want to try to regenerate. Any insight would be appreciated.
I don't know how to do it from this particular workflow, but I split it into 4 discrete workflows that can generate the clip from the input image
workflow 1 (the undressing)
workflow 2 (the man)
workflow 3 (the blowjob)
https://pastebin.com/cMKpcNSU (note i removed the default positive prompt b/c of flagging)
workflow 4 (the climax)
https://pastebin.com/1wgCEu25 (again removed positive prompt)
you can stitch them together using free video software
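For the stitching step, ffmpeg's concat demuxer also works from the command line. A small Python sketch that prepares the list file and the command (the clip filenames are placeholders for your four outputs):

```python
import subprocess

clips = ["clip1.mp4", "clip2.mp4", "clip3.mp4", "clip4.mp4"]  # your four outputs

# The concat demuxer reads a list file with one "file '...'" line per clip.
with open("list.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

# -c copy joins without re-encoding; it only works when all clips
# share the same codec, resolution, and framerate (true here, since
# the four workflows use the same video settings).
cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
       "-i", "list.txt", "-c", "copy", "full.mp4"]
# subprocess.run(cmd, check=True)  # uncomment to run (needs ffmpeg on PATH)
```

If the clips differ in any encoding parameter, drop `-c copy` and let ffmpeg re-encode.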
anyone know why this would come out very blurry?
Make sure all loras are updated and enabled. I got blurry output when bypassing the two lora models on the left.
Thanks for sharing!
It's not working for me. It just generates one clip and then stops.
This worked very well for me, only it took far longer than expected.
It took 1h 50min for 14secs of video.
Working 1 at a time, I can usually generate a 6 sec clip in about 8min
Any idea how I can get a longer vid in less time with this WF, or not possible?
Since you did this recently. You didn't run into the "mat1 and mat2 shapes cannot be multiplied" error?
On my 5060 it takes about 15 minutes, on the 3060 it took about 20-25 minutes.
I get this error between the two KSamplers in the third section: "CUDA error: invalid argument. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions."
any advice?
never mind...I fixed it.....
Amazing workflow. It took some setting up, but once set up it works great. Is there a simple way to adjust the length of each section/clip? I wouldn't mind going for a 30-second total clip, as the current one only takes about 10 minutes or less on an RTX 5090.
I can't seem to fix this error no matter what I do
linear(): input and weight.T shapes cannot be multiplied (154x768 and 4096x5120)
Any ideas?
THANK YOU. You're the best. This process is the best I've ever encountered. It's the first one that works well on a 3090 with 24GB VRAM and 48GB RAM.
I'm trying out this great workflow, but it says I need the following nodes in ComfyUI, and I don't know which ones they are:
・Text Box
・Integer to String
I've searched but can't figure out which nodes are correct, and I'm stuck.
Couldn't find either; I replaced them with "String" and "Convert Any" from the comfyui-easy-use node pack.
Same issue here, any updates?
@gamexminer1667 thank you
Complete beginner was able to set it up and work through errors in a few hours! 3070 and 32GB RAM took 25 mins to generate 16s clip
Just when I thought this couldn't get any more fun, I discovered your workflow. Making five-second videos that are pretty good is fun; making 15-second videos turns out to be three times as much fun. Thank you for making this.
RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 4096x5120) - edit: it's a complicated mess, but in the end it works so well.
The guy who made this WF is a god.
THX @duncanhines428 --->
I switched Load CLIP to nsfw_wan_umt5-xxl_fp8_scaled.safetensors and it resolved that issue for me.
Thx dude! You saved my day!
Thank you !!!
Hi! I have a problem running the workflow: on the "Comfyui_LG_Tools" boxes (🎈LG_图像选择器, the LG image selector), the mode presets don't correspond to what is available. The mode is set to "-2", which is not recognized (the boxes turn red when running the workflow). Always_pause, keep_last_selection and passthrough are the available modes. Could you help me please?
Did you solve it?
@202240230 nope, i'm a beginner on comfyui workflow. Gemini helped me to understand the purpose of the preset modes, I've done a few tests with each modes but the workflow seems to run over and over with no evolution at this step
@invinciblescud450 Alright, thanks for your response.
Replace every one of the LG image selector nodes with the KJNodes "Get Image From Batch" node. In the text field, put -1 and the workflow will run smoothly from section 1 to 4.
@202240230 see my comment, friend
@Ttehk OK,I will try
Thx, it works great. The only problem I have is that after the first clip ends it always changes the face from the generated input image. Can this be solved?
Try making a LoRA of the specific person you're generating; that could significantly mitigate the problem.
Need help: when I press Run it says there's an error at node 61.
the best method I found is to copy your comfyui log in a text file (make sure it contains the error), upload it to google Gemini, and upload your workflow as well to Gemini, and ask it to help you fix the issue step by step.. good luck
To anyone else having issues with the LG node: replace every one of the LG image selector nodes with the KJNodes "Get Image From Batch" node. In the text field, put -1 and the workflow will run smoothly from section 1 to 4.
thank you!
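If you would rather patch the exported workflow file than click through the graph, a small audit script can at least list which nodes need swapping. This is a sketch that assumes ComfyUI's UI-export JSON format (a top-level "nodes" list whose entries carry "id" and "type" fields); the selector's exact type string may differ in your file:

```python
# Audit sketch: list the ids of every LG image-selector node in an
# exported (UI-format) ComfyUI workflow, so you know which ones to
# replace with KJNodes' "Get Image From Batch".
def find_nodes(workflow, needle):
    """Return ids of nodes whose type contains `needle` (case-insensitive)."""
    return [n["id"] for n in workflow.get("nodes", [])
            if needle.lower() in n.get("type", "").lower()]

# Usage with your own export (uncomment):
# import json
# with open("workflow.json") as f:
#     wf = json.load(f)
# print(find_nodes(wf, "图像选择器"))  # or whatever the type string is
```

Replacing the nodes themselves is still easiest in the ComfyUI editor, since the links have to be rewired by hand anyway.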
FOR LOW VRAM 8-2:
Use Wan 2.2 Remix, and for the LoRAs use Dr3aml4y, oral insertion and fac3cream. Excellent results.
Anyone run into the scheduler issue?
KSamplerAdvanced 485:
Return type mismatch between linked nodes: scheduler,
received_type(['simple', 'sgm_uniform', 'karras', 'exponential', 'ddim_uniform', 'beta', 'normal', 'linear_quadratic', 'kl_optimal', 'bong_tangent'])
mismatch input_type(['simple', 'sgm_uniform', 'karras', 'exponential', 'ddim_uniform', 'beta', 'normal', 'linear_quadratic', 'kl_optimal', 'bong_tangent', 'beta57'])
Stupid question like this but I assume for me to make it a different video, I just need to change the prompts ? Also, how simple is it to just have the point of start coming from an image ?
All I ever get is a static image