UPDATE: v1.1
v1.1 has significantly better thrusting motion from the man; overall I think it is much better than v1.0.
All the showcase videos were generated image2video with 10 steps with the prompt:
"Pl0wView. Video of a woman being fucked doggy style by a man from behind. Her face expressing intense pleasure and moaning. The man aggressively thrusts deep and hard into her rhythmically causing the woman's ass and breasts to bounce and jiggle in rhythm with his deep and hard thrusts."
v1.1 was trained on the 480p i2v 14B model with diffusion-pipe on 22 videos for 30 hours on two 4090s.
v1.0 - WORK IN PROGRESS
This model is for doggy style action with the woman facing the camera. In my testing I found the faces generally turn out pretty good, however control of the man's movement needs work. I will be working on improving the data and coming out with v1.1 sometime. This initial training run took 22 hours on two 4090s.
Trigger word is: Pl0wView
Try it out and let me know what you think. Feedback on what works well and what sucks is appreciated. I generated about 50 videos in testing but it takes a long time so I haven't tried out everything.
Description
Initial training run. WIP
Comments (36)
can you share which guide you followed to get lora training working?
I just read through the diffusion-pipe documentation, and someone had posted the dataset .toml files for their Wan LoRA on Civit. (Unfortunately I don't remember who or which model they were posted on.) I then used Claude to ask questions about things I didn't understand and to make sure my dataset files were following the documentation and examples correctly.
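For reference, a diffusion-pipe video dataset .toml looks roughly like the sketch below. The path, repeat count, and bucket values are illustrative only, and exact key names may differ between diffusion-pipe versions, so treat this as a starting point and check the project's own docs:

```toml
# dataset.toml (illustrative values)
resolutions = [480]          # train at the 480p bucket
frame_buckets = [1, 33]      # frame counts to bucket clips into

[[directory]]
path = '/data/my_videos'     # folder of clips + matching .txt captions
num_repeats = 1
```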
Would you consider using the base model name in the name of the LoRA? Please?
Sure, I can in the future, but I'm curious: what purpose does that serve?
@lazerblazer sorting, although most probably use subfolders at this point
@makiaevelio543 Ah gotcha. I use subfolders. You can always just rename the file once you download it to fit whatever sorting structure you want.
@lazerblazer - short answer, it helps with organization and it takes zero time to do it.
Long answer:
I've been collecting and training models since SD1.4. I have terabytes of models. Until very recently it was very easy to download models and let Forge or Swarm sort them. Forge with civitAI helper works very well to keep bases and LoRAs matched up. Swarm's metadata function for scanning civit is subpar, but that isn't the reason.
The reasons are manifold for me, but when I talk to other users and creators, most express similar discontent.
I have VERY slow internet. I open the Wan LoRA page after 2 days of not looking, and I see 4 LoRAs I want, so I open each in a tab. I then check HY and Flux... and I open tabs for them. Each download takes some time. I do not sit and watch my downloads happen. It takes a long time. So I just hit download on the loras I want.
I don't sort my downloads folder immediately just because I happened to download some loras, and the timing will never be perfect, because if I have to download 3gb of loras I will inevitably be busy when they are finished. I may not sort my DLs for a few days under some circumstances. It is not workable or feasible for me to download only HY loras right now, wait for them to finish, and then sort, rename, or move them to the appropriate folders. Even if I try to do that, it still means that all my models, wherever they are, are unidentifiable. (And if I don't download a model immediately, it may not be there when I check back next time.)
My DL folder now has flux, HY, and Wan LoRAs in it. The sizes all vary. Some folks are still using high dim/rank for flux and produce 300mb LoRAs. Some folks using HY have wised up and produce 58mb LoRAs. Wan LoRAs are different sizes apparently as well. If the name of the base is not in the file name, I have to spend a bunch of time searching civit for model names just to sort them into folders. It makes searching useless too. I have SDXL, Flux, HY, and Wan LoRAs of the same things already. If I search my drive for "emma safetensors" I get all 4 model types in my results.
I have always (tried) to label all the models I create with the base model names. SDXL, Flux, HY, and WAN are all very short words. I have LoRAs of the same subject for all three of the former, and I'm training my first Wan LoRA right now, named "brooke-shields-wan", while I already have multiple flux and SDXL LoRAs of her, as well as my own HY. There is absolutely no reason to upload a file to Civit named "lora_diffusers_epoch80.safetensors" or "N4t4l13-r2-stighooo6.safetensors". Whereas "emma-watson-flux_000085.safetensors" is great and useful and takes no more time or effort than letting the name be useless and incomprehensible. Simple as.
I've failed to write a script that can access the metadata of all the LoRAs and label or sort them. I have not found any way to manage this problem. If you have a solution I'm all ears.
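For what it's worth, here is a rough Python sketch of that script. The file layout is part of the safetensors spec (an 8-byte little-endian header length followed by a JSON header, with training info under "__metadata__"), but the metadata key names are trainer-specific (kohya-style tools write keys like "ss_base_model_version"; others write nothing at all), so the matching below is a heuristic, not a guarantee:

```python
import json
import struct

def read_safetensors_metadata(path):
    """Read the JSON header at the start of a .safetensors file.

    Layout per the safetensors format: 8-byte little-endian header
    length, then that many bytes of JSON. Training metadata, if the
    trainer wrote any, lives under the "__metadata__" key.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

def guess_base(meta):
    """Heuristically guess the base model from trainer metadata.

    Key names vary by trainer, so rather than trusting any one key,
    scan every key/value pair for a known base-model substring.
    """
    blob = " ".join(f"{k}={v}" for k, v in meta.items()).lower()
    for base in ("wan", "hunyuan", "flux", "sdxl", "sd15"):
        if base in blob:
            return base
    return "unknown"
```

From there it's one more loop over the download folder to shutil.move each file into a per-base subfolder. LoRAs whose trainer wrote no metadata will come back "unknown", which matches my experience that this can't be made fully reliable.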
Trying to use Forge or Swarm to sort them doesn't work. I have about 10k flux loras alone, and by now with training 24/7 on 2 rigs I have like 1500 HY LoRAs. As it is now, my swarm LoRA folder has Flux, Wan, and HY LoRAs in it, and despite my folders for organization, it is extremely difficult not to make errors when moving models.
It just makes sense to toss "wan" or "flux" into the name. It's eminently logical, takes no time, and hurts nothing.
Thanks for coming to my Ted talk.
@leisure_suit_larry In the most respectful manner, you know you can right-click on the download link, hit Save As, and add "wan" or whatever you wish to the filename before you download it? You don't have to wait for it to download to name/sort it however you want. I can't edit filenames for what is already uploaded, but I will include wan in future files.
@lazerblazer - why would I do that?
I don't want to rename any of my downloaded files at all, ever. Keeping the OG files intact is the only way to be sure of what they are in the future. That's not something I'm going to change because other people won't name their uploads logically. I'm also not going to select a specific download location for every file I download, because that's just asinine and tedious.
I'm going to continue to name all of my models with the base model and a useful name that identifies the file. It just makes sense. It has always made sense. It has never not made sense.
I download upwards of 50 models a day sometimes. Renaming them or downloading the different base model loras into different folders is highly inconvenient and requires lots of unnecessary direct interaction with every single download. Asking creators to name their files in a logical fashion isn't asking too much.
I did use the word "consider", which just means think about it, and then I used "please".
You don't have to do anything for me, I was just pointing out something that makes no sense to me.
If you're going to ask me "Why would I do that?"
I'm asking you, "Why WOULDN'T you do that?"
@leisure_suit_larry Edit: I am probably just misunderstanding you. Wan will be on the next filename!
@lazerblazer - I did not mean to cause any tension or anything... I'm just a verbose guy. Why use few word when many word do trick?
I train a lot, and if I didn't name my LoRAs for the base I'd be lost. My HY LoRAs are all celeb-name-or-concept-hunyuan.safetensors.
You are just one creator, and it isn't fair for me to single you out. There's no way anyone could communicate with all the creators and get them on the same page.
My comment was unnecessary and actually kind of snide and rude.
I apologize.
@leisure_suit_larry I went through the same process with a creator who would create many versions of the same lora but use a nondescript name. On the one hand, the creator should want their content nicely named so people can reference it in the future, but on the other hand, I didn't make it.
@makiaevelio543 - yeah I'm not here to teach logic, I was really just bitching in a snide way and should have abstained from commenting.
It's just... I train constantly, and I have to input a name for the files every time I train something.
Why not use a helpful name at that point? Just type in something logical.
"captain_fuckwad_wan_v2" is just not hard to type, so I don't get it at all.
Maybe I'm just a kooky nutjob on the internet who knows nothing about anything...
@leisure_suit_larry Hey, when or if civitai does go down in flames along with its NSFW content, we need to look you up! Or form a secret group! lol I only have a terabyte of models. Like you, I've been obsessed since SD 1.5, and video models are blowing my mind. They used to be complete shit like 1-2 years ago. I'm going to need a 10TB HDD soon. I do all my generating on an NVMe, though. Much faster.
Prompt ideas?
The first video on the model showcase includes the full prompt info that was used. I used the same prompt for every video I posted initially:
"Pl0wView. Video of a woman being fucked doggy style by a man from behind. Her face expressing intense pleasure and moaning. The man aggressively thrusts deep and hard into her rhythmically deep and hard causing the woman's ass and breasts to bounce and jiggle in rhythm with his deep and hard thrusts. The man's head and face are not in frame and cannot be seen."
@lazerblazer Thanks! :)
@noyboy np! I would highly recommend playing around and experimenting with the prompts. Re-reading my prompt, it is pretty bad. I didn't mean to have "deep and hard" repeated 3 times, and those words weren't consistently in the training data. I was trying to see how aggressively I could make the man go at it haha.
@lazerblazer Speaking of this... If you decide to do a new version, it'd be great to get a wide variety of movement (angles are already great) with appropriate captioning (slow and gentle, hard and fast, etc.)
@sarashinai I totally agree. I didn't include enough of the slow/gentle/hard/fast descriptions in my data set and thus it is pretty hard to control. There will be a next version, not quite sure when yet, but I already have some more diverse videos on the movements and will update the text to include those type of descriptions. Appreciate the feedback!
Excellent! Good job training all these animations ;) I hope full nelson will also be on the training list! :)
anyhow thanks and keep up the great work!
Thank you! Full nelson would be awesome and is definitely on the list of what I am looking at doing.
Awesome lora, thank you for sharing. Looking forward to your updates
Thank you!
Good training data apparently. Hunyuan please!
I hate to disappoint but I will only be training Wan LoRAs for the foreseeable future. Nothing against Hunyuan, I just like Wan's output more so that is where my time will go.
@yekal394674355552 this is a quite good Hunyuan lora for the same subject: https://civitai.com/models/1103323/hunyuanvideo-doggystyle-facing-camera
Can anyone clarify for me: when using an image, does it need to be a still of a doggy style position? Or can it just be a girl on her hands and knees, and the lora will add the male? 🤔
I have always done it with the male already in the picture. I’m not sure what would happen when starting with just a girl only, but If I had to guess I don’t think it would work.
lol i2v means what it states, you don't want to wait whatever amount of gen time just to see the girl turn into a dog and run out of frame
I've done it a couple of times. The trick is to describe him coming into frame in great detail. It's still kind of hit and miss but it can definitely be done.
@SLAPY7 if you have an image you like without the male in it, you could try inpainting first before i2v
Not sure about this lora; it's possible but a bit hit and miss. This img2video I did with another lora didn't have the guy in the frame, for example:
https://civitai.com/images/63163990
So here's a trick to help you out here. Get a picture of a girl/boy/thing, and if you want another girl/boy/thing rendered in the image, just do the following.
1.) Open an image editor (XnView is great) and open the image you want to use for i2v.
2.) Do the same with an image of the thing you want to be added to the i2v image.
3.) Crop or trace around the thing you want and copy it.
4.) Paste it into the image you opened in step 1. You can resize and reposition it.
5.) Save the image and then use it for i2v.
This works most of the time but does take some experimenting to get right. Hope this helps some people to get an idea of how people are getting some good i2v results.
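If you'd rather script steps 3-5 than do them by hand, here's a minimal Pillow sketch of the same idea. Filenames, position, and scale are placeholders, and it assumes you've already cut the subject out with a transparent background (the cutout's alpha channel then doubles as the paste mask):

```python
from PIL import Image

def composite_start_frame(base_path, cutout_path, out_path, pos, scale=1.0):
    """Paste a transparent-background cutout onto a base image to build
    an i2v start frame (placeholder paths; a sketch, not a pipeline)."""
    base = Image.open(base_path).convert("RGBA")
    cutout = Image.open(cutout_path).convert("RGBA")
    if scale != 1.0:
        cutout = cutout.resize(
            (int(cutout.width * scale), int(cutout.height * scale))
        )
    # Passing the cutout itself as the third argument uses its alpha
    # channel as the mask, so only non-transparent pixels are pasted.
    base.paste(cutout, pos, cutout)
    base.convert("RGB").save(out_path)
```

Same caveat as the manual version: it takes some experimenting with position and scale before the video model treats the pasted figure as part of the scene.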