Ima Luvva is a collection of loras and an embed for creating images and videos of an AI girlfriend. She's twenty-something, with auburn hair, a slender body, long legs, medium breasts, olive skin and large hazel eyes. When That Bitch throws you out, or leaves you for another guy (or girl), there's Ima. If you need a shoulder to cry on or someone to keep you warm at night, Ima is your girl. She's always ready to listen, loves showing off her sexy body in small string bikinis, likes dancing, and enjoys an intimate private photoshoot. Hair up or hair down, she's always ready for companionship.
creation process - initial dataset 27 images, final dataset 121 images, all 1024x1024
created a text embed for the initial character design (hair, body, skin, eyes, breasts, face, etc.).
used the text embed with sdxl models to create the initial character dataset.
used the initial character dataset to train a lora on the base sdxl model.
used sdxl and the new sdxl lora to create a better dataset (see the sketch below).
used the new dataset to train a lora on the flux1.dev base model.
used flux1.dev and the new flux lora to create the final dataset.
used the flux-based dataset to train loras for hunyuan video, wan2.1 video and sdxl lustify!
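as a rough illustration of the middle steps above, here's a minimal diffusers sketch that loads an sdxl checkpoint, the character embed and the round-1 character lora, then batches out 1024x1024 dataset images. the file names, the ima_luvva trigger token and the prompt are placeholders (and the embed is assumed to be in the usual two-key sdxl format), not the actual release files.

```python
# minimal sketch, assuming a stock sdxl base checkpoint; embed/lora file names and
# the "ima_luvva" trigger token are placeholders, not the actual release files
import torch
from safetensors.torch import load_file
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# sdxl text embeds usually carry weights for both text encoders (clip_l + clip_g)
embed = load_file("ima_luvva_embed.safetensors")
pipe.load_textual_inversion(embed["clip_g"], token="ima_luvva",
                            text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)
pipe.load_textual_inversion(embed["clip_l"], token="ima_luvva",
                            text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)

# round-1 character lora trained on the initial 27-image dataset
pipe.load_lora_weights("ima_luvva_sdxl_v1.safetensors")

prompt = "photo of ima_luvva, auburn hair, hazel eyes, olive skin, string bikini, beach"
for i in range(8):
    img = pipe(prompt, width=1024, height=1024, num_inference_steps=30).images[0]
    img.save(f"dataset/ima_{i:03d}.png")
```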
NOTE: training datasets for the sdxl, flux and wan loras have been added.
NOTE 2: for some z-image lora images, this workflow was used - https://civarchive.com/models/2325079/sdxl-to-z-image
it lets me use sdxl models to create the initial latent, then sends it to the z-image sampler to refine/upscale.
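the linked workflow isn't reproduced here, but the general idea is a two-stage hand-off: draft with an sdxl model, then refine/upscale with a second sampler. the sketch below shows that pattern with plain diffusers pipelines, using sdxl img2img as a stand-in for the z-image sampler (and a decoded image instead of a raw latent), so it's an approximation of the idea, not the actual workflow.

```python
# two-stage draft -> refine/upscale sketch; the refiner stage here stands in for the
# z-image sampler used in the linked comfyui workflow (illustration only)
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "photo of a woman with auburn hair and hazel eyes, studio lighting"
draft = base(prompt, width=768, height=768, num_inference_steps=25).images[0]

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# low strength keeps the draft's composition and mostly cleans up / sharpens detail
refined = refiner(
    prompt,
    image=draft.resize((1152, 1152)),
    strength=0.45,
    num_inference_steps=30,
).images[0]
refined.save("refined.png")
```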
Description
trained on 133 flux-based 1024x1024 images at 640x640 resolution. 140 epochs, 18624 steps
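as a rough sanity check, the step count lines up with the dataset size and epoch count at batch size 1:

```python
# steps ~ images * epochs at batch size 1 with a single repeat; the small gap vs the
# reported 18624 comes down to the trainer's batching/bucketing details (assumption)
images, epochs = 133, 140
print(images * epochs)  # 18620, close to the reported 18624
```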
Comments (4)
That's such a cute LoRA! I want to make mine as smooth as your video, but it's not going very well. I'm not very familiar with V2V—are you using a special workflow?
thank you. i don't remember where i got the original workflow from, but it should be embedded in the video. i did make some changes to it. it's a wan/vace v2v workflow. if you save one of the videos to your computer and drag and drop it into comfyui, it should populate the workflow. let me know if you have trouble with it.
the main changes i made to the standard v2v workflow were: changed sampling steps from 4 to 8, changed the controlnet from openpose to depthanythingv2, interpolate x2, and save at 28fps.
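for the depth side, the per-frame depth maps can be pulled with any depthanythingv2 checkpoint before they go into the controlnet. something like this works (the repo id below is just one of the available v2 checkpoints, not necessarily the one wired into my workflow):

```python
# pull a grayscale depth map per frame with a depthanythingv2 checkpoint; the repo id
# is an example, any of the v2 checkpoints on the hub should behave the same way
from transformers import pipeline
from PIL import Image

depth = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

frame = Image.open("frames/frame_0001.png")
out = depth(frame)
out["depth"].save("depth/frame_0001.png")  # feed these into the depth controlnet
```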
Thank you for the detailed explanation.
I'm going to give it a try now, and if it works well, I'll post the results.