CivArchive

    Vore | Headfirst

    As the name implies, this LoRA covers the concept of a pred swallowing a prey "headfirst", i.e., the upper torso is the first part to be swallowed.

    The main activation tag is "vhf" for 1.5; older versions use "vore_head_first".

    Support tags: swallowing, vore, etc.

    You may also want to add tags such as "human prey", "[character] prey", and so on, to focus on specific character types. Other useful tags include "neck bulge", "big belly", and "abdominal bulge".
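
    For instance, a prompt built purely from the tags mentioned above might look like this (illustrative only; adjust and reorder to taste):

        vhf, vore, swallowing, human prey, neck bulge, big belly, abdominal bulge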


    Comments (3)

    Lazman (CivitAI) · Apr 11, 2025 · 17 reactions

    Yes, Finally, a non-weird vore lora, Lol.. I mean, sure.. 'vore' kinda comes with 'weird' by default, but, human females swallowing people is next level weird, like large uncanny valley vibes from that trend, and feet first with the head comin outta the mouth is just.. idk.. more meme than fetish. Just my opinion though.

    In any case, fully glad for this!

    "Three artificial tags"

    In my experience, if you add tags that are unique to the model/lora: A) the obvious one (probably), make sure you're training both text encoders, or at least the embeddings for them; B) add a lot of description around the tags in order to fully define their meaning, since the model has no idea beyond the likely limited images in the dataset for that particular prompt.

    Detailed prompting can be the hardest part of making a lora (unless you're manually masking characters, cuz that can be a bit time-consuming). But if you can't think of the words to describe something in accurate detail, then describe it as best you can to ChatGPT, while also letting ChatGPT know that you're doing this to train a lora for an 'sdxl' model, and it'll usually give you something good.
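
    As a rough illustration of point B above, the sketch below writes kohya-style .txt sidecar captions that surround a new trigger tag with descriptive tags. The folder layout, file names, and per-image descriptions are hypothetical; the trigger tag is the "vhf" tag from the description above.

        # Sketch: write per-image caption files that pair a new trigger tag with
        # descriptive tags (kohya-style .txt sidecars next to each training image).
        # The folder, file names, and extra tags below are hypothetical examples.
        from pathlib import Path

        DATASET_DIR = Path("train/10_vhf")    # hypothetical "repeats_trigger" folder name
        TRIGGER = "vhf"                       # activation tag from the description above

        extra_tags = {
            "img_001.png": "vore, swallowing, human prey, neck bulge",
            "img_002.png": "vore, swallowing, big belly, abdominal bulge",
        }

        DATASET_DIR.mkdir(parents=True, exist_ok=True)
        for image_name, tags in extra_tags.items():
            caption = f"{TRIGGER}, {tags}"
            caption_path = DATASET_DIR / Path(image_name).with_suffix(".txt")
            caption_path.write_text(caption, encoding="utf-8")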

    oilio2 (Author) · Apr 12, 2025

    Thank you for your feedback!

    Regarding the three artificial tags: this has worked before, but as it turned out, I had an imbalanced dataset, so it pretty much always leaned towards certain aspects. The images were already tagged pretty well, so I just tweaked some of the finer details here and there.

    I've now trained a new version on a reduced subset of images and am just doing some more tests locally before updating, but right off the bat it works better than it did before. I also removed the artificial tags and replaced them with equivalent tags, which also seems to make things easier.

    Hopefully it all goes well, fingers crossed!

    Lazman · Apr 12, 2025

    @oilio2 "The images were already tagged pretty well"

    How well, though? For example, with something like foreskin, one could caption it with:

    "foreskin, extra skin at the tip of the penis"

    And just leave it at that; some may think that's enough. However, if you use captions like:

    "foreskin hanging past the tip of his penis, Natural, uncut, Foreskin forming a soft natural overhang past the tip, fully covered penis tip, long retractable foreskin, smooth seamless skin transition, tapered foreskin tip, very long foreskin, the hole on the tip of his foreskin is tiny and only opens up when urinating"

    Then that gives the model a lot more to work with, and ya end up with a higher-precision outcome.

    Speaking of precision, do you mask when creating loras? In my experience, doing this also creates better loras, cuz it gives full focus on the target concept and doesn't waste time and resources arbitrarily training clothing and backgrounds (though with vore concepts in particular, you wouldn't want to mask out the clothing, as the model then won't as easily understand the concept of an entire person being swallowed).

    But before masking on my own loras, I found that they would often produce ugly patchwork blends of clothing, or ratty-looking sheets/curtains, even when sheets/curtains were in the negs. Cuz the lora has all of that extra arbitrary info in its data.
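
    For reference, the usual mechanism behind this kind of masking is weighting the per-pixel reconstruction loss so unmasked regions contribute nothing to the gradient. A generic PyTorch sketch follows; it is not any particular trainer's exact implementation, and the toy tensors are placeholders.

        import torch
        import torch.nn.functional as F

        def masked_mse_loss(pred, target, mask):
            # pred/target: (B, C, H, W); mask: (B, 1, H, W) with 1 = train on this pixel.
            # For latent-space training the mask would first be resized to latent resolution.
            per_pixel = F.mse_loss(pred, target, reduction="none")
            per_pixel = per_pixel * mask                      # zero out background/clothing pixels
            return per_pixel.sum() / (mask.sum() * pred.shape[1]).clamp(min=1.0)

        # Toy usage: random tensors with a mask covering only the left half of the image.
        pred = torch.randn(2, 4, 64, 64)
        target = torch.randn(2, 4, 64, 64)
        mask = torch.zeros(2, 1, 64, 64)
        mask[..., :32] = 1.0
        print(masked_mse_loss(pred, target, mask))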

    Also, what precision do you train with? If you have at least a 16 GB Nvidia GPU, like a 4060 Ti, then you can get away with training/saving the lora at fp32. Takes longer to train, but worth it for producing a better product in the end. If you've got a 4090 or higher, then fp32 should just be a given, cuz the extra train time would likely be negligible.
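
    In PyTorch terms, the fp32 vs reduced-precision choice roughly comes down to whether the forward pass runs under autocast and what dtype the weights are saved in. A generic sketch with a stand-in linear layer (not an actual LoRA trainer) might look like this:

        import torch
        from safetensors.torch import save_file

        device = "cuda" if torch.cuda.is_available() else "cpu"
        use_fp32 = True                                   # full precision if VRAM allows

        model = torch.nn.Linear(64, 64).to(device)        # stand-in for the LoRA layers
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for _ in range(10):
            x = torch.randn(8, 64, device=device)
            optimizer.zero_grad()
            # bf16 autocast needs no GradScaler; disabled entirely when use_fp32 is True
            with torch.autocast(device, dtype=torch.bfloat16, enabled=not use_fp32):
                loss = model(x).pow(2).mean()
            loss.backward()
            optimizer.step()

        # Save in fp32; casting tensors with .half() instead roughly halves the file size.
        save_file({k: v.detach().float().cpu().contiguous() for k, v in model.state_dict().items()},
                  "lora_precision_example.safetensors")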

    Batch size is debatable, even to those with a decent overall knowledge of it. If you train at a batch size of 1, you get the benefit of a lora that's better at reproducing the concept in more limited scenarios; higher batch sizes speed up training and give better generalization, but at the cost of lowered precision in reproducing the input data.

    At least that's how I understand it. The way I visualise it is like overlaying the images over one another: the model gets a better idea of how subtle transitions can be made to give an alternate look while maintaining the concept, though it may increase the likelihood of extra limbs and such.
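
    That intuition lines up with the numbers: the loss for a batch is the mean over its samples, so the gradient step is the average of the per-sample gradients, and any single image pulls the weights less strongly as the batch grows. A tiny PyTorch check with a toy linear model:

        import torch
        import torch.nn.functional as F

        torch.manual_seed(0)
        model = torch.nn.Linear(8, 1)
        batch = torch.randn(4, 8)             # batch size 4
        target = torch.randn(4, 1)

        # Gradient from the whole batch at once...
        grad_batch = torch.autograd.grad(
            F.mse_loss(model(batch), target), model.weight)[0]

        # ...equals the mean of the four batch-size-1 gradients.
        per_sample = [
            torch.autograd.grad(
                F.mse_loss(model(x.unsqueeze(0)), y.unsqueeze(0)), model.weight)[0]
            for x, y in zip(batch, target)
        ]
        print(torch.allclose(grad_batch, torch.stack(per_sample).mean(dim=0)))  # True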

    Side note: what are your thoughts on hard vore? i.e., broken limbs, blood, etc.?

    LORA
    Illustrious

    Details

    Downloads
    288
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/10/2025
    Updated
    5/3/2026
    Deleted
    -
    Trigger Words:
    vore_head_first

    Files

    Vore__Headfirst.safetensors

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)
    Other Platforms (TensorArt, SeaArt, etc.) (1 mirror)

    Available On (2 platforms)

    Same model published on other platforms. May have additional downloads or version variants.