CivArchive
    Sarah Peterson's lesbian tribbing Scissoring sitting XLrd - v1.0 SDXL XLrd
    NSFW

    Generation Guide

    Model Information

    • Model Name: {model_name} (replace with the actual filename you downloaded, e.g., gngsfimZIB.safetensors)

    • Trigger Word: {trigger_word}

    Resolution

    • 2:3 ratio: 832×1216 (portrait)

    • 3:2 ratio: 1216×832 (landscape)

    • Square: 1:1

    • Note: You can vary these resolutions with limited success

    • FT15 models: Lower max resolution at 512×768

    Generation Parameters

    • Sampler: Euler (typically)

    • CFG Scale:

      • Standard models: 3-7

      • Turbo models: 1

    • Steps:

      • Standard models: 20-50

      • Turbo models: 9

    • LoRA Strength: 0.6-1.0

      • If images look "cooked" or overprocessed, lower the strength

    Model Series Identifiers

    • FT15 - Stable Diffusion 1.5 (max resolution: 512×768)

    • XLrd - SDXL, RunDiffusion X-based

    • CHHD - Chroma models

    • ZIMG - Z-Image Turbo

    • ZIB - Z-Image Base

    • FKFB - Flux Klein 4B

    • QWN - Qwen

    Note: LoRA files are large and can be resized if needed

    Current Recommendation (January 2026): Use ZIB/ZIMG (Z-Image Base/Turbo) or Chroma models for best results.

    Dataset Type Indicators

    • mx - Vastly larger datasets with less consistency, typically trained at lower learning rates for longer durations

    • lncc - Smaller, more specific aesthetic-focused datasets

    Training Data Scale: Datasets vary from 20-30 images to over 1,000,000 images. The median dataset size is closer to 10,000 images.

    Training Techniques: Models starting at SDXL use mixed resolution training, multi-subject crop, and flips for improved generalization.

    Using the Wildcard Prompt Template

    The piped string format below is designed for ImpactPack Wildcard Processor or Automatic1111 Dynamic Prompts. Copy and paste it into either extension to generate a new randomized prompt each time, built on the distribution of the training dataset.

    Prompt Format

    <lora:{model_name}:{0.6|0.7|0.8|0.9|1}> {trigger_word}, {wildcard_tags}

    Example:

    <lora:gngsfimZIB:{0.6|0.7|0.8|0.9|1}> example_triggerword, {additional|tags|here}

    Understanding the Wildcard Tags

    • More pipes (|) in a tag group = rarer tags in the training data

    • Fewer pipes or repeated options = more common tags with better model performance

    • More examples in the training data mean the model is better at that particular task or concept
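The pipe-weighting described above can be sketched as a small sampler. This is a minimal illustration of the `{a|b|c}` syntax as parsed by Dynamic Prompts / ImpactPack, assuming simple non-nested groups; the function name `expand_wildcards` is hypothetical, not part of either extension.

```python
import random
import re

def expand_wildcards(template, rng=random):
    """Expand {a|b|c}-style wildcard groups by picking one option at random.

    Each pipe-separated option is equally likely, so empty options make a
    tag rarer: {open mouth, ||||} has one filled slot and four empty ones,
    giving "open mouth, " a 1-in-5 chance, mirroring its rarity in the
    training data.
    """
    def pick(match):
        options = match.group(1).split("|")
        return rng.choice(options)

    # Replace one innermost {...} group at a time until none remain.
    while re.search(r"\{([^{}]*)\}", template):
        template = re.sub(r"\{([^{}]*)\}", pick, template, count=1)
    return template

prompt = expand_wildcards("<lora:lsbtrbmxXLrd:{0.6|0.7|0.8|0.9|1}> "
                          "{lsbtrbmxxlrd, }{open mouth, ||||}")
```

Each call yields a different randomized prompt, which is why letting the generator loop on this template samples the breadth of the training set.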

    Manual Usage (without wildcards)

    If you're not using dynamic prompts:

    1. Load the LoRA manually in your interface

    2. Start with the trigger word {trigger_word} at the beginning of your prompt

    3. Add additional tags after the trigger word to vary the composition

    4. Tags that appear more frequently in the wildcard examples will produce more consistent results
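The steps above can be sketched as a tiny helper, assuming a plain-text prompt field; `build_prompt` is a hypothetical name, and the example tags come from this page's wildcard string.

```python
def build_prompt(model_name, strength, trigger_word, extra_tags):
    """Assemble a manual (non-wildcard) prompt: LoRA tag first, then the
    trigger word, then any extra tags to vary the composition."""
    lora = f"<lora:{model_name}:{strength}>"
    return ", ".join([f"{lora} {trigger_word}", *extra_tags])

# Using this page's model as an example:
prompt = build_prompt("lsbtrbmxXLrd", 0.8, "lsbtrbmxxlrd",
                      ["realistic", "2girls", "sitting"])
# -> "<lora:lsbtrbmxXLrd:0.8> lsbtrbmxxlrd, realistic, 2girls, sitting"
```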

    Tips

    • Always start with the trigger word (the first tag) for best results

    • Check sample images for embedded generation parameters

    • Add additional tags to vary composition and style

    • Experiment with LoRA strength if results don't match expectations

    • Tags with more training examples will be more reliable and consistent

    • Reference the sample images on this page for working parameter combinations


    FAQ: Dataset Filename & Trigger Word Conventions

    What problem does this filename format solve?

    The filename is designed to avoid collisions with generic or common names while also serving as a programmatic signal. It encodes both the trigger word and the dataset type, making it easy for scripts and training pipelines to identify and handle the dataset correctly.

    Why not use a generic filename?

    Generic filenames tend to overlap across projects and environments. This format ensures:

    • Uniqueness across datasets

    • Clear intent when parsed programmatically

    • No ambiguity about dataset content or usage

    What do the suffix codes mean?

    The suffix in the filename specifies:

    • The resolution of the dataset

    • The model architecture tier it is intended for

    This makes it immediately clear what kind of model configuration the dataset targets and helps avoid compatibility issues.
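As a sketch of how a script might decode these filenames, assuming the convention is exactly concept + dataset-type code + architecture suffix (the lookup tables below are inferred from this guide, not an official spec):

```python
# Hypothetical decoder for the naming convention described above,
# e.g. "lsbtrbmxXLrd" -> concept "lsbtrb", dataset type "mx",
# architecture suffix "XLrd".

DATASET_TYPES = {"mx": "large mixed dataset", "lncc": "focused aesthetic dataset"}
ARCH_SUFFIXES = {"FT15": "SD 1.5", "XLrd": "SDXL (RunDiffusion)", "CHHD": "Chroma",
                 "ZIMG": "Z-Image Turbo", "ZIB": "Z-Image Base",
                 "FKFB": "Flux Klein 4B", "QWN": "Qwen"}

def parse_model_name(stem):
    """Split a filename stem into concept, dataset type, and architecture."""
    for suffix, arch in ARCH_SUFFIXES.items():
        if stem.endswith(suffix):
            body = stem[: -len(suffix)]
            for code, dtype in DATASET_TYPES.items():
                if body.endswith(code):
                    return {"concept": body[: -len(code)],
                            "dataset_type": dtype, "architecture": arch}
            return {"concept": body, "dataset_type": None, "architecture": arch}
    return None  # no recognized suffix

info = parse_model_name("lsbtrbmxXLrd")
# -> {"concept": "lsbtrb", "dataset_type": "large mixed dataset",
#     "architecture": "SDXL (RunDiffusion)"}
```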

    What does "mx" stand for?

    mx means mix. It indicates that the dataset is diverse and vastly larger (potentially hundreds of thousands to over a million images), though less consistent than focused datasets. These models are typically trained at lower learning rates for longer durations to accommodate the dataset diversity.

    What does "lncc" stand for?

    lncc indicates smaller, more specific datasets focused on a particular aesthetic. These are more consistent but cover a narrower range of content.

    How are trigger words determined?

    Trigger words are embedded in the dataset and filename structure. They function as activation tokens that help the model recognize and generate content consistent with the training data. Always use the specified trigger word at the start of your prompt for best results.

    How large are the training datasets?

    Training datasets vary significantly:

    • Minimum: 20-30 images

    • Maximum: Over 1,000,000 images

    • Median: Approximately 10,000 images

    Larger datasets (mx) enable broader capabilities but may be less consistent. Smaller datasets (lncc) are more focused and aesthetically coherent.


    For best results, always check the sample images on this model page—generation parameters are embedded in the metadata.

    v1.0 SDXL XLrd - check back for updates, compare model hash and last scan time.

    Tags: 1216*832, 832*1216, square, XLrd, XL rundiffusion

    <lora:lsbtrbmxXLrd:{0.6|0.7|0.8|0.9|1}> {lsbtrbmxxlrd, }{realistic, }{multiple girls, }{yuri, }{kiss, }{2girls, }{tribadism, }{completely nude, }{french kiss, |}{sitting, ||}{feet, |||}{tongue, |||}{lying, ||||}{toes, ||||}{open mouth, ||||}{girl on top, ||||}{on back, ||||}{arm support, ||||}{ass, |||||}{couple, |||||}{female pubic hair, |||||}{navel, |||||}{dark-skinned female, |||||}


    Comments (11)

    pixel8er · Aug 3, 2025

    Every single one of your LoRAs has a bunch of garbage data in it. Please, please, PLEASE take the extra 30 seconds to look at what you're posting.
    These are not usable!

    sarahpeterson (Author) · Aug 4, 2025

    read the description and guide... this is an early release/placeholder, check back in a month or so. Try the older XLrd models, there are thousands of examples

    pixel8er · Aug 4, 2025

    but |||| why|||do||||all|||||| of|| them|||| |look |||| like |||||||||||||| this

    sarahpeterson (Author) · Aug 4, 2025

    pixel8er more pipes, rarer training term, try dynamic prompts automatic1111

    it parses them

    rando2048 · Aug 15, 2025

    pixel8er The {|} is a kind of an inline wildcard. The trigger provided is actually a way to hit generate forever, let it run for a bit to see what all the lora can do.

    {this|orthis} works in A1111, maybe ComfyUI too, I'm new to it though, not sure. {this|} means a 50% chance of sending nothing to the prompt, and 50% of sending "this". {open mouth, ||||} gives you a 1/5 chance of "open mouth" getting sent in the prompt. You can also do this with the weight of the lora, which you can see too. It's not just for dual values either; you could do {green|blue|brown} eyes and it will randomly pick one.

    I haven't tried the lora yet, so can't comment on that, but saw your comment and understood what you were asking.

    pixel8er · Aug 15, 2025

    rando2048 yep, that exists in some Comfy parsers too - but those are an implementation choice, not trigger words. The trigger words are only the ones to the left of the |s.
    This is just added noise that makes these harder to use.

    Grim0r · Nov 14, 2025

    @rando2048 The issue is that every single prompt has the brackets around it, even the first bunch that don't even have any pipes, making the brackets pointless there. Also, to lower the chance of a tag appearing, it would be better to do {0.1::this| } rather than {this| | | | | | | | | }, as it's much more user-friendly and you can be more specific with the ratio.

    sarahpeterson (Author) · Nov 14, 2025

    @Grim0r read the paper I published on it. The pipes are a function of the actual training data distribution. The 3 most popular GUI based image generators automatically parse them. The lora are for poses, so the distribution over the differences in the poses is retained too. You can add the lora with no trigger or prompt keywords and it will basically work too.

    pixel8er · Nov 14, 2025

    @sarahpeterson I can appreciate what you are trying to accomplish but it is truly just noise you are adding to your civit uploads

    Grim0r · Nov 15, 2025 · 1 reaction

    @sarahpeterson The only functions of the pipe in base A1111 are to combine multiple elements into one (apple|lemon will create an image of something that looks halfway between an apple and a lemon), or to generate multiple images from the initial prompt, adding whatever comes after the pipes in successive generations. With Dynamic Prompts installed, inside the curly brackets it just acts to separate different options to pick between, and having empty gaps between the pipes as you have just makes the prompt at the start less likely to be selected, that's all. The pipes are not some function of the training data distribution, whatever that's meant to mean, and again {0.1::this| } is identical to {this| | | | | | | | | | }. The only thing both of those do is give "this" a 1 in 11 chance of being picked. And yes, I've read your article (which btw is not a published paper, just to be clear).

    To quote the information I was able to find about it:

    Regularization of Entropy

    While the pipe symbol is used for creating variations in prompts, it does not directly relate to the concept of regularizing entropy in the context of machine learning or image generation. Regularization techniques in machine learning typically aim to prevent overfitting and improve model generalization, but the specific use of pipe symbols in prompts is more about enhancing creative output rather than controlling entropy.

    Conclusion

    In summary, Stable Diffusion WebUI uses pipe symbols to facilitate the generation of varied image outputs from a single prompt. However, this feature does not serve the purpose of regularizing entropy in the traditional sense used in machine learning.

    rando2048 · Aug 15, 2025

    It would be useful to put several images up with the various things the lora can do. Thanks for your contribution regardless.

    LORA
    SDXL 1.0

    Details

    Downloads
    389
    Platform
    CivitAI
    Platform Status
    Available
    Created
    7/21/2025
    Updated
    5/13/2026
    Deleted
    -
    Trigger Words:
    <lora:lsbtrbmxXLrd:{0.6|0.7|0.8|0.9|1}> {lsbtrbmxxlrd, }{realistic, }{multiple girls, }{yuri, }{kiss, }{2girls, }{tribadism, }{completely nude, }{french kiss, |}{sitting, ||}{feet, |||}{tongue, |||}{lying, ||||}{toes, ||||}{open mouth, ||||}{girl on top, ||||}{on back, ||||}{arm support, ||||}{ass, |||||}{couple, |||||}{female pubic hair, |||||}{navel, |||||}{dark-skinned female, |||||}

    Files

    lsbtrbmxXLrd.safetensors

    Mirrors

    CivitAI (1 mirror)
