CivArchive
    LTX2.3_666asslick - v1.0
    NSFW

    So, this was my first try at creating a LoRA. The status is not even beta, it's delta, or something below. I am a total n00b... but I am learning, so please don't expect too much.

    The LoRA is usable for generating anilingus and/or rimming content,

    BUT: Only for a specific position:

    The giver has to be on the left side of the frame, with only the head and upper body visible. The taker must be on the right side of the frame, kneeling and bending slightly forward.

    The classical facesitting positions are, unfortunately, problematic. I was lucky to generate one okay-ish video.

    Please check my attached videos to see which positions work.

    Have fun! ;-)

    Description

    V0.1, trained with this prompt:

    666asslick, two people, person on the left is licking the butthole of another person, indoor scene, soft lighting, close-up shot, slight camera movement, realistic

    FAQ

    Comments (16)

    AI_2_addicted · Apr 5, 2026
    CivitAI

    Great work!!

    666m4ck1
    Author
    Apr 6, 2026 · 1 reaction

    :-) Thank you!

    MilitAI · Apr 6, 2026 · 1 reaction
    CivitAI

    HOLYYYY THAAAAAANKS

    666m4ck1
    Author
    Apr 6, 2026

    ;-) You are welcome!

    sniiper2011 · Apr 6, 2026
    CivitAI

    When I generate videos using LORAS, the faces always come out slightly blurry, even when I use different workflows. Why is that?

    CapAndABull · Apr 9, 2026 · 3 reactions
    CivitAI

    Works great for I2V. I don't even bother with T2V on LTX models. No reason to. You're always better off just using Qwen or Chroma -> LTX.

    LevelAvocado5106749 · Apr 12, 2026 · 1 reaction
    CivitAI

    You could do one for cunnilingus... there aren't any, and since you've done anal, I don't think it'll be too difficult for you. If it's even possible, of course. Thanks.

    666m4ck1
    Author
    Apr 12, 2026 · 3 reactions

    That's a good idea! I will try it.

    Ostap222 · Apr 13, 2026
    CivitAI

    What tool do you use for training LTX2.3 LoRAs?

    666m4ck1
    Author
    Apr 13, 2026

    I am using PowerShell. ChatGPT was very helpful. Maybe there are better solutions... I will find out. What are you using for training?

    Ostap222 · Apr 14, 2026

    Hmm, do you have a link to the app? Googling "PowerShell train lora" gives a number of different processes using PowerShell (a cross-platform task-based command-line shell).
    I'm using diffusion-pipe: https://github.com/tdrussell/diffusion-pipe
    but it seems it doesn't support LTX2.3 for now.

    666m4ck1
    Author
    Apr 14, 2026

    @Ostap222 LTX LoRA Training – Quick Guide (Console)

    1. Prepare video clips

    short clips (2–4 seconds)
    same action
    similar perspective

    Example:

    clip_001.mp4
    clip_002.mp4
    clip_003.mp4

    Folder:

    C:\Users\...\ltx_dataset_videos

    2. Generate JSON file

    Open PowerShell and run:

    cd C:\Users\...\ltx_dataset_videos

    Then:

    Get-ChildItem -Filter *.mp4 | Sort-Object Name | ForEach-Object {
        '{"file": "' + $_.Name + '", "text": "triggerword, description of action, indoor scene, soft lighting, realistic"}'
    } | Set-Content dataset.jsonl

    Afterwards, you can manually refine dataset.jsonl.
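For anyone not on Windows, the PowerShell one-liner above can be mirrored in Python. This is a sketch of my own, not part of the original guide; the caption string is a placeholder you should edit per clip afterwards:

```python
# Cross-platform sketch: build dataset.jsonl from the .mp4 clips in a folder,
# one JSON line per clip, sorted by file name (same shape as the PowerShell
# version above). The caption is a single placeholder for all clips.
import json
from pathlib import Path


def build_dataset_jsonl(video_dir: str, caption: str) -> Path:
    """Write dataset.jsonl next to the clips and return its path."""
    folder = Path(video_dir)
    out = folder / "dataset.jsonl"
    lines = [
        json.dumps({"file": p.name, "text": caption})
        for p in sorted(folder.glob("*.mp4"))
    ]
    out.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return out
```

As with the PowerShell version, open the resulting dataset.jsonl and refine each caption by hand before preprocessing.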


    3. Preprocess dataset

    cd C:\Users\...\LTX-2

    Then:

    python packages/ltx-trainer/scripts/process_dataset.py C:\Users\...\ltx_dataset_videos\dataset.jsonl --resolution-buckets 768x768x17 --model-path ... --text-encoder-path ... --output-dir C:\Users\...\ltx_dataset_preprocessed

    This will generate:

    video latents
    text embeddings
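Before running the preprocessing step, it can help to sanity-check dataset.jsonl. This small validator is my own addition (not part of the trainer): it checks that every line is valid JSON with "file" and "text" keys and that each referenced clip actually exists next to the file:

```python
# Sanity-check a dataset.jsonl before preprocessing: each non-empty line must
# be valid JSON with "file" and "text" keys, and each referenced clip must
# exist in the same folder as the jsonl file.
import json
from pathlib import Path


def validate_dataset_jsonl(jsonl_path: str) -> list:
    """Return a list of problem descriptions; an empty list means it looks OK."""
    path = Path(jsonl_path)
    problems = []
    for lineno, raw in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        if not raw.strip():
            continue  # skip blank lines
        try:
            row = json.loads(raw)
        except json.JSONDecodeError as e:
            problems.append(f"line {lineno}: invalid JSON ({e.msg})")
            continue
        for key in ("file", "text"):
            if key not in row:
                problems.append(f"line {lineno}: missing '{key}' key")
        if "file" in row and not (path.parent / row["file"]).is_file():
            problems.append(f"line {lineno}: clip not found: {row['file']}")
    return problems
```

If the returned list is empty, the preprocessing script has everything it needs on disk.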

    4. Adjust training config

    In the YAML file:

    preprocessed_data_root: "C:\\Users\\...\\ltx_dataset_preprocessed"

    5. Start training

    python -m accelerate.commands.launch packages/ltx-trainer/scripts/train.py configs/ltx2_av_lora.yaml

    LoRA training will now begin.


    6. Result

    After training, the LoRA can be found here:

    outputs/checkpoints/lora_weights_step_XXXX.safetensors

    You can load this file directly in ComfyUI.


    In short

    Workflow: video clips → dataset.jsonl → preprocess → training → LoRA

    This is the complete console-based workflow.

    meryruizk332 · Apr 17, 2026

    @666m4ck1 If you don't mind, how much VRAM do you have while training LoRAs for LTX2.3?

    666m4ck1
    Author
    Apr 18, 2026

    @meryruizk332 I'm using an RTX 5090 with 32GB VRAM. The training process took almost all of it.

    meryruizk332 · Apr 21, 2026

    @666m4ck1 Thank you so much for the reply and information

    LORA
    LTXV 2.3

    Details

    Downloads
    1,985
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/5/2026
    Updated
    5/14/2026
    Deleted
    -
    Trigger Words:
    666asslick

    Files

    LTX2.3_666asslick.safetensors