CivArchive
    Personal Ami style (for pony XL) - pony-v3-dora
    NSFW
    Preview 9125654
    Preview 9125862

    Regular LoRA version update: a LoRA version trained with different hyperparameters; it should perform better, even with the artist's innate OCs prompted.

    Old info about the DoRA part: you need to apply this commit to Forge, or update to the latest commit of A1111, to use the correct implementation of DoRA; otherwise it will work just like a regular LoCon.
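The "works just like a regular LoCon" failure mode follows from how DoRA is defined in the paper (arXiv:2402.09353): the weight is split into a per-column magnitude and a direction, LoRA matrices update the direction, and the magnitude is a separate trainable vector. If a UI ignores the magnitude rescaling, only the plain low-rank update is applied. A minimal NumPy sketch of the merged-weight math (illustrative shapes and names only, not the actual sd-scripts or A1111 code):

```python
import numpy as np

# Hedged sketch of DoRA weight merging (arXiv:2402.09353).
# W' = m * (W0 + B@A) / ||W0 + B@A||_c, where ||.||_c is the
# per-column norm and m is initialized to ||W0||_c.

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 6, 2

W0 = rng.normal(size=(d_out, d_in))            # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01       # LoRA down-projection
B = np.zeros((d_out, rank))                    # LoRA up-projection (zero init)
m = np.linalg.norm(W0, axis=0, keepdims=True)  # trainable magnitude, init ||W0||_c

def dora_weight(W0, A, B, m):
    """Merge a DoRA adapter: rescale each column of the LoRA-updated
    direction back to the learned magnitude m."""
    V = W0 + B @ A                                   # directional update via LoRA
    V_norm = np.linalg.norm(V, axis=0, keepdims=True)
    return m * V / V_norm

# With B = 0 (zero init) the merged weight equals W0 exactly.
assert np.allclose(dora_weight(W0, A, B, m), W0)
```

Dropping the `m * V / V_norm` rescaling and just adding `B @ A` to `W0` is exactly the plain LoRA/LoCon behavior the note above warns about.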

    A LoRA for this artist is redundant, since Pony knows the style without it. It was trained with two purposes: first, to remove the signature from gens; second, to get a more animefied look in gens made with it. A list with all tags can be found here.

    Description

    DoRA version

    FAQ

    Comments (14)

    zaadsatan · Apr 4, 2024 · 2 reactions

    Is there a page where I can read more about DoRA? It's new for me

    bakariso
    Author
    Apr 4, 2024 · 1 reaction

    You can read the paper if you really want to dive into it: https://arxiv.org/pdf/2402.09353.pdf. But put simply, it's just not well optimized for training right now, though theoretically it should yield better quality after training this way.

    zaadsatan · Apr 4, 2024

    @bakariso Thanks!

    bluvoll · Apr 8, 2024

    @bakariso It does yield better reproduction of styles, and some concepts do take better with DoRA; the issue is the 30-60% increase in training time on the same hardware. But trust me on this, it IS better.

    bakariso
    Author
    Apr 8, 2024 · 2 reactions

    @bluvoll It's not that drastic a difference in quality between a properly trained LoCon and a DoRA, from what I've tested. The speed slowdown is because kohaku fucked something up; it dropped even for LoCon with the latest update.

    bluvoll · Apr 8, 2024

    @bakariso You're absolutely right, but that 1 to 2% is sometimes worth dealing with whatever Kohaku did lmao

    bakariso
    Author
    Apr 8, 2024

    @bluvoll It's not 1-2%, maybe 5-10. If you're willing to wait, DoRA is surely preferable. Not sure why DoRA trains even slower, but it does, despite the bug with LoCon speed.

    bluvoll · Apr 8, 2024

    @bakariso From my anecdotal testing, at best it's ~8%, but it varies with the dataset. Small datasets (up to 70-ish images) saw up to 5%, and most of the time it wasn't that noticeable; for bigger ones (1.3k images or more) I did see a noticeable improvement, but the overall time the DoRA took made me cry. Plus, we rarely use such big datasets for LoRA.

    bakariso
    Author
    Apr 8, 2024

    @bluvoll What are your measurements based on: just an empirical feeling, or really measured statistically?

    bluvoll · Apr 8, 2024

    @bakariso Statistically tested using my style datasets, which range from 70 images (Hiroshi) to 1.3k images (Asanagi), then blind testing with some folks on which one they preferred. Most of the time they picked DoRA over LoRA on large datasets, but didn't have strong preferences when the dataset was small. Testing took some 2 weeks even with one A100, as training time went up 30% on that card after maxing out the possible batch size; in comparison, an A6000 increased by some 40%, and ~50% on my 3090.

    bakariso
    Author
    Apr 9, 2024

    @bluvoll Hmm, have you tried not maxing the batch size for small datasets? It's not a very good idea to go bigger than 2 for 70-100 pics, but absolutely fine and preferable for 1300.

    bluvoll · Apr 9, 2024

    @bakariso I tested all the way down to 1, and changes were minimal as I tried styles, not characters or concepts, so those are outside the scope of my testing, sadly.

    bakariso
    Author
    Apr 9, 2024

    @bluvoll That's weird, because batch size is one of the main settings that affects style comprehension. Have you collected all those tests somewhere to look at and compare?

    bluvoll · Apr 9, 2024

    @bakariso Just for personal use as I use a heavily customized sd-scripts.

    LORA
    Pony

    Details

    Downloads
    752
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/4/2024
    Updated
    5/13/2026
    Deleted
    -

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.