    Please check out my Ko-Fi page if you are interested in supporting me.

    LoRA commissions open! -> Fiverr

    This is a LoHa LoRA! You need the LyCORIS extension to use it. Why LoHa? It does a lot better with multi-concepts.

    Trigger words for V3.08 (play with LoRA and tag weights!):

    positive: <lora:ColoredSkinv309:0.7>(purple skin, woman)

    Colors you can try are "blue skin", "green skin", "gray skin", "orange skin", "pink skin", "purple skin", "red skin", "white skin", "yellow skin", "black skin".
    Experiment with adding "very dark" before the skin color tag to get darker shades of that color.
    Also, I tested it for a little bit, and it seems to still recognize colors not present in the dataset; for example, "maroon skin" made a very nice red-brown color in a friend's model. See the example prompt below.
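
    Putting that together, a complete positive prompt might look like the line below (everything after the skin tag is just placeholder character tags, not part of the trigger):

    <lora:ColoredSkinv309:0.7>(very dark blue skin, woman), 1girl, solo, smile, upper body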

    Follow the steps in the Model Version notes to reproduce my images.

    Sample images generated with AbyssOrangeMix3A1.

    All negative embeds should be easy to find, and the prompts are included in the sample images! Click the little icon in the bottom right to see them.

    If you see color bleeding, you can add some negatives to counter it, for example:
    (purple background, purple eyes, purple clothes, purple hair)

    It was trained for 6 epochs on 200 512x512 images (20 for each color) scraped from the internet, using Animefull-final-pruned as the training model.

    Danbooru tags were generated with the Waifu Diffusion 1.4 Tagger and then fine-tuned by hand for specific details like nipple texture, see-through, and impossible shirt or form-fitting clothes.
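
    The exact training command isn't published; purely as an illustration, a kohya-ss sd-scripts launch consistent with the numbers above might look roughly like this (every path and name here is a placeholder, and LoHa support comes from the LyCORIS package):

    # Hypothetical sd-scripts launch; every path, name, and omitted
    # hyperparameter here is a placeholder, not the author's real config.
    import subprocess

    subprocess.run([
        "accelerate", "launch", "train_network.py",
        "--pretrained_model_name_or_path", "animefull-final-pruned.ckpt",
        "--train_data_dir", "dataset/colored_skin",  # 200 images, 20 per color
        "--resolution", "512,512",
        "--network_module", "lycoris.kohya",         # LoHa comes from LyCORIS
        "--network_args", "algo=loha",
        "--max_train_epochs", "6",
        "--output_dir", "output",
        "--output_name", "ColoredSkin",
    ], check=True)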

    Description

    Generate using a weight of 0.4-1.0 like this: <lora:ColoredSkinv308:0.75>.

    Using an improved dataset and new training settings.


    How I achieved my generations (a scripted version of these three steps is sketched after the list):

    1. txt2img: DPM++ SDE Karras with 20 steps, 512x512 (can go up to 768 for wide or tall), Restore faces on (using CodeFormer weight 1.0 in Settings), Hires fix on to 1024x1024 (double the original size) with your preferred upscaler and Denoise 0.4, CFG 7. Then send to img2img!

    2. img2img: DPM++ SDE Karras with 20 steps, Restore faces on (CodeFormer weight 1), rescale to 1536x1536, CFG 7, Denoise between 0.3-0.7 (based on how much you want to improve the image), SAME PROMPT! Send to extras when happy with the img2img/inpaint results.

    3. extras: Scale to x3 (final will be 4608x4608; I suggest going lower, I did x3 just for the samples here). Upscaler 1 - FatalAnime 4x, Upscaler 2 - SwinIR 4x with 0.10 visibility, GFPGAN 0.10 visibility, CodeFormer 0.10 visibility.
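
    If you would rather script this workflow, here is a minimal sketch of the same three steps through the AUTOMATIC1111 webui API (assumes the webui is running locally with --api; the prompt, address, hires upscaler, and installed upscaler model names are assumptions, so substitute your own):

    import base64
    import requests

    BASE = "http://127.0.0.1:7860"  # assumed local webui address (started with --api)
    PROMPT = "<lora:ColoredSkinv309:0.7>(purple skin, woman), 1girl, solo"  # example prompt
    NEGATIVE = "(purple background, purple eyes, purple clothes, purple hair)"

    # Step 1 - txt2img at 512x512 with Hires fix to 1024x1024, Denoise 0.4.
    # (The CodeFormer weight is a webui Setting, not part of this payload.)
    r = requests.post(f"{BASE}/sdapi/v1/txt2img", json={
        "prompt": PROMPT,
        "negative_prompt": NEGATIVE,
        "sampler_name": "DPM++ SDE Karras",
        "steps": 20,
        "width": 512,
        "height": 512,
        "cfg_scale": 7,
        "restore_faces": True,
        "enable_hr": True,
        "hr_scale": 2,                    # 512 -> 1024
        "hr_upscaler": "Latent",          # swap in your preferred upscaler
        "denoising_strength": 0.4,
    })
    image = r.json()["images"][0]         # base64-encoded PNG

    # Step 2 - img2img rescale to 1536x1536, SAME PROMPT, Denoise 0.3-0.7.
    r = requests.post(f"{BASE}/sdapi/v1/img2img", json={
        "init_images": [image],
        "prompt": PROMPT,
        "negative_prompt": NEGATIVE,
        "sampler_name": "DPM++ SDE Karras",
        "steps": 20,
        "width": 1536,
        "height": 1536,
        "cfg_scale": 7,
        "restore_faces": True,
        "denoising_strength": 0.5,        # pick within 0.3-0.7
    })
    image = r.json()["images"][0]

    # Step 3 - extras upscale x3 (1536 -> 4608) with two upscalers blended.
    r = requests.post(f"{BASE}/sdapi/v1/extra-single-image", json={
        "image": image,
        "upscaling_resize": 3,
        "upscaler_1": "4x_FatalAnime",    # assumed name of the installed model
        "upscaler_2": "SwinIR 4x",
        "extras_upscaler_2_visibility": 0.10,
        "gfpgan_visibility": 0.10,
        "codeformer_visibility": 0.10,
    })
    with open("final.png", "wb") as f:
        f.write(base64.b64decode(r.json()["image"]))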

    If anyone is curious about my version names: v{version}.{trainingEpochs} (e.g. v3.09 = version 3, trained for 9 epochs).

    FAQ