CivArchive

    Intro:

    A style LoCon trained on images of pony-based models collected from Civitai, selected by "most collected" and "most reactions".

    This LoRA is not intended to reproduce any specific artist's style or technique. It MIGHT reflect community taste and, to some extent, what makes a picture visually attractive. The style may shift subtly depending on the prompt.

    Usage:

    Versions before V2 do not have specific trigger words. Please use the quality tags provided with the corresponding model.

    For V3 and later versions, the following tags were trained:

    positive:

    masterpiece, best quality, very aesthetic

    negative:

    worst quality, low quality, displeasing

    You can edit your prompts on top of these tags.
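    For script-based workflows, these tags can be prepended programmatically. A minimal sketch in plain Python (the helper name and structure are illustrative, not part of the model):

    ```python
    # Quality tags this LoRA was trained with (V3 and later).
    POSITIVE_TAGS = ["masterpiece", "best quality", "very aesthetic"]
    NEGATIVE_TAGS = ["worst quality", "low quality", "displeasing"]

    def build_prompts(subject: str, extra_negative: str = "") -> tuple:
        """Prepend the trained quality tags to a user prompt pair."""
        positive = ", ".join(POSITIVE_TAGS + [subject])
        negative = ", ".join(NEGATIVE_TAGS + ([extra_negative] if extra_negative else []))
        return positive, negative

    pos, neg = build_prompts("1girl, silver hair, night city")
    # pos == "masterpiece, best quality, very aesthetic, 1girl, silver hair, night city"
    ```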

    Dataset Versions:

    v6:

    Added over 500 new images, some of them selected from Flux-generated pictures. Removed some older images that I judged to be of lower quality.

    The dataset now exceeds 3,000 images in total, with more than 20 concepts manually enhanced/edited across the 6 versions of the dataset.

    The model’s rank has been increased as well.

    v5.9:

    The model's performance is not as expected, but I believe the images in the training set themselves are fine. I plan to adjust the tags manually and see how the results change.

    Update 2025/1/3:

    Manually updated some tags, but they turned out to have little to do with brightness and color. My tentative guess is that the issue is related to noise offset.

    v5:

    The dataset has been expanded to 2,154 images, with around 1,000 Pony images as the primary training target.

    Although V-pred models can use LoRAs trained on Eps-pred models, the output quality drops significantly. This version is therefore trained separately on the two model types.

    Recent versions of NoobAI exhibit noticeable compression artifacts, but Danbooru's 'jpeg artifact' tag doesn't seem to work effectively. To address this, about 30 typical, visually obvious images were specifically selected as negative examples.

    A phenomenon has been observed: Pony v6 and NoobAI tend to generate a triangular lift at the roots of hairstyles with sidelocks. On Danbooru this lift is sometimes tagged as 'hair intakes' or 'curtained hair', but Pony applies the structure to every character. This is a key reason why hairstyles generated by Pony often don't match the intended design during character training. A similar issue was observed with NoobAI. My guess is that this feature is prevalent in a dataset outside of Danbooru and was not tagged correctly.

    The images in the dataset were filtered, and about two-thirds were correctly annotated. Currently, adding 'hair intakes' to the prompt might somewhat alleviate this issue, but I haven’t found a complete fix for it yet.
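    The caption-side fix described above (making sure affected images actually carry the tag) can be scripted. A hypothetical sketch, assuming the common kohya-style layout of one comma-separated `.txt` caption file per image:

    ```python
    from pathlib import Path

    def ensure_tag(caption_path: Path, tag: str = "hair intakes") -> bool:
        """Append `tag` to a comma-separated caption file if it is missing.

        Returns True if the file was modified. This mirrors the manual
        annotation pass described above; the path layout is an assumption.
        """
        tags = [t.strip() for t in caption_path.read_text(encoding="utf-8").split(",") if t.strip()]
        if tag in tags:
            return False
        tags.append(tag)
        caption_path.write_text(", ".join(tags), encoding="utf-8")
        return True

    # Example pass over a dataset folder (folder name is illustrative):
    # for p in Path("dataset").glob("*.txt"):
    #     ensure_tag(p)
    ```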

    v4:

    Partially optimized the dataset tags. Trained on NoobAI Epsilon-pred v1.

    Pony-based models have a strong tendency to generate earrings, ear piercings, and other ear accessories, sometimes breaking the ear structure of characters. I reorganized the related tags, cropped and manually edited dataset images with minor structural issues, and removed images that were too difficult to fix.

    v3:

    Dataset expanded to 1,429 images, including both positive and negative examples.

    774 of the images represent the target style.

    Trained on Illustrious v0.1.

    v2:

    Dataset expanded to 374 images. Use the quality and aesthetic tags that come with the base model to control generation quality.

    v1:

    Trained on 224 images from Civitai, with 393 images for regularization.

    Two versions were trained, based on Animagine v3.1 and Pony v6.

    test ver.4:

    It is a little underfitted but still works. I found that the quality and aesthetic tags (best quality, masterpiece, very aesthetic, ...) that Animagine v3.1 was trained with can change the art style this LoRA generates, so leave the quality tags empty with this test version. This will be fixed in the next test version.

    Comments (43)

    ABandoncat · Jan 21, 2025 · 15 reactions

    For v-pred models, should the LoadLora node be placed before or after the ModelSamplingDiscrete node?

    Dajiejiekong
    Author
    Jan 21, 2025

    I don't know... I've never used that node in the workflows I built.

    Judging by common sense, it ought to go after the LoRA, since a LoRA records how training changed the checkpoint weights: checkpoint + LoRA = a new checkpoint.

    I just tested it: with identical parameters, the image generated with this node appears to be the same as without it. I don't think it needs to be added.

    Dajiejiekong
    Author
    Jan 21, 2025

    https://civitai.com/posts/11920483?returnUrl=%2Fmodels%2F856285%2Fpony-peoples-works These two images show no difference. You can drag them into ComfyUI to inspect the workflow.

    ABandoncat · Jan 24, 2025

    @Dajiejiekong Thanks. This node comes from the NOOB V-pred sample workflow; it seems the order really doesn't matter.

    openmn793 · Mar 13, 2025 · 1 reaction

    Older versions of ComfyUI did require this node; I tested that when I first started using NoobAI. Current ComfyUI, however, can recognize NoobAI automatically. The images are identical with or without the node, so the ordering question no longer matters!

    openmn793 · Mar 13, 2025 · 1 reaction

    One more thing: the CLIP Set Last Layer node isn't needed either!

    19818maokegua · Mar 26, 2025

    @openmn793 Hmm... a question: do SDXL-family models no longer need clip_skip -2? Is that a feature of the new ComfyUI?

    openmn793 · Mar 27, 2025 · 2 reactions

    @19818maokegua I don't know the details, but in practice the images are identical with or without a "CLIP Set Last Layer" node at -2. You can test it yourself.

    19818maokegua · Mar 28, 2025

    @openmn793 Thanks!

    urbanlegendwiki · Feb 8, 2025 · 19 reactions

    What tool did you use to train this LoRA?

    Dajiejiekong
    Author
    Feb 9, 2025 · 1 reaction

    A Chinese-language UI based on the kohya scripts. https://github.com/Akegarasu/lora-scripts

    Beez111 · Feb 13, 2025 · 33 reactions

    I can't use this with Illustrious checkpoints anymore...
    Which is really sad, because it's so good... :'(

    Dajiejiekong
    Author
    Feb 15, 2025

    When I heard this news, they had already backed down and released the model weights... But I guess LoRAs trained on Illustrious v0.1 or NoobAI won't work perfectly on v1.0.

    Beez111 · Feb 15, 2025

    @Dajiejiekong Yah :v is normal now, lucky me

    amazingbeauty · Feb 20, 2025

    Tell me please, what can this even do, in short simple words?

    Beez111 · Feb 20, 2025 · 2 reactions

    @amazingbeauty It helps the image to have a better anatomy.

    For example, the limbs will be in the right position, not blurred or smudged at the joints.

    Or the number of fingers is clearer (not sure but better when used) and the fingers look better.

    ishimarukohaku576 · Jan 17, 2026

    @Beez111 It is even more blurred with it honestly. Especially fingers are not great.

    Dajiejiekong
    Author
    Mar 9, 2025 · 32 reactions

    Civitai has now listed NoobAI as a separate model series. Under CivitAI's current mechanism, that might make some checkpoints unable to use this LoRA online. In the next version (v7) I will create separate LoRAs for NoobAI and Illustrious. However, the dataset preparation is not yet complete; with model training and testing it will take about 1-2 more weeks.

    Dajiejiekong
    Author
    Mar 20, 2025

    The Noob version alone has already taken five training runs, and some technical details are still unclear. Exhausting.

    Dajiejiekong
    Author
    Mar 20, 2025

    A100s on AutoDL are so hard to grab.

    MadImpact · Apr 26, 2025

    Imma try that V7 for illustrious since so many images were added and some removed to the Noob version since IL version 3 was released, love your work, ty so much, I'm actually using a Rouwei model right now tho, if you ever go that route that'd be great

    Dajiejiekong
    Author
    May 3, 2025

    @MadImpact This one? https://civitai.com/models/950531/rouwei
    According to the description, this model seems to be a very large-scale finetuning project. It's strange that the community doesn't seem very enthusiastic about discussing it. I will try to train a LoRA on this checkpoint in v8. But there were many issues during the training of v7, so v8 will take quite a while, as the dataset still needs organizing before training. Since Rouwei is fine-tuned from Illustrious, and judging by its release date it's likely based on Illustrious v0.1, you could try using the v7 LoRA trained on Illustrious v0.1. It MIGHT work.

    MadImpact · May 8, 2025

    @Dajiejiekong been busy but yeah that's why I was so happy to see u released v7 for illus, will definitely try v7 with rouwei but the WAI one here https://civitai.com/models/1400967/wai-nsfw-branch-rouwei?modelVersionId=1583612 not base rouwei, anyhow wish u GL with V8, I can imagine the time and effort you put into organizing datasets and creating this beauty, thanks again for your hard work we love what this amazing lora can do.

    SDLove · Mar 23, 2025 · 17 reactions

    This Lora works with Pony?

    Dajiejiekong
    Author
    Mar 23, 2025 · 1 reaction

    no......

    SD_AI_2025 · Aug 27, 2025 · 6 reactions

    Might be difficult to get through life with a quarter of a brain.

    Twostep · Mar 24, 2025 · 46 reactions

    What's the purpose of this? I don't even get it from the description. I see in the comments that it's good for night scenes and dark shades. What is its main purpose? Plus there are so many different versions.

    Dajiejiekong
    Author
    Mar 26, 2025 · 7 reactions

    ...to generate a pony-like picture, maybe?

    Twostep · Mar 26, 2025

    @Dajiejiekong Well, for starters, I can see the need for a NoobAi.

    I mean, what's this Lora thing gonna do? Anime style?

    Hapse · Mar 27, 2025 · 7 reactions

    read the description 🙃

    This LoRA is not intended to reproduce any specific artist's style or technique. It MIGHT reflect community taste and, to some extent, what makes a picture visually attractive. The style may shift subtly depending on the prompt.

    Twostep · Mar 27, 2025 · 41 reactions

    I read the description before I posted here. This description is about as informative as a hand. You can do things with your hand.

    skyraker0635 · May 24, 2025 · 11 reactions

    It makes your work "more beautiful". It was trained on the "most collected" and "most liked" works, so your output will be closer to those works. Chosen by the public.

    Gr1ef · Jul 17, 2025 · 9 reactions

    "Trained on pony-based model images collected from Civitai site with "most collections" and "most reactions"."

    I take that to mean it's trained on pictures people have made on this site that got the most thumbs up. Kind of a best of the best filter, and maybe you get lucky without having to ask for complicated stuff like 'pastel highlights with an f16 aperture, short focus portrait at dusk'.

    Prankstir · Feb 8, 2026 · 1 reaction

    It's the "Spirit of the Season", reflecting the trends and tastes of the Civitai zeitgeist.

    NeoGEGE · Apr 17, 2025 · 15 reactions

    Is this a pony LoRA that can be used on Illustrious-based models?

    Dajiejiekong
    Author
    Apr 17, 2025

    The base model is noted in each version's name. Only V3 was trained on Illustrious v0.1; all later versions were trained on the NoobAI family.

    deitychaser · Oct 16, 2025 · 4 reactions

    Nice work. Question: How are the images captioned? Did you use the captions that were used for the generation or did you tag them with wdtagger?

    Dajiejiekong
    Author
    Oct 17, 2025 · 3 reactions

    I used taggers. Since this project has been running for quite a while, I've changed tagger versions several times, but all of them were trained on Danbooru tags. I also manually adjusted around 20 tags based on training needs (for example, removing "nose" and "lips", and changing the labeling range of "realistic").
    Many of the tags users write on Civitai are not standard Danbooru tags; they tend to be things like "sexy poses" or "beautiful woman". In many cases the base model doesn't really understand these words, so they don't show up visually in the generated images. It's also possible that some of these tags have low weight and get ignored during generation. Therefore these prompts can't be used directly for training.
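    The tag clean-up described here (dropping manually blocklisted tags and discarding free-form user phrases that are not standard Danbooru tags) could be sketched like this; the blocklist entries come from the comment above, while the stand-in vocabulary is purely illustrative:

    ```python
    # Tags removed for training needs (per the comment above); extendable.
    BLOCKLIST = {"nose", "lips"}

    def clean_tags(raw_tags, danbooru_vocab):
        """Keep only known Danbooru tags, minus the manual blocklist."""
        out = []
        for t in raw_tags:
            t = t.strip().lower()
            if t and t in danbooru_vocab and t not in BLOCKLIST:
                out.append(t)
        return out

    vocab = {"1girl", "smile", "nose", "hair intakes"}  # stand-in vocabulary
    print(clean_tags(["1girl", "sexy poses", "nose", "smile"], vocab))
    # ['1girl', 'smile']
    ```

    In practice the vocabulary would be loaded from a Danbooru tag dump rather than hard-coded.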

    hiyoriyamanai379 · Dec 14, 2025 · 7 reactions

    This LoRA is really impressive (I'm definitely not saying it's the NSFW that's impressive). I'm very curious how the author optimized the training data and tags, because finding a large amount of 2.5D-style image material isn't easy, and the model understands most input terms (different characters, outfits, poses, expressions, etc.), so there must be a lot to consider when setting up the tags. My question may not be very professional, but I've tried making a LoRA myself with the wd14 tagger and kohya, and the results... let's just say they fell short of expectations ;) The characters' eyes would go crooked, or no matter what prompt I gave I couldn't change the character's outfit. So I'm curious how the author makes LoRAs.

    Dajiejiekong
    Author
    Dec 15, 2025

    The images come from Civitai. After collecting them I edit them by hand, tag them with WD, and then edit the tags as needed. I'm also still training with the kohya scripts, which works fine, though I may switch platforms before long (they've mostly stopped updating, and SDXL will be superseded soon).
    As for the training set, this model is quite different from a character LoRA like yours. Since v4 it has been a 3,000+ image dataset, and the v9 currently in training is over 7,000, containing a dozen or so sub-datasets with different functions. The training target is also fairly complex, so the training strategy is necessarily different from an ordinary character LoRA. It's hard to explain in a few sentences.
    The problems you describe are most likely caused by overfitting. Try searching Bilibili for character LoRA tutorials; I don't train characters much myself.

    hiyoriyamanai379 · Dec 16, 2025

    Thanks for taking time out of your busy schedule to reply. With that many images plus manual editing, the workload must be huge. Out of curiosity, what class of GPU do you use? Mine is a laptop Nvidia RTX 5060 with only 8 GB of VRAM; training 70 images for just 5,000 steps takes nearly three whole days, and each batch can only hold one image. I'm curious how your hardware is configured.

    Dajiejiekong
    Author
    Dec 16, 2025 · 1 reaction

    @hiyoriyamanai379 I rent GPUs. The versions on this page were all trained on a 4090; later I used an A100, and now I'm on a Pro 6000. For other tasks I've also used 30-series and 50-series cards. I actually can't use up 80 or 96 GB of VRAM anyway: in my tests, large batch sizes above about 10 don't learn fine details well. Upgrading hardware is mainly about bigger cards computing faster.
    A batch size of 1 really is slow; setting aside other optimizations, the raw training speed alone is one to two times slower than a medium batch size (around 10). There are other speedups too: xformers, SDPA, caching latents, and so on. With well-tuned training parameters you can also save some training steps.
    That said, I wouldn't force training on a laptop GPU. On a portable device you risk burning out not just the card but possibly other parts of the motherboard as well. For a project of that size, you're better off using the online training services on Civitai or TusiArt; their parameters are already tuned, and finding equivalent settings yourself could take a long time.
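    For reference, the speedups mentioned (attention optimization, cached latents, batch size) map onto kohya sd-scripts options roughly as below. This is a sketch under the assumption of a recent sd-scripts version, not the author's actual configuration; verify key names against your installed version:

    ```toml
    # Fragment of a kohya sd-scripts training config (a sketch, not the
    # author's settings; option names mirror the sd-scripts CLI flags).
    train_batch_size = 8          # medium batch; very large batches hurt fine detail
    xformers = true               # memory-efficient attention (or: sdpa = true)
    cache_latents = true          # pre-encode images once, skip the VAE during training
    gradient_checkpointing = true # trade compute for VRAM on small cards
    mixed_precision = "fp16"
    ```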

    Tsubaki21 · Feb 26, 2026 · 2 reactions

    The magical Lora that improves everything idk how

    Tsubaki Kunoichi 🥷