CivArchive
    Jiraikei makeup / 地雷系メイク - v1.0
    NSFW

    Description

    This LoRA is crafted to create realistic facial makeup inspired by the Japanese "Jirai-kei" aesthetic, characterized by pale skin, bold eye makeup, and soft, melancholic tones. It's designed to work seamlessly with Hunyuan Video, allowing for smooth and consistent animations that are perfect for storytelling or video creation.

    For even greater flexibility, this LoRA can be combined with other LoRAs. I've invested a lot of time and effort into developing this model, and your support—whether through a like or a tip—would mean a lot. Thank you for your encouragement!

    Usage

    Trigger Words
    v1.0 does not require a specific trigger word. However, including keywords like "Japanese" or "jirai-kei makeup" in the prompt will help achieve optimal results.

    Recommendation
    I highly recommend using a v2v workflow to upscale low-resolution outputs into high-resolution animations (I personally use this workflow: v2v Workflow on Civitai). This helps enhance the quality and detail of the final result.

    Strength Adjustment
    While higher LoRA strength settings can emphasize the desired aesthetic effects, they might cause issues with video continuity. For the best balance between appearance and consistency, I recommend adjusting the strength to around 0.6 to 0.7.
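As a sketch of what this strength setting looks like outside ComfyUI, assuming a diffusers-style pipeline (the `attach_jiraikei` helper name is mine, and the actual pipeline construction, e.g. `HunyuanVideoPipeline.from_pretrained`, is omitted; `load_lora_weights` and `set_adapters` are the standard diffusers LoRA calls):

```python
# Recommended midpoint of the 0.6-0.7 strength range from this page.
RECOMMENDED_STRENGTH = 0.65

def attach_jiraikei(pipe, path="jiraikei_v1.safetensors",
                    strength=RECOMMENDED_STRENGTH):
    """Load the LoRA and keep its weight below 0.7 to preserve
    video continuity (higher values can cause cut-like transitions)."""
    pipe.load_lora_weights(path, adapter_name="jiraikei")
    pipe.set_adapters(["jiraikei"], adapter_weights=[strength])
    return pipe
```

In ComfyUI, the same value goes into the LoRA loader node's strength field instead.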

    Version History

    v1.0

    • Initial release with training focused on achieving the jirai-kei aesthetic.

    • Designed for facial makeup emphasis with realistic, smooth results.

    • Limited dataset and no dedicated trigger word.

    v1.1

    • Added additional training images and introduced the trigger word "jiraikei" for easier and more consistent usage.

    • Slightly improved the model's ability to generate more realistic and detailed visuals.

    • Note: Using higher strength values may cause a loss of video continuity, resulting in abrupt scene transitions that feel like cut edits. For optimal results, it is recommended to keep the strength settings low (around 0.6 to 0.7), as with v1.0.

    • Note: There is no significant difference in the output quality between v1.0 and v1.1.

    Upcoming Versions (v1.2 or v2.0)
    For the next update, I plan to:

    • Train the model with a different dataset to achieve even better results.

    • Improve video continuity so that higher strength settings do not easily disrupt the flow of animations.

    Description

    v1.0 is an experimental version. As this is the first LoRA I have ever created, there might be areas where fine-tuning is not perfect. I plan to develop more effective and refined LoRAs in the future.

    FAQ

    Comments (13)

    yuinyan490363 · Jan 9, 2025 · 1 reaction

    What software should be used to run this?

    lost_moon · Jan 9, 2025

    ComfyUI + a Hunyuan checkpoint

    yuinyan490363 · Jan 9, 2025 · 3 reactions

    @lost_moon Could you please create a Pony or Illustrious LoRA? My computer cannot handle the video model.

    WhatTheGuy · Jan 9, 2025

    @yuinyan490363 You can also render just one frame; then it works like an image generator. Maybe that works for you.

    yue_liang · Jan 10, 2025 · 3 reactions

    This LoRA is amazing! Can you let me know how you made it? (A link to the steps you followed?)

    naiwizard (Author) · Jan 10, 2025 · 4 reactions

    Thank you for your kind words! I’m glad you liked the LoRA.

    When creating a LoRA for Hunyuan Video, it’s ideal to use a GPU with at least 48GB of VRAM if you’re training with image datasets, and 80GB if you’re using video datasets. My local machine has an RTX 4070 Ti with only 12GB of VRAM, so I used RunPod for the training process.

    I referred to RunPod’s blog and some articles available on Civitai during the training process. Let me know if you'd like more details or links to the resources I followed!

    yue_liang · Jan 11, 2025

    @naiwizard Thank you for your kind response! I was following "https://civitai.com/articles/9798/training-a-lora-for-hunyuan-video-on-windows". Apparently RunPod blog is using same components. I was using several images for training input, but are you using videos instead of images? If so, then are you using multiple short videos (like 2 seconds videos)? Hope you can share me some tutorial for that. Thank you again!

    naiwizard (Author) · Jan 12, 2025 · 1 reaction

    @yue_liang Apologies for missing your message earlier! For this LoRA dataset, I trained using images—about 40 in total—with the following settings: lr = 5e-5, num_repeats = 5, and epochs = 50. I didn’t do any pre-processing or additional tuning for the images. Regarding video-based training, I’m currently looking into it as well and experimenting. It seems that using short videos of around 1–2 seconds is a common approach. From what I’ve researched, when using videos, people often increase num_repeats and adjust lr if the number of video samples is limited. Hope this helps, and good luck with your experiments!
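For reference, the numbers from the comment above slot into a diffusion-pipe-style config roughly like this (a sketch only: key names vary between trainers, the optimizer type is my assumption since it is not stated in the thread, and the dataset path is a placeholder):

```toml
# config.toml (training run) -- settings quoted in the comment above
epochs = 50

[optimizer]
type = "adamw_optimi"   # assumption: optimizer choice not stated in the thread
lr = 5e-5

# dataset.toml -- ~40 images, each repeated 5 times per epoch
[[directory]]
path = "/data/jiraikei_images"   # placeholder path
num_repeats = 5
```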

    yue_liang · Jan 12, 2025 · 1 reaction

    @naiwizard Thank you!!

    6385751 · Jan 10, 2025 · 4 reactions

    Looking forward to your update!

    Here's a tip for achieving higher clarity when using the FastVideo model + LoRAs, since these usually lead to blurred outputs: take the blurred output from t2v and feed it into a v2v workflow with 0.5-0.7 denoise, lowering the LoRA value a little bit.
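A rough sketch of why that denoise range works, assuming a ComfyUI-style sampler where the denoise value effectively skips the early part of the step schedule (an approximation; exact behavior varies by sampler):

```python
def v2v_steps(total_steps, denoise):
    """Approximate ComfyUI-style behavior: a denoise of d runs only the
    last d * total_steps sampling steps, so the low-frequency layout of
    the blurred t2v input survives while fine detail is re-synthesized.
    Returns (skipped_steps, executed_steps)."""
    executed = int(total_steps * denoise)
    return total_steps - executed, executed
```

With 30 steps and 0.6 denoise, only the final 18 steps run, which is why the input video still steers the result.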

    11879 · Jan 10, 2025 · 3 reactions

    Great concept, and it generally works well. The only problem is that the video is not continuous for me: it cuts off in the middle and switches to another angle, like two videos spliced together. Any idea how to resolve this?

    naiwizard (Author) · Jan 10, 2025 · 5 reactions

    Thank you for your comment! I honestly thought the lack of video continuity was just a coincidence and didn't realize it might actually be caused by my LoRA until now. Your observation really helped me notice this issue. I'm currently investigating the cause and testing possible solutions, and I'll continue to work on improving this LoRA.

    11879 · Jan 10, 2025 · 3 reactions

    @naiwizard it seems to happen when the strength is higher. If I lower the strength to 0.6, then I don't see this anymore. Maybe the training videos you used might not be split per scene? In the Hunyuan video paper, they used PySceneDetect to split videos into single shot video clips. Keep up the good work!
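A minimal sketch of the pre-processing step the commenter describes, using PySceneDetect's `detect`/`ContentDetector` API to split training videos into single-shot clips; the 1-2 s clip range and the trimming policy come from earlier in this thread and are assumptions, not the author's confirmed pipeline:

```python
def shot_boundaries(video_path):
    """Return (start_sec, end_sec) for each detected shot.
    Requires `pip install scenedetect[opencv]`; imported lazily so the
    pure-Python helper below works without it installed."""
    from scenedetect import detect, ContentDetector
    scenes = detect(video_path, ContentDetector())
    return [(start.get_seconds(), end.get_seconds()) for start, end in scenes]

def usable_clips(spans, min_len=1.0, max_len=2.0):
    """Drop shots shorter than min_len seconds; trim longer shots to
    max_len from their start so every clip fits the training budget."""
    clips = []
    for start, end in spans:
        duration = end - start
        if duration < min_len:
            continue
        clips.append((start, start + min(duration, max_len)))
    return clips
```

Clips that span a scene cut would teach the model that abrupt transitions are normal, which matches the continuity issue observed at high LoRA strength.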

    LORA
    Hunyuan Video

    Details

    Downloads: 274
    Platform: CivitAI
    Platform Status: Available
    Created: 1/9/2025
    Updated: 5/15/2026
    Deleted: -

    Files

    jiraikei_v1.safetensors

    Mirrors