CivArchive
    [Qwen] Rebalance v1.0 - v1.0 lora r16
    NSFW

    Example Workflow:

    https://civarchive.com/models/2065313/rebalance-v1-example-workflow

    Thanks to WANdalf for helping extract the LoRAs for Nunchaku usage.

    Model Overview

    Rebalance is a high-fidelity image generation model trained on a curated dataset comprising thousands of cosplay photographs and handpicked, high-quality real-world images. All training data was sourced exclusively from publicly accessible internet content, and the dataset explicitly excludes any NSFW material.

    The primary goal of Rebalance is to produce photorealistic outputs that overcome common AI artifacts—such as an oily, plastic, or overly flat appearance—delivering images with natural texture, depth, and visual authenticity.

    Training Strategy

    Training was conducted in multiple stages, broadly divided into two phases:

    1. Cosplay Photo Training
      Focused on refining facial expressions, pose dynamics, and overall human figure realism—particularly for female subjects.

    2. High-Quality Photograph Enhancement
      Aimed at elevating atmospheric depth, compositional balance, and aesthetic sophistication by leveraging professionally curated photographic references.

    Captioning & Metadata

    The model was trained using two complementary caption formats: plain text and structured JSON. Each data subset employed a tailored JSON schema to guide fine-grained control during generation.

    • For cosplay images, the JSON includes:

      {
        "caption": "...",
        "image_type": "...",
        "image_style": "...",
        "lighting_environment": "...",
        "tags_list": [...],
        "brightness": number,
        "brightness_name": "...",
        "hpsv3_score": score,
        "aesthetics": "...",
        "cosplayer": "anonymous_id"
      }

    Note: Cosplayer names are anonymized (using placeholder IDs) solely to help the model associate multiple images of the same subject during training—no real identities are preserved.
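    As an illustration only (the function name and ID format are assumptions, not the author's actual pipeline), this kind of anonymization can be done with a deterministic hash, so the same subject always maps to the same placeholder:

```python
import hashlib

def anonymize(name: str, salt: str = "rebalance") -> str:
    """Map a real cosplayer name to a stable placeholder ID.

    The same input always yields the same ID, so the model can still
    associate multiple images of one subject during training, while no
    real identity is stored in the captions.
    """
    digest = hashlib.sha256((salt + name).encode("utf-8")).hexdigest()
    return f"cosplayer_{digest[:8]}"
```

    Because the mapping is one-way, the original names cannot be recovered from the captions.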

    • For high-quality photographs, the JSON structure emphasizes scene composition:

      {
        "subject": "...",
        "foreground": "...",
        "midground": "...",
        "background": "...",
        "composition": "...",
        "visual_guidance": "...",
        "color_tone": "...",
        "lighting_mood": "...",
        "caption": "..."
      }

    In addition to structured JSON, all images were also trained with plain-text captions and with randomized caption dropout (i.e., some training steps used no caption or partial metadata). This dual approach enhances both controllability and generalization.
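    A data-loading step of this kind could be sketched as follows; the function name and dropout probabilities are illustrative assumptions, not the model's actual training settings:

```python
import json
import random

def pick_caption(meta: dict, plain_caption: str, rng: random.Random,
                 p_drop: float = 0.1, p_plain: float = 0.45) -> str:
    """Choose the conditioning text for one training step.

    With probability p_drop the caption is dropped entirely (empty
    string, an unconditional step); otherwise the step uses either the
    plain-text caption or the full structured JSON.
    """
    r = rng.random()
    if r < p_drop:
        return ""                                   # caption dropout
    if r < p_drop + p_plain:
        return plain_caption                        # plain-text caption
    return json.dumps(meta, ensure_ascii=False)     # structured JSON

rng = random.Random(0)
meta = {"caption": "a cosplayer on stage", "image_style": "photo"}
samples = [pick_caption(meta, "a cosplayer on stage", rng) for _ in range(1000)]
```

    Mixing all three cases is what gives the model both fine-grained JSON controllability and robustness to short, plain prompts.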

    Inference Guidance

    • For maximum aesthetic precision and stylistic control, use the full JSON format during inference.

    • For broader generalization or simpler prompting, plain-text captions are recommended.
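    For instance, a full-JSON inference prompt following the cosplay schema above might look like this (all field values are invented for illustration):

```python
import json

# Illustrative prompt built from the cosplay JSON schema in the model card.
prompt = {
    "caption": "a woman in a silver sci-fi costume standing in a neon-lit alley",
    "image_type": "photograph",
    "image_style": "cosplay",
    "lighting_environment": "night, neon signs, wet pavement reflections",
    "tags_list": ["cosplay", "neon", "night", "full body"],
    "brightness": 0.35,
    "brightness_name": "dim",
    "hpsv3_score": 9.0,
    "aesthetics": "very aesthetic",
    "cosplayer": "cosplayer_0001",
}
prompt_text = json.dumps(prompt, ensure_ascii=False)  # pass as the text prompt
```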

    Technical Details

    All training was performed using lrzjason/T2ITrainer, a customized extension of the Hugging Face Diffusers DreamBooth training script. The framework supports advanced text-to-image architectures, including Qwen and Qwen-Edit (2509).

    Previous Work

    This project builds upon several prior tools developed to enhance controllability and efficiency in diffusion-based image generation and editing:

    • ComfyUI-QwenEditUtils: A collection of utility nodes for Qwen-based image editing in ComfyUI, enabling multi-reference image conditioning, flexible resizing, and precise prompt encoding for advanced editing workflows.
      🔗 https://github.com/lrzjason/Comfyui-QwenEditUtils

    • ComfyUI-LoraUtils: A suite of nodes for advanced LoRA manipulation in ComfyUI, supporting fine-grained control over LoRA loading, layer-wise modification (via regex and index ranges), and selective application to diffusion or CLIP models.
      🔗 https://github.com/lrzjason/Comfyui-LoraUtils

    • T2ITrainer: A lightweight, Diffusers-based training framework designed for efficient LoRA (and LoKr) training across multiple architectures—including Qwen Image, Qwen Edit, Flux, SD3.5, and Kolors—with support for single-image, paired, and multi-reference training paradigms.
      🔗 https://github.com/lrzjason/T2ITrainer

    These tools collectively establish a robust ecosystem for training, editing, and deploying personalized diffusion models with high precision and flexibility.


    Comments (27)

    1639992813 · Oct 23, 2025
    CivitAI

    What exactly do these LoRAs do? What is the difference between the three?

    xiaozhijason
    Author
    Oct 23, 2025 · 1 reaction

    The LoRAs were extracted from the difference between Rebalance and the official model. The three differ only in rank; a larger rank stays closer to Rebalance.
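    Extracting a LoRA from the difference between two checkpoints is commonly done with a truncated SVD per weight matrix; a minimal sketch of the idea (not the exact script used for these LoRAs):

```python
import numpy as np

def extract_lora(w_base: np.ndarray, w_tuned: np.ndarray, rank: int):
    """Factor the weight delta into low-rank LoRA matrices A and B.

    W_tuned ≈ W_base + B @ A, with B of shape (out, rank) and A of
    shape (rank, in). A higher rank captures more of the delta, so the
    result stays closer to the fine-tuned model.
    """
    delta = w_tuned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    b = u[:, :rank] * s[:rank]   # (out, rank), singular values folded in
    a = vt[:rank, :]             # (rank, in)
    return a, b
```

    When the true delta has rank at or below the chosen `rank`, `w_base + b @ a` reconstructs the fine-tune exactly; otherwise the SVD truncation gives the best rank-`rank` approximation.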

    1639992813 · Oct 23, 2025

    @xiaozhijason Thanks!

    kk3dmax · Oct 23, 2025 · 1 reaction

    Have you considered using Chinese for the JSON keys as well?

    xiaozhijason
    Author
    Oct 23, 2025 · 1 reaction

    No plans for that.

    2TheMax · Oct 24, 2025

    Absolutely great!!! 👍

    shinonomeiro · Oct 26, 2025

    Thanks a lot for releasing this fine-tune! The sample images look truly impressive. Again, this demonstrates the enormous potential of Qwen-Image as a foundational model, although I wish the community support was as strong as Flux's. But we're slowly getting there!

    Lately I've also been looking into training my own LoRA, using HD stock photos and what I can gather from X/Twitter. However, contrasting with your approach, I am deliberately avoiding amateur cosplay photos as much as possible, as those tend to be airbrushed to hell 😂 and overloaded with filters, although the variety of poses and outfits probably make up for the loss of detail.

    Would you mind sharing your settings and more insights on the process used to caption the dataset? How many images does it contain? How much time did it take to train the full model, and on what hardware? At what cost?

    Thanks in advance!

    xiaozhijason
    Author
    Oct 29, 2025

    Training produced multiple LoRAs with different settings. Captioning was done in multiple passes: HPSv3 scored the images and low-scoring ones were filtered out, a Qwen 72B model produced the base captions, and JoyCaption Beta supplied the detailed captions.
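    The scoring-and-filtering step could be sketched as follows; the threshold value and field names are assumptions, not the author's actual settings:

```python
def filter_by_score(records: list[dict], min_score: float = 7.0) -> list[dict]:
    """Keep only images whose aesthetic score clears the threshold.

    Each record is assumed to carry an 'hpsv3_score' field, as in the
    JSON captions shown in the model card above.
    """
    return [r for r in records if r.get("hpsv3_score", 0.0) >= min_score]

records = [
    {"caption": "sharp portrait", "hpsv3_score": 9.2},
    {"caption": "blurry snapshot", "hpsv3_score": 4.1},
]
kept = filter_by_score(records)  # only the high-scoring record survives
```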

    EricRollei21 · Oct 26, 2025

    I'm also curious about the training process. Can it be as simple as training LoRAs and merging them in?

    xiaozhijason
    Author
    Oct 29, 2025

    Basically, yes.

    traithanhnam90 · Oct 27, 2025 · 3 reactions

    Hope someone will release a GGUF version of this model.

    HomeGod · Nov 3, 2025

    This model is really good, better than the official one, but I wonder if there is a way to make the faces more varied.

    xiaozhijason
    Author
    Nov 3, 2025

    You can try changing the cosplayer name in the JSON prompt.

    HomeGod · Nov 3, 2025

    @xiaozhijason Do I type the name of the person I want, e.g. Yui Hatano?

    xiaozhijason
    Author
    Nov 3, 2025

    @HomeGod Chinese or English both work, as long as it is a person's name.

    Tetsuoo · Nov 4, 2025 · 2 reactions

    It looks like you're already working on Rebalance 2.1? Let's wait and see.

    xiaozhijason
    Author
    Nov 4, 2025 · 1 reaction

    Not yet.

    isokey · Nov 4, 2025

    I have a question: when training a LoRA, is it better to base it on Rebalance or on the official Qwen model?

    If based on Rebalance, do I need to write JSON captions as described in the post, or is a same-name .txt file enough, as in traditional training?

    xiaozhijason
    Author
    Nov 4, 2025

    I recommend the official model.

    isokey · Nov 5, 2025

    @xiaozhijason Got it, thanks!

    MouTaai · Nov 4, 2025
    CivitAI

    Zhi, is there a male LoRA? Why not make one for fun... (I prefer women myself.) Your aesthetic sense is excellent, so a male version would surely be good too.

    xiaozhijason
    Author
    Nov 4, 2025

    No time and no resources.

    kandy_fifa912 · Nov 6, 2025

    I think the caption node in the portrait template of your workflow has a problem; shouldn't it be changed to a string?

    xiaozhijason
    Author
    Nov 6, 2025

    Either is fine; it just wraps a template to output the preconfigured JSON.

    gemstonebro · Nov 10, 2025

    From very limited testing it seems quite nice! A different feel, and it needs quite different prompts compared to Jib Mix or the official Qwen checkpoint.

    yzh9376216 · Nov 22, 2025

    After long, heavy use, I can say this is the best Qwen Image model. In both prompt adherence and the texture of people and photos it is without doubt the best, and LoRA support is also very good.

    student6688 · Dec 22, 2025

    It is indeed better than the original model.

    Checkpoint · Qwen

    Details

    Downloads: 358
    Platform: CivitAI
    Platform Status: Available
    Created: 10/23/2025
    Updated: 4/28/2026
    Deleted: -