Model Introduction
Model Details
Developed by: ChenkinNoob team
Model Type: Diffusion-based text-to-image generative model
Fine-tuned from: Laxhar/noobai-XL-1.1
Sponsored by: 极逸 SOON (soonjy.com)
Independence Statement: ChenkinNoob team is an independent team; this project is not an official release of Laxhar Lab, but is built upon their open-source base model.
Participants and Contributors
Participants
Chenkin: Civitai | Huggingface
waww: Huggingface
leafmoone: Huggingface
heathcliff: Huggingface
tairitus: Huggingface
Ryan Corwin: Huggingface
chinoll: Huggingface
spawner: Huggingface
ChenkinNoob-XL-V0.5

Long time no see, everyone! It's been a while since our last release, and today we are thrilled to bring you the brand new major version trained on top of V0.2: ChenkinNoob-XL-V0.5. This update focuses on improving overall aesthetics, reducing common AI artifacts, and enhancing practical usability for game art production and illustration workflows.
Key Updates & Features:
~12M Dataset Optimization: We removed some style-divergent images (<1M), retained our core 9M open-source anime dataset (Danbooru, with data cutoff up to Jan 2026), and added 2.17M strictly filtered open-source game concept designs and high-quality Western datasets. The total dataset is now around 12 million images.
Custom Training Architecture: To efficiently handle this massive dataset, we developed an exclusive training script from scratch. This allows us to better utilize Hierarchical Dropout and Repeat Tag Resampling, significantly improving the model's generalization.
Exclusive 8-in-1 ControlNet: Released alongside Chenkin-UniControl-XL.
Highlight: FuseMode lets you mix multiple ControlNet conditions simultaneously (e.g., Lineart + Depth + Pose) in a single pass without weight interference. Requirement: to use this in ComfyUI, you must install our exclusive plugin: ComfyUI-Advanced-ControlNet.
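To give an intuition for what "mixing conditions without weight interference" means, here is a purely illustrative sketch of weighted fusion of per-condition residuals. This is our own toy illustration, not the actual FuseMode implementation, and the function name `fuse_residuals` is invented for this example:

```python
# Illustrative sketch only: combining multiple ControlNet condition residuals
# with independent per-condition weights. NOT the actual FuseMode code.
def fuse_residuals(residuals, weights):
    """Weighted sum of per-condition residual lists (one float list per condition)."""
    if len(residuals) != len(weights):
        raise ValueError("one weight per condition")
    fused = [0.0] * len(residuals[0])
    for res, w in zip(residuals, weights):
        for i, r in enumerate(res):
            fused[i] += w * r  # each condition contributes independently
    return fused
```

Because each condition contributes through its own weight, turning one condition down (e.g., Depth) does not rescale the others, which is the behavior the FuseMode description above implies.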

Recommended Settings:
Steps: 25 ~ 30 | CFG Scale: 5 ~ 6 | Sampler: Euler a
Positive Prompt:
masterpiece, best quality, newest, high resolution, aesthetic, excellent, year 2026
Negative Prompt:
nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands
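The recommended settings and prompt templates above can be collected into a small reusable config. The values come straight from this section; the dict layout and helper names (`build_prompt`, `build_negative`) are our own convenience, not part of any official API:

```python
# Recommended V0.5 settings from the model card, as a reusable config.
# Helper names below are ours; values mirror the section above.
RECOMMENDED = {
    "steps": 28,                   # recommended range: 25-30
    "cfg_scale": 5.5,              # recommended range: 5-6
    "sampler": "euler_ancestral",  # "Euler a"
}

QUALITY_TAGS = ["masterpiece", "best quality", "newest", "high resolution",
                "aesthetic", "excellent", "year 2026"]
NEGATIVE_TAGS = ["nsfw", "worst quality", "old", "early", "low quality", "lowres",
                 "signature", "username", "logo", "bad hands", "mutated hands"]

def build_prompt(subject_tags):
    """Prepend the recommended quality tags to subject-specific tags."""
    return ", ".join(QUALITY_TAGS + list(subject_tags))

def build_negative(extra=()):
    """Recommended negative prompt, optionally extended with extra tags."""
    return ", ".join(NEGATIVE_TAGS + list(extra))
```

For example, `build_prompt(["1girl", "outdoors"])` yields the full recommended positive prompt with your subject tags appended.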
Future Roadmap: To better achieve character and style transfer, our exclusive IP-Adapter (IPA) has officially entered the training schedule! Additionally, we are actively researching entirely new model architectures and hope to share these new developments with you later this year. Stay tuned!
V0.5 Acknowledgements:
Core Ecosystem: MIAOKA, silvermoong, nian__gao233, yuno779
Tech Advisors: @LAX, @Nebulae (Laxhar Lab)
Art Advisors: MLiang, BLACKDUO, Sdwang
Cover & Promo: poi, neko, MMX, and others
Discord Beta Testers: Bluvoll, Anzhc, Drac (Special Thanks), talan, Panchovix, itterative, Ryusho, Ly, Silvelter, and others
QQ Beta & Visuals: heathcliff, boundless, 2222k, suqingwei114514, 三费武装白色人种, vv--laov, and others
Community Support: 孤辰, 昊天, 米豆粒, 乾杯君 (Snke), 砚青, 双月丸‖soutsukimaru, 大尾立人间体, 青空, 喵九, and others. A huge thank you to all the friends who helped us throughout the training process!
--
ChenkinNoob-XL-V0.2

ChenkinNoob-XL-V0.2 is now live on Civitai. This update builds on the noobai-XL-1.1 backbone while extending the Danbooru training set through 2025-11-23. Compared to V0.1, you can expect:
- Sharper character fidelity: better pose stability, facial consistency, and accessory detail thanks to refreshed captions and alignment tweaks.
- Prompt responsiveness: more reliable control over outfits, lighting, and color palettes, especially in multi-character prompts.
- Open pipeline refinements: updated default prompts/negative prompts, plus cleaner metadata to simplify LoRA or LyCORIS fine-tuning.
Recommended baseline:
CFG 5–6, 25–30 steps, Euler a, 1024×1024-class resolutions
Positive: high resolution, aesthetic, excellent, medium resolution, year 2025, newest
Negative: low resolution, e621, Furry, old
New Tag System
V0.2 introduces additional tags to better surface high-quality generations and recent-era aesthetics.
| Tag Type | Label | Description |
| ----------- | ----------------- | --------------------------------------------- |
| Resolution | **High Resolution** | Image area ≥ 2048 px |
| Resolution | **Medium Resolution** | 1024 px < image area < 2048 px |
| Resolution | **Low Resolution** | Image area ≤ 1024 px |
| Aesthetic | **Aesthetic** | Top 100k most liked Danbooru posts |
| Aesthetic | **Excellent** | Top 1M most liked Danbooru posts |
| Era | **Year YYYY** | Explicit year tag (e.g., `year 2025`) |
| Era | **Range: Newest** | Aggregated label for 2022–2025 content |
| Era | **Range: Old** | Aggregated label for 2018 and earlier content |

These labels appear in metadata and recommended prompts so creators can quickly filter for the visual qualities they need.
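The resolution thresholds in the table map directly to a small classifier. Note the card gives the cutoffs in "px" while calling them "image area", so whether the measurement is the longer edge or total pixel area is not fully specified; the sketch below simply applies the stated thresholds to whatever measurement you pass in, and the function name is our own:

```python
# Sketch of the V0.2 resolution-tag thresholds from the table above.
# `size` is the card's "image area" measurement in px (exact definition
# is ambiguous in the source; this just applies the stated cutoffs).
def resolution_tag(size: int) -> str:
    """Map an image size measurement to its V0.2 resolution label."""
    if size >= 2048:
        return "high resolution"
    if size > 1024:
        return "medium resolution"
    return "low resolution"
```

For instance, a measurement of exactly 2048 falls in the high bucket and exactly 1024 in the low bucket, matching the `≥`/`≤` boundaries in the table.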
Stay tuned for V0.3 (ETA January 2026) and the forthcoming anime-focused SDXL all-in-one ControlNet tied to our 1.0 milestone. Join the conversation, share your renders, and let us know how V0.2 performs in your workflows!
V0.2 Contributors
Cover Art Providers (V0.2): poi, Sdwang, freewill, 一费白色人种, 十一, wa我我, 屋檐铃兰448, 11.7suki, 曾青, and others
Discord Closed Beta Reviewers (V0.2) (listed by join order): NukeA.I, 6DammK9, Talan, Ryusho, NAMAME, Anzhc, Ly, HyperClap, itterative, Chat Errꙫr, rred, Bluvoll, Void, Mirai, 𝕮𝖔𝖈𝖔𝕶𝖔𝖐𝖔 (dvxdvxdv). Your skill, passion, and fearless feedback make you the backbone of this open-source community—thank you for every priceless note that keeps the iterations moving forward.
ChenkinNoob-XL-V0.1
New Image Generation Model
The ChenkinNoob-XL-V0.1 model is built upon the Noob XL architecture by continuing training from the noobai-XL-1.1 checkpoint, using an enhanced dataset that extends Danbooru2024 with additional data from August 2024 to September 2025. It upholds the high-quality performance of the Noob XL series and delivers exceptional image generation capabilities.
Contributors
Special Thanks (Technical Support): @chinoll (Huggingface | GitHub) for providing long-term technical support and engineering assistance to the ChenkinNoob team.
Technical Advisors (Laxhar Lab): Special thanks to @LAX (LAX) and @Nebulae (Nebulae) from Laxhar Lab for serving as long-term advisors and providing continuous guidance on model design and training.
Promotional Video Production: 米豆粒, 衡鲍, 娜拉普 SenSen
DeepGHS: Thanks to the deepghs team (founded by narugo1992) for open-sourcing various training sets, image processing tools, and models.
OnomaAI: Thanks to OnomaAI for open-sourcing a powerful base model.
Community: Thanks to yiyi, 双月丸‖soutsukimaru, 天满, Seina_, 448, SenSen, 古川本铺, storyaura, M1nYue, Deta_DT, sheenderwn, MLiang, Sdwang, BLACKDUO, 杏仁豆腐, 孤辰, 砚青, and others for providing cover art and promotional assets used throughout the campaign, and thanks to all group members for participating in the model's closed beta testing.
Communication
QQ Group: 425395454
Discord: https://discord.gg/jzwzwyCKZ4
Model License
This model's license inherits the fair-ai-public-license-1.0-sd from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 and adds the following terms. Any use of this model and its variants is bound by this license.
I. Usage Restrictions
Prohibited use for harmful, malicious, or illegal activities, including but not limited to harassment, threats, and spreading misinformation.
Prohibited generation of unethical or offensive content.
Prohibited violation of laws and regulations in the user's jurisdiction.
II. Commercial Prohibition
We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products.
III. Open Source Community
To foster a thriving open-source community, users MUST comply with the following requirements:
Open source derivative models, merged models, LoRAs, and products based on the above models.
Share work details such as synthesis formulas, prompts, and workflows.
Follow the fair-ai-public-license to ensure derivative works remain open source.
IV. Disclaimer
Generated models may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage.
Comments (33)
PAIN just when I got to sleep 0.5... xd
Have fun!
Will LoRAs trained on noobai and ChenkinNoob v0.1 and v0.2 work on v0.5?
Most of them are compatible.
EPS models just hurt my eyes at this point.... so out of touch.
supposedly 1.0 is still planned to be eps, after which they'd be looking at something else (RF conversion, or maybe a new model altogether)
and not sure what happened to the RF branch that was made together with cabal, 0 info since 0.3
It's here, it's here!
When ckn 0.5 updated, everyone softly praised its name.....
The bad-hands problem is very serious.
By the way, this model isn't compatible with acceleration LoRAs, right? With the DMD2 acceleration LoRA my output quality is really bad (´゚ω゚`)
Compared with v0.2, hands in v0.5 are much worse; it's hard to get normal-looking hands, though image quality really has improved a lot.
wait, so this is NOT a rectified flow model? :O
Thank you for your hardwork!
I've now heard that the RF branch (collab with cabal) may or may not have been cut
disappointing honestly
Thank God it isn't! I hated working with RF. Such a pain in my fully fuzzy bunny butt.
@BattleRabbitAIart honestly.. I kinda agree with this, dealing with all the settings and compatibility gets annoying
@Rehvka I've been curious about this and want to get a better understanding - which compatibility settings get annoying exactly?
for me it was 1 new node in the workflow (ModelSamplingSD3) and then it just works.
@11yu I just do some gens casually here and there, so I don't keep up with new stuff. I use Forge NEO, not Comfy or the other Forge fork, so until recently I didn't have access to flow models. And changing all of this just to test something that is WIP is kinda annoying.
@Rehvka RF works on Neo.
@HaloSkull thats what I said, that neo just recently got the RF update a few weeks ago
Is the model really using data through January 2026? Some artists who changed names have no data at all, and artists I uploaded before January also seem to have no effect?
The new data hasn't been fitted very well yet; later versions should improve this.
Am I the only one who thinks v0.2 is still better?
Don't know why, but with v0.5 I can't generate normal good-looking images compared to v0.2; comic panels and multiple views do seem more creative than v0.2, though.
v0.5 delivers a more detailed image, but on v0.2 post-processing prompts work better and characters look closer to the originals; on v0.5 character faces look weird. This was without using LoRAs; most images on this model use them plus post-processing tools, so faces look good. Maybe it's just my prompts, but I like v0.2 more.
Edit: For merges, v0.5 is a lot better.
v0.2 is easier for me to handle.
Showing my support!
From a quick hands-on impression, the style is much easier to tweak now — I can easily twist it to match my own look. Anatomy is decent too. The only issue is that it's not very stable when you're relying purely on prompting, otherwise it would feel ready for everyday use already.
Back with 0.2, from what I remember... you really had to force it — and because of that forced steering, the style suffered and anatomy issues were more common.
Anyway... looking forward to all the fine-tunes, haha
Not sure whether it's a prompt issue, but compared with v0.2, when prompting with Danbooru tags, many new characters that clearly had plenty of data by January 2026 still don't come out well. That said, v0.5 itself is indeed good.
Good. Tried to merge with another model and it's super good to use!
top sdxl
The 0.5 version is good too, but I'm curious about when the next version is scheduled to be released. Could you let me know around when the next version is planned to come out~?
It will probably take 2-3 months, because they took so long just to add January's Danbooru dataset. I like this checkpoint a lot, raw and flexible, and I know training checkpoints is not an easy task, though I was fairly disappointed to see that only a month's worth of metadata was added in v0.5, even after skipping v0.4. I hope people support them so they can achieve great results in a short time.
The usage conditions are way too strict. If you're going to be this strict, you might as well just use it by yourself. What's the point of a checkpoint that can't be used commercially?