Model Introduction
Model Details
Developed by: ChenkinNoob team
Model Type: Diffusion-based text-to-image generative model
Fine-tuned from: Laxhar/noobai-XL-1.1
Sponsored by: 极逸 SOON (soonjy.com)
Independence Statement: ChenkinNoob team is an independent team; this project is not an official release of Laxhar Lab, but is built upon their open-source base model.
Participants and Contributors
Participants
Chenkin: Civitai | Huggingface
waww: Huggingface
leafmoone: Huggingface
heathcliff: Huggingface
tairitus: Huggingface
Ryan Corwin: Huggingface
chinoll: Huggingface
spawner: Huggingface
ChenkinNoob-XL-V0.5

Long time no see, everyone! It's been a while since our last release, and today we are thrilled to bring you the brand new major version trained on top of V0.2: ChenkinNoob-XL-V0.5. This update focuses on improving overall aesthetics, reducing common AI artifacts, and enhancing practical usability for game art production and illustration workflows.
Key Updates & Features:
~12M Dataset Optimization: We removed some style-divergent images (<1M), retained our core 9M open-source anime dataset (Danbooru, with data cutoff up to Jan 2026), and added 2.17M strictly filtered open-source game concept designs and high-quality Western datasets. The total dataset is now around 12 million images.
Custom Training Architecture: To efficiently handle this massive dataset, we developed an exclusive training script from scratch. This allows us to better utilize Hierarchical Dropout and Repeat Tag Resampling, significantly improving the model's generalization.
Exclusive 8-in-1 ControlNet: Released alongside Chenkin-UniControl-XL.
Highlight: FuseMode allows you to mix multiple ControlNet conditions simultaneously (e.g., Lineart + Depth + Pose) in a single pass without weight interference. Requirement: to use this in ComfyUI, you must install our exclusive plugin: ComfyUI-Advanced-ControlNet.

Recommended Settings:
Steps: 25 ~ 30 | CFG Scale: 5 ~ 6 | Sampler: Euler a
Positive Prompt:
masterpiece, best quality, newest, high resolution, aesthetic, excellent, year 2026
Negative Prompt:
nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands
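For scripted workflows, the recommended settings above can be bundled into a small config object. The sketch below is illustrative only; the `GenerationConfig` class and its field names are our own invention, not part of any official ChenkinNoob tooling:

```python
from dataclasses import dataclass

# Recommended prompts for ChenkinNoob-XL-V0.5, copied from the model card.
POSITIVE = ("masterpiece, best quality, newest, high resolution, "
            "aesthetic, excellent, year 2026")
NEGATIVE = ("nsfw, worst quality, old, early, low quality, lowres, "
            "signature, username, logo, bad hands, mutated hands")

@dataclass
class GenerationConfig:
    """Bundle of the card's recommended sampling settings."""
    prompt: str
    negative_prompt: str = NEGATIVE
    steps: int = 28          # card recommends 25-30
    cfg_scale: float = 5.5   # card recommends 5-6
    sampler: str = "Euler a"

    def full_prompt(self) -> str:
        # Prepend the recommended quality tags to the user's subject tags.
        return f"{POSITIVE}, {self.prompt}"
```

Any UI or API that accepts prompt, negative prompt, steps, CFG scale, and sampler fields can then be driven from one such config.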
Future Roadmap: To better achieve character and style transfer, our exclusive IP-Adapter (IPA) has officially entered the training schedule! Additionally, we are actively researching entirely new model architectures and hope to share these new developments with you later this year. Stay tuned!
V0.5 Acknowledgements:
Core Ecosystem: MIAOKA, silvermoong, nian__gao233, yuno779
Tech Advisors: @LAX, @Nebulae (Laxhar Lab)
Art Advisors: MLiang, BLACKDUO, Sdwang
Cover & Promo: poi, neko, MMX, and others
Discord Beta Testers: Bluvoll, Anzhc, Drac (Special Thanks), talan, Panchovix, itterative, Ryusho, Ly, Silvelter, and others
QQ Beta & Visuals: heathcliff, boundless, 2222k, suqingwei114514, 三费武装白色人种, vv--laov, and others
Community Support: 孤辰, 昊天, 米豆粒, 乾杯君 (Snke), 砚青, 双月丸‖soutsukimaru, 大尾立人间体, 青空, 喵九, and others. A huge thank you to all the friends who helped us throughout the training process!
--
ChenkinNoob-XL-V0.2

ChenkinNoob-XL-V0.2 is now live on Civitai. This update builds on the noobai-XL-1.1 backbone while extending the Danbooru training set through 2025-11-23. Compared to V0.1, you can expect:
- Sharper character fidelity: better pose stability, facial consistency, and accessory detail thanks to refreshed captions and alignment tweaks.
- Prompt responsiveness: more reliable control over outfits, lighting, and color palettes, especially in multi-character prompts.
- Open pipeline refinements: updated default prompts/negative prompts, plus cleaner metadata to simplify LoRA or LyCORIS fine-tuning.
Recommended baseline:
CFG 5–6, 25–30 steps, Euler a, 1024×1024-class resolutions
Positive: high resolution, aesthetic, excellent, medium resolution, year 2025, newest
Negative: low resolution, e621, Furry, old
New Tag System
V0.2 introduces additional tags to better surface high-quality generations and recent-era aesthetics.
| Tag Type | Label | Description |
| ----------- | ----------------- | --------------------------------------------- |
| Resolution | **High Resolution** | Image area ≥ 2048 px |
| Resolution | **Medium Resolution** | 1024 px < image area < 2048 px |
| Resolution | **Low Resolution** | Image area ≤ 1024 px |
| Aesthetic | **Esthetic** | Top 100k most liked Danbooru posts |
| Aesthetic | **Excellent** | Top 1M most liked Danbooru posts |
| Era | **Year YYYY** | Explicit year tag (e.g., `year 2025`) |
| Era | **Range: Newest** | Aggregated label for 2022–2025 content |
| Era | **Range: Old** | Aggregated label for 2018 and earlier content |

These labels appear in metadata and recommended prompts so creators can quickly filter for the visual qualities they need.
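As a worked example of how the resolution tiers partition images, here is a minimal sketch. Note the card does not state which measurement the 1024 px / 2048 px thresholds apply to, so this assumes the longest edge; that assumption is ours, not confirmed by the authors:

```python
def resolution_tag(width: int, height: int) -> str:
    """Map image dimensions to the V0.2 resolution tags in the table above.

    Assumption: the card's 1024 px / 2048 px thresholds refer to the
    longest edge of the image; the card itself does not specify.
    """
    long_edge = max(width, height)
    if long_edge >= 2048:          # High Resolution: >= 2048 px
        return "high resolution"
    if long_edge > 1024:           # Medium Resolution: 1024 px < x < 2048 px
        return "medium resolution"
    return "low resolution"        # Low Resolution: <= 1024 px
```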
Stay tuned for V0.3 (ETA January 2026) and the forthcoming anime-focused SDXL all-in-one ControlNet tied to our 1.0 milestone. Join the conversation, share your renders, and let us know how V0.2 performs in your workflows!
V0.2 Contributors
Cover Art Providers (V0.2): poi, Sdwang, freewill, 一费白色人种, 十一, wa我我, 屋檐铃兰448, 11.7suki, 曾青, and others
Discord Closed Beta Reviewers (V0.2) (listed by join order): NukeA.I, 6DammK9, Talan, Ryusho, NAMAME, Anzhc, Ly, HyperClap, itterative, Chat Errꙫr, rred, Bluvoll, Void, Mirai, 𝕮𝖔𝖈𝖔𝕶𝖔𝖐𝖔 (dvxdvxdv). Your skill, passion, and fearless feedback make you the backbone of this open-source community—thank you for every priceless note that keeps the iterations moving forward.
ChenkinNoob-XL-V0.1
New Image Generation Model
The ChenkinNoob-XL-V0.1 model is built upon the Noob XL architecture by continuing training from the noobai-XL-1.1 checkpoint, using an enhanced dataset that extends Danbooru2024 with additional data from August 2024 to September 2025. It upholds the high-quality performance of the Noob XL series and delivers exceptional image generation capabilities.
Contributors
Special Thanks (Technical Support): @chinoll (Huggingface | GitHub) for providing long-term technical support and engineering assistance to the ChenkinNoob team.
Technical Advisors (Laxhar Lab): Special thanks to @LAX (LAX) and @Nebulae (Nebulae) from Laxhar Lab for serving as long-term advisors and providing continuous guidance on model design and training.
Promotional Video Production: 米豆粒, 衡鲍, 娜拉普 SenSen
DeepGHS: Thanks to the deepghs team (founded by narugo1992) for open-sourcing various training sets, image processing tools, and models.
OnomaAI: Thanks to OnomaAI for open-sourcing a powerful base model.
Community: Thanks to yiyi, 双月丸‖soutsukimaru, 天满, Seina_, 448, SenSen, 古川本铺, storyaura, M1nYue, Deta_DT, sheenderwn, MLiang, Sdwang, BLACKDUO, 杏仁豆腐, 孤辰, 砚青, and others for providing cover art and promotional assets used throughout the campaign, and thanks to all group members for participating in the model's closed beta testing.
Communication
QQ Group: 425395454
Discord: https://discord.gg/jzwzwyCKZ4
Model License
This model's license inherits from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 fair-ai-public-license-1.0-sd and adds the following terms. Any use of this model and its variants is bound by this license.
I. Usage Restrictions
Prohibited use for harmful, malicious, or illegal activities, including but not limited to harassment, threats, and spreading misinformation.
Prohibited generation of unethical or offensive content.
Prohibited violation of laws and regulations in the user's jurisdiction.
II. Commercial Prohibition
We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products.
III. Open Source Community
To foster a thriving open-source community, users MUST comply with the following requirements:
Open source derivative models, merged models, LoRAs, and products based on the above models.
Share work details such as synthesis formulas, prompts, and workflows.
Follow the fair-ai-public-license to ensure derivative works remain open source.
IV. Disclaimer
Generated models may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage.
Description
ChenkinNoob-XL V0.2 is out now. We extended the Danbooru training set through Nov 23, 2025, which noticeably boosts surface texture, lighting depth, and stylistic cohesion versus V0.1. The release also debuts a richer tagging system: Esthetic/Excellent tiers for aesthetics, explicit year labels, and High/Medium/Low resolution markers, making it easier to filter renders by era and fidelity inside community workflows. Dive in, share results, and let us know how V0.2 performs.
Comments (132)
智齿主播
Magnificent, no words needed.
The anatomy in 0.2 is much better than in 0.1.
nice christmas gift
Amazing! Amazing! It feels like I can understand so much more now! Hahaha
This base model works beautifully, the colors are lovely, there's no overexposure at all, and the contrast is very soft~ Really looking forward to the finished version! ૮ ˶ᵔ ᵕ ᵔ˶ ა
thank~
Hi, I'm still a beginner here, so I'd like to ask:
does the new Danbooru data in v0.2 include new works by existing artists, works by newly added artists, new characters, and new tags (concepts, poses, objects, scenes, etc.)?
Is my understanding correct?
I wanted to try old but obscure tags without relying on LoRAs, and found the model still can't understand them. Could new tags be trained in the future? Is that technically feasible?
Finally, thanks to your team; looking forward to the next update.
Support!!!
I am thinking of this too. Does the "up to November 2025" mean that we're getting almost year and half of new data, or was the training selective? From my testing, some characters who got TONS of art around late 2024 really are not recognized very well.
I tried this out for a bit, and while update to NOOB database is definitely welcome, I think the character knowledge needs more time in the oven.
For example, trying to get the model to output Burnice White (from Zenless Zone Zero) gives very poor results, even with her basic features and clothes tagged in. And she's not exactly obscure character. Mind, adding "zenless zone zero" as a tag does help, but it's still not really as accurate as one would hope for.
Still, looking forward to seeing how this develops!
I just get really bad and aesthetically unpleasing results. Does this model need special prompts?
You can still use quality tags from other models as it turns out! Look for my comment in the discussion.
Hi, was the model trained with standard noise schedule, or with noise offset or multires noise?
standard noise schedule
The example images you shared look fantastic, but I'm struggling to reproduce the same style. Could you share the generation settings and prompts you used for those example images, especially the ones featuring Chinese game characters (maybe Arknights or Wuthering Waves)?
One moment, I'll add them.
@Chenkin Thanks a lot! I checked metadata
I have the same problem; I cannot get a good output without using a large number of artist tags. Definitely not a model you can just yolo prompt with.
The model card doesn't say it, BUT you can still use the quality tags! I get much better results with
These positives: masterpiece,best quality,amazing quality
These negatives: bad quality,worst quality,worst detail,sketch,censor,
Why are my outputs still kindergarten-doodle level even after adding the recommended positive/negative prompts and setting the recommended parameters? Does this model need dozens or hundreds of quality tags at once just to produce barely passable images?
Hi, AI image generation involves not just the model and prompts but also parameters and the various controls each node provides. If you want images of the same quality as the showcase, you can dig into the community and study these topics in depth.
Why not find an image you like in the community and copy its workflow or prompts to try?
I think it would be better if you updated the metadata on the showcase images for this model. The general public won't have any idea how to use it if there isn't even a proper reference from its own uploader.
Thank you for reminding me. The problem lies with how the image was saved. The image generation prompts will be added this weekend.
Bit of an update, after having tested some of new characters.
Out of Zenless Zone Zero characters, Ju Fufu actually works pretty well when her clothes and features are tagged in... and with the "zenless zone zero" tag also applied. Yixuan kinda works, but not as well. Yuzuha works extremely poorly, and I didn't test characters released after her.
Out of Star Rail characters, The Herta and Hyacine work well, but Cerydra has some problems coming through.
From Genshin, Mavuika and Xilonen work very well.
Out of Blue Archive characters, Eri works quite well when she is tagged well enough... but Aoba doesn't really work at all, despite being an older 2025 character with over a thousand pieces on Danbooru too.
Maybe everyone will work fine in future, but Aoba's and Burnice's case is fairly curious. Just wanted to let the team know this, in the case it's something they weren't aware of yet.
Even the model forgets Aoba exists lmao
It can gen Renge really well, better than any other model I've seen. Which is impressive with her weird tattoo arm and leg thingy and tail.
@junkdoll Renge working isn't that surprising, she already worked really well on base NOOB based models, like PersonalMerge. So her being in isn't really new data.
@Cloverful She did, but her right arm and left leg red pattern thing never turned out well for me at least. Sometimes I don't have to inpaint it at all with this model.
Gonna make a merge out of this cause I'm curious. Eventually...
Edit: It'll be my first cartoony themed one tbh.
The corresponding prompts have been added to the cover and can be viewed by clicking the bottom-right corner of the image.
For a small number of images, the original files were removed, and as a result, the associated generation prompts could not be preserved. Thank you for your understanding.
Is it better for non-character and non-artist tags, or does it have the same issues as the original Noob?
Yes
I think this is the first model I've seen that respects weird styles like "oil painting" for anime without it generating someone painting and actually looking like a painting instead of a regular anime character standing in front of an oil painted background. I'm really looking forward to more models using this as a foundation.
Is "oil painting" a weird style?
@qek No, not really. I just find most models with a lot of anime stuff in them have a rough time doing really stylized looks like that. Replicating artists styles are usually not a problem, but 9 times out of 10, getting one to do anything other than "watercolor" decently seems to be a tough ask. Lol.
very interesting, and yes, this seems to be the only illustrious/noobai-family model where "oil painting \(medium\)" tag actually produces something that looks like an oil painting.
Wdym? What about merges and other trained models?
@qek of all the illustrious/nai models i've tried so far, watercolor (medium) tag seemed to usually produce something that could be called a watercolor effect if you squinted a lot, but oil painting tags either generated an actual painting in the image (lol) or had no visible effect.
this model however seems to have included actual traditional artwork in the dataset that was properly tagged.
After playing around with it on its own and in a few merges, I can say it is a great all-around model! Here are a few tips for whoever wants to try it, because it's harder to use in my opinion. This model is accurate but also extremely sensitive to prompt order; moving a tag two words can produce completely different results, so be careful how you describe your image. It also seems to learn tags independently rather than jointly: between two complementary tags, it will usually put more weight on one or the other.
Would it be possible to add E621 data to the model in the next update?
Maybe keep it as a separate version? Putting so much data in one model would hinder the depth it can reach for each use case, making it a jack of all trades but master of none.
Wait, it doesn't have E621 data? I thought it built upon NAI, which already has it?
@m4rbleye I believe that data is out of date... I could be wrong tho
Farting on prey all by yourself, handsome?
@m4rbleye This NoobAI version didn't include the e621 dataset; it was trained purely on the Danbooru dataset.
@springmushroom_86 What are you on about? It's being fine-tuned on top of eps 1.1 Noob, so it has e621 data from that model; it's just being updated with a newer Danbooru dataset and trained more overall.
@makotoyuki No, Chenkin stated that their model was trained on the full Danbooru dataset; nowhere does it mention e621.
On Chenkin NoobAI 0.1 (from huggingface):
"This is an image generation model based on training from noobai-XL-1.1. It utilizes **the latest full Danbooru-based datasets** for training, with native tags caption."
If they had trained on e621, they would have mentioned it on their model page too, like the original NoobAI XL EPS 1.1.
This model was finetuned with only danbooru, on top of Noobai, so the e621 artists, concepts and characters from Noobai got a bit weaker. They will get weaker and weaker with each extra danbooru-only training.
The e621 artists I use on Noobai already barely have an effect on Chenkin 0.2.
I'm getting quite a few artifacts around the eyes and mouth, even when using upscaling, the recommended settings, and positives/negatives...
Does anybody have tips on how to fix this?
The model seems very promising though!
Quick question: The whole point of NoobAI is that it doesn't rely heavily on LoRAs, right? So why does everyone seem to use them anyway? Even for me, the results are pretty bad when I don't use a LoRA. I just don't get it
NAI has a vast knowledge of characters, artist tags and styles from Danbooru. If you prompt for them and it has enough representation, it can give you good results by prompting alone.
Any plans for V-Pred version?
The model looks amazing and the images are coherent, but the hands… dear God. Send help.
There is a stabilizer lora for v0.2. It really helps.
This is normal because it's just base model only, there's a stabilizer LoRA for Chenkin Noob.
@springmushroom_86 Thanks, I'll try that; the model shows promise!
@springmushroom_86 What keywords should I search for to find this stabilizer Lora
@Aetherlyn this one: https://civitai.com/models/971952/stabilizer-ilnaick
When will the next version be updated?
February
Do tags from IL0.1 and NAI Epsilon 1.1 work on this model?
yes.
The model is pretty nice and I love how the images look, but I have a question: does this version include most Danbooru artists? I'm noticing that certain artists don't seem to work and the model defaults to the generic AI slop style. I don't mean artists with fewer than 30 works, but artists with up to 100 or 200 works, such as Terasu mc.
Is there a recommended VAE? I normally use SDXL_VAE or SDXL VAE Only B1, but I feel my images are a little more washed out. Otherwise, amazing aesthetic quality and styles; I'm just noticing some desaturation on images.
Try this one XL_VAE_C - c9.1 or this XL_VAE_C - G9
ChenkinNoob-XL-V0.3 (ETA: January 2026)
Focused on further stability, inference efficiency, and better LoRA compatibility.
Rolling preview builds will be shared with the Discord closed-beta group ahead of public release.
Hugging Face said the 0.3 update was coming in January, but it's still not here? When is it actually dropping?
I hereby appoint you to take down NAI and make local models great again.
Since we still need to make significant adjustments to the training scripts, the release of ChenkinNoob-XL-V0.3 has been postponed to February.
However, we also have some good news: the Rectified Flow (RF) version based on ChenkinNoob-XL-V0.2 is expected to be released soon.
Is it okay to train LoRAs on this, or do we need to use the base noobai-XL-1.1 for training and then use this one for inference? @Chenkin
Training LoRAs on it is fine.
When will the Noob team realize that v-prediction is the definitive answer?
Actually, RF is the meta now, not eps or v-pred.
And they did just release Chenkin RF 0.2; I would say it's pretty good.
New RF: https://huggingface.co/ChenkinRF/ChenkinNoob-XL-v0.2-Rectified-Flow
RF technology isn't mature yet. V-prediction has now been shown to produce better color, gamut, and lighting; it just needs a newer training set.
Super easy to use, looking forward to the new version. EPS and v-prediction each have their own use cases, and this version's results are already stunning. Hope the team keeps advancing at its own professional pace without being distracted by the occasional noise ❤️
ChenkinNoob-XL-v0.2 Rectified Flow is officially released! LINK:https://civitai.com/models/2363696
A Rectified Flow model based on ChenkinNoob-XL-V0.2, released by ChenkinRF Lab.
What is Rectified Flow? A sampling method that "straightens" the diffusion path for faster convergence and better image quality.
Key advantages: Vivid colors with no more greyness, better lighting in dark and high-contrast scenes, stable across CFG 3-6, fewer steps needed — just 20-28.
Recommended settings: Sampler Euler / DPM++ SDE, CFG 3-6, Steps 20-28, Shift 3-8
Is there a Hugging Face link? Also, the Hugging Face model size is different from the previous one:
https://huggingface.co/ChenkinNoob/ChenkinNoob-XL-V0.2
@krigeta https://huggingface.co/ChenkinRF/ChenkinNoob-XL-v0.2-Rectified-Flow
Great work! Thank you very much.
I saw this in the Chenkin's profile: "【Current Work】 (1) NOOB2 Series - Multimodal model development (2) Next-Gen Text-to-Image Models - Community fine-tuning and full-parameter training (e.g., z-image)"
Does that mean there will be a spiritual successor to NoobAI built on Z-Image?
Does "Noob" mean the generated images look like a novice artist's work? 😂 This branch is always like that.
It seems a bit tricky to draw a PURE white background on WebUI.
A stunning model with a bit of a learning curve!
Honestly, from an ordinary user's perspective, the example images your team posts in the future should mainly demonstrate the model's floor, and the ones on the release page should prioritize listing complete parameters rather than purely chasing the best-looking showcase. The images cranked out by contributors whose competitive spirit was fired up in your internal selection events are indeed top-notch, but some submissions were clearly polished with a lot of automated post-processing through very complex private workflows. How is an ordinary passerby supposed to appreciate the base model's own strengths and general applicability from those?
The NoobAI family already has a fairly high barrier to entry compared with other merged Illustrious base models; deliberately adding another hurdle to reproducing the example images seems genuinely unnecessary to me.
What parameters am I supposed to use? I can't figure it out at all.
It should be simple prompts with famous characters and the latest characters.
Use artist tags well and you can get plenty of beautiful images even with WebUI.
@FarewellYhl But I can get similar results with other models too.
@FarewellYhl Again with "use artist tags well." If the release examples don't demonstrate the model's applicability without special prompts, and everyone hoards their closed-source artist strings like treasure, how do you demonstrate the base model's superiority? When direct generation has a higher barrier than the so-called "shitmixes," where exactly is the "noob" in this base model, I have to ask.
@Owlfelino Without lots of style prompts or anchor artist tags, just adding anime_screenshot and rolling 100 times gives you 100 completely different styles. This model's variance is so large that it simply can't demonstrate the universality you're asking for.
Thank you for the suggestion. In the next version, we will adjust the ranking priority to give higher weight to covers that include generation metadata and are produced directly by the base model without using LoRA, so they are more likely to appear near the top.
I completely agree; the example images really should come with their prompt metadata... that way everyone could understand how they were generated much more easily ( ̄▽ ̄)b
All the images I generate are garbage; this is so hard.
You can try using cknb_stabilizer to improve image quality. Also, referencing the LoRAs others used for their generations is a good approach.
Just load a few LoRAs you like; even prompts typed with your feet will produce something satisfying.
Need new version!!
not good
It's the last day of February; will 0.3 arrive today? (probably delayed again O_o)
Where's the 0.3 version? People were saying it was coming soon…
I don't care about ChenkinNoob-XL-v0.2 Rectified-Flow or any of that. Just release 0.3 already.
Fuck off, let them cook.
Well, their HF says "ChenkinNoob-XL-V0.3~1.0 (Planned)" and v0.3 version is already on their private repo, so I think they just want to release later versions if not straight up 1.0. They also changed the plan to "before the end of April".
For the roadmap after v0.2, see this article: https://civitai.com/articles/26762
Please crop out artist watermarks and use a clean dataset for training. It is already a good checkpoint, as I have done lots of local generations, but it is also very clear that artist watermarks have not been taken care of. It would totally defeat the purpose of adding updated Danbooru and e621 if artist watermarks aren't handled, because every other fine-tuned merge out there is already updated with new tags and characters, just not artist tags.
We’d like to handle watermarks in the images and train on cleaner data, but our dataset is huge (tens of millions of samples), so doing it across the board is very hard. We’re not giving up—we’ll keep looking for practical approaches in future iterations. Thanks for the feedback.
Good point
Other Noobai and ILXL models cannot make some poses and situations, but Chenkin can understand much more Danbooru tags.
Weak point
Even when I use the same artist prompt and style LoRA, the style changes every time I generate images.
Hand and background quality is not great.
True, it's difficult to control
It lacks artist style and focuses too much on gratuitous lighting. If you trained the model on a Danbooru dataset, it needs to focus more on artist style and art style.
This model also adds unnecessary lighting when you use an artist-style LoRA, and the image gets totally messed up.
@itachiii Most artist tags work well and produce good results. I think you'd better try another artist. I don't have any lighting problems.
I've been following this for a long time; finally some news! You have my full support.
This model is a total downgrade; very disappointing.
This model fails to follow artist tags and only focuses on lighting.
This model needs to focus more on artist style, not on lighting.
If I had to say, I feel this model works better with artist tags, though sometimes the anatomy is still a bit unstable... Were the tags you used already present before this model's training data cutoff?
It's because it's version 0.2; it's still raw as hell for niche stuff.
Overall this model is really great...... (≧▽≦) The only trouble I've run into so far is that anatomy and hands can be a bit unstable at times, somewhat random (´・ω・`)
But as training continues, these issues should gradually improve!! Hoping for an update soon!!! (ノ◕ヮ◕)ノ*:・゚✧
Just gave ChenkinNoob 0.2 a spin, and honestly, the raw recognition is insane. I’ve tried so many tags that were basically "invisible" to regular SDXL or Illustrious, but this model actually reacts to them.
Even though it’s still an early build—meaning the accuracy isn't quite there yet and stability definitely needs more polish—it’s a massive leap. It feels like a "zero to one" moment for some of these niche styles. Seeing it pick up on tags that every other model completely ignored is a huge deal. Seriously huge potential here once it’s matured.
True and false at the same time. 0.1 is still better overall, but this is typical - about two-thirds of all versions of all models are worse than some of their previous versions, but after two or three versions, there is a noticeable improvement, so it is worth letting it go through another one or two iterations before drawing conclusions. As for better understanding of the prompt and stability, good merges will always win here, but the popular ones are not very good and are treading water.
Testing has me real excited. While it is a little iffy when it comes to style consistency, it looks really good! Great job on the model.
What kind of prediction model does this use? (e/v-pred). I am always unsure with some NAI models.
Chenkin NoobAI is an epsilon model.
@springmushroom_86 Thanks!
Why do my WebUI outputs look so cartoonish? I configured everything as described.
When is the next version coming out? I think you mentioned it would be released in April.
Thank you for your patience — Chenkin v0.5 will be officially released on April 10.
In addition, a companion model is also currently in preparation.
Files
chenkinNoobXLCKXL_v02.safetensors
Mirrors
chenkinNoobXLCKXL_v02.safetensors
ChenkinNoob-XL-V0.2.safetensors
8_chenkinNoobXLCKXL_v02.safetensors
Available On (3 platforms)
Same model published on other platforms. May have additional downloads or version variants.