None of the preview images were upscaled, detailed, or otherwise post-processed beyond adjusting the sampling steps.
This model was created with the goal of providing an unbiased, stable, flexible, and easy-to-use foundation, suitable for standalone use or with LoRAs. (The same goal I had with KNK PonyMerge)
Features
No extensive negative tags required: Works properly without the need for extensive negative prompts, but adding some (e.g., "worst quality, low quality") can help refine outputs in certain cases.
Improved limb generation: Produces better anatomy right out of the box.
Unmatched LoRA compatibility: Designed for seamless integration with a wide range of LoRAs.
Faithful native style representation: Accurately represents the native styles embedded in the model while remaining versatile for style LoRAs.
User-friendly: Simple and intuitive to use.
Natural language compatibility: Excels in understanding natural language prompts.
How to Use
Since the model is largely based on NoobAI, it is recommended to follow the same strategies suggested in its guide, with some additional considerations due to the integration of IterComp in the merge:
You may need to use "text" as a negative tag, as the model tends to interpret unfamiliar concepts as text to display in the image (this can be an intentional or unintentional effect, depending on your needs).
Use the format "artist: {artistName}" when specifying artists, as the model is highly literal and may generate characters with similar names.
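As a concrete illustration of the two tips above, here is a hypothetical prompt pair; the tags and the artist name are placeholders, not recommendations from the model author:

```python
# Hypothetical example following the tips above: the artist is specified
# with the "artist:" prefix so the name isn't taken literally as a
# character, and "text" is added to the negative prompt to suppress
# unwanted rendered text.
positive = "1girl, solo, artist: exampleartist, cityscape, night"
negative = "worst quality, low quality, text"
```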
Version 1
This version was built using an early iteration of KNK LumiNAI as a base, merged with IterComp, rehydrated with NoobAI (0.75), and fine-tuned with several LoRAs in negative. This process aimed to remove as much of the original style elements as possible, allowing LoRAs to display their styles more effectively and with minimal interference.
License
This model follows the NoobAI License, so please refer to the NoobAI model page for the most up-to-date licensing details.
Comments
Konoko's Notes:
Thank you so much for following my work! Although this model is based on my model LumiNAI, I didn't merge the two because they have different goals.
I removed most of the semi-realistic style and the other base styles applied to the original model, because they affected several styles and made it hard to apply other style LoRAs. I really needed a good model for testing my Illustrious LoRAs, like the one I had for Pony (KNK PonyMerge). I worked on this model for around 1.7 months, and I think I'm satisfied with the results.
It may still have issues that I didn't see in my tests. I would really like to know your thoughts and comments about it.
Finally, a model that can replace KNK LumiNAI! Love your work.
Is there any potential with the V-Pred models? I'd love to see your take on it
I'm curious about what I could get with them, but I'm having a lot of problems just getting them to run xd. I don't know what's wrong: all my gens look really "wavy", and I can't get decent images without upscaling or some other extra step (if you have a guide for it, it would be really appreciated). Additionally, I still need to learn the technical side of fine-tuning a v-pred model and training a v-pred LoRA. Maybe I was just too focused on this model and didn't put time into it.
@Konoko From my experience, you can't consistently get good images from Noob vpred without a refiner/hires fix. On the bright side, the prompt following and general knowledge of the vpred model are much better. For example, some artist tags give not the usual "something a bit similar" but LoRA-quality style resemblance. Vpred is also better at 2.5D/semi-real.
If you are OK with ComfyUI, you can check my workflow for generating good images with Noob vpred: https://civitai.com/models/1150811.
I actually want to test your model as a refiner to see how good it will work.
@somedoby I'll definitely try it! Thanks. I'm just curious about something: reading the description, it needs some extra steps to make it work. Isn't there a v-pred model that works well without anything else? o.o
@Konoko Yeah, it is kind of ironic how hard it actually is to work with a model called "Noob". As for other models, you can check the "cyberfix" models (linked in the resources on the workflow page); they produce more consistent results out of the box, but they merged an EPS model into the base Noob vpred.
Looks good! I settled on a particular combination of artist tags while working with "The I WonderMix", but it's never really the same when I try them on other Illustrious models. I'm glad my style seems to work here.
I am very excited to try this model out! KNK Ponymerge was one of my favorite Pony models! Love that you made a spiritual successor!
It would be nice to publish the extracted full LyCORIS; it could then be used with newer versions of Noob.
That sounds good; it could work as a kind of stabilizer. The problem is I don't know how to do that haha ;u;. I tried something like that a while ago, but the resulting LoRA was really terrible. It just wasn't capable of producing anything similar to the original model. That was a while ago, so if you know a method to do it, I would love to hear it!
I am curious about the training process and method of this model. Can you explain it? THX
Sure! It's a bit confusing because I was just doing tests in the first part; if you get lost at any point, tell me and I'll elaborate.
I started with an early stage of my trained model LumiNAI (https://civitai.com/models/997160/knk-luminai). I took the Illustrious base and used around 200 cosplay NSFW and SFW images, the dataset I used for HelioBlend (a dataset created with https://civitai.com/models/621361/knk-cieloblend-ponyv6), and an SFW/NSFW dataset of relatively high-quality images from other LoRAs and projects I have. It includes several images taken from sets I already had (like asou, diathorn, wlop, redlight, etc.) and NAI-based images. My criteria were "full color, uncensored, no watermarks, no multiple views, as many different poses as possible, no blur on the character, etc.". The full dataset was small overall: around 800-900 images for the first fine-tuning.
It was trained for around 40 epochs with Adafactor, batch size 1, LR 8e-6 for the UNet and 8e-7 for the text encoder.
The result was TERRIBLE XD. Using ComfyUI with the Mecha nodes, I took the difference to replace the Illustrious base with NoobAI 0.75:
base = Noobai7.5 + (base - illust)
It was then weight-merged with Noobai1.0 at 80/20:
base = (base * .8) + (Noobai1.0 * .2)
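The two steps above are plain per-weight arithmetic. Here is a simplified sketch using dicts of floats in place of real checkpoint tensors; the actual merges were done with the Mecha nodes in ComfyUI, and all values here are made up for illustration.

```python
# Sketch of the two merge operations above, with floats standing in for
# checkpoint tensors. In practice each value is a tensor and the loop
# runs over the model's state dict.

def add_difference(base, a, b, alpha=1.0):
    """base + alpha * (a - b), applied key by key."""
    return {k: base[k] + alpha * (a[k] - b[k]) for k in base}

def weighted_sum(a, b, w=0.8):
    """w * a + (1 - w) * b, applied key by key."""
    return {k: w * a[k] + (1 - w) * b[k] for k in a}

# Toy "checkpoints" with a single shared key.
fine_tuned = {"w": 1.2}   # the fine-tuned model ("base" in the recipe)
illust     = {"w": 1.0}   # Illustrious base
noob075    = {"w": 1.1}   # NoobAI 0.75
noob10     = {"w": 1.3}   # NoobAI 1.0

# Step 1: base = Noobai7.5 + (base - illust)
base = add_difference(noob075, fine_tuned, illust)

# Step 2: base = (base * .8) + (Noobai1.0 * .2)
base = weighted_sum(base, noob10, w=0.8)
```

The difference step keeps only what the fine-tuning changed relative to Illustrious and transplants it onto NoobAI; the weighted sum then pulls the result back toward NoobAI 1.0.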
The result had a deformed "realistic" tendency (small eyes, long bodies, 3D, big teeth o.O), but I didn't want to re-train it (it was around 12 hours of training ;-;).
After that I took the UNet from my Pony model https://civitai.com/models/503856/knk-softshading:
base = base + (softshading - Pony6) * (.75 UNet | .1 text encoder)  // I don't remember the exact percentage I used for the UNet, but it was higher than 50%
This finally reduced the realism tendency, but it made the output... too blurry xd. I had to re-merge (re-hydrate) with Noobai:
base = base * .8 + Noobai1.0 * .2
From this point, using Mecha, I took the difference between IterComp (https://civitai.com/models/840857?modelVersionId=940740) and base SDXL, and added 100% of that difference to this base. (This increases the realism tendency, but it makes the anime lines sharper.)
base = base + (itercomp - sdxl)
I had to re-merge it with Noobai to reduce that XD:
base = base + Noobai1.0 * .35
It still kept the CieloBlend / cosplay bias. I trained a LoRA with the cosplay dataset, and I used the CieloBlend LoRAs I had used for version 2 of HelioBlend (https://civitai.com/models/828469?modelVersionId=926532). I applied them at negative weight to remove those biases. (Just to be clear, the realism bias doesn't mean it does "realistic" things; it has a tendency to create static poses and pale skin, and especially to restrict simple styles with flat colors, with all gens trying to go 3D.)
base = base - .2 * HelioLora - .3 * cosplay
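Applying LoRAs "in negative" amounts to subtracting their weight deltas from the checkpoint. A minimal sketch of that step, again with floats standing in for tensors and all delta values made up for illustration:

```python
# Sketch of merging LoRAs at negative weight: each LoRA contributes a
# per-weight delta, and subtracting it removes the bias it encodes.
# Floats stand in for tensors; all values are hypothetical.

def apply_lora(model, lora_delta, weight):
    """model + weight * delta, key by key; a negative weight removes the style."""
    return {k: model[k] + weight * lora_delta.get(k, 0.0) for k in model}

base       = {"w": 1.30}
helio_lora = {"w": 0.50}  # hypothetical HelioBlend/CieloBlend LoRA delta
cosplay    = {"w": 0.20}  # hypothetical cosplay LoRA delta

# base = base - .2 * HelioLora - .3 * cosplay
base = apply_lora(base, helio_lora, weight=-0.2)
base = apply_lora(base, cosplay, weight=-0.3)
```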
In the final step I used Illustrious to make it more compatible with LoRAs, at the cost of losing a bit of the NoobAI features:
base = base * .75 + illust * .25
And this was the result.
TLDR:
I'm sorry for the long text xd. But in general, if you only want to include IterComp in your model or merge, just merge the difference with SDXL and re-merge it with the original model:
model = (model + (itercomp - SDXL)) * .5 + model * .5
It's pretty similar to how the "CyberFix" models are created. If you are working on a model and have doubts, send me a message. I'm not an expert on fine-tuning, but I think I can help you c:
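The TLDR recipe above is algebraically the same as adding half of the IterComp-minus-SDXL difference to the model. A sketch, with floats standing in for tensors and made-up values:

```python
# Sketch of the TLDR recipe: add the IterComp-vs-SDXL difference, then
# average with the untouched model. Algebraically this equals
# model + 0.5 * (itercomp - sdxl). Floats stand in for tensors.

def itercomp_mix(model, itercomp, sdxl):
    """(model + (itercomp - sdxl)) * .5 + model * .5, key by key."""
    return {
        k: 0.5 * (model[k] + (itercomp[k] - sdxl[k])) + 0.5 * model[k]
        for k in model
    }

model    = {"w": 1.00}
itercomp = {"w": 0.90}
sdxl     = {"w": 0.70}

merged = itercomp_mix(model, itercomp, sdxl)
```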
@Konoko Thank you very much for your reply XD, very detailed!











