If you want to use my checkpoints for online generation, please visit:
https://tensor.art/u/762555264535746522
V-pred-04
Data balancing and adjustments have been made, and some of the background logic has been optimized.
Recommended settings:
Steps: 30
CFG scale: 5-7
Sampler: Euler a
Positive Prompt:
masterpiece,best quality,newest,absurdres,highres,very awa
Negative Prompt:
low quality,worst quality,normal quality,text,jpeg artifacts,bad anatomy,old,early,copyright name,watermark,artist name,signature
V-pred-02
Improved limb stability, with the overall color leaning toward a warmer tone.
The v-prediction versions are experimental models.
You need to use a WebUI that supports v-prediction:
ComfyUI
reForge
Forge
AUTOMATIC1111 (dev branch)
Use Hires. fix (ADetailer is rarely needed).
For style and character reference, see NOOB.
All example images are generated at 1024x1360, with Hires upscale: 1.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, Denoising strength: 0.5.
Recommended settings:
Steps: 30
CFG scale: 5-7
Sampler: Euler a
Positive Prompt:
masterpiece,best quality,newest,absurdres,highres,very awa
Negative Prompt:
low quality,worst quality,normal quality,text,signature,jpeg artifacts,bad anatomy,old,early,copyright name,watermark,artist name,signature
V-pred-01
BASE:NOOB V-pred 1.0
This is a test model that attempts some unconventional merging methods.
The merging materials are not purely from v-prediction models, so there may be issues I'm unaware of. If you find any, please leave me a message.
The v-prediction versions are experimental models.
You need to use a WebUI that supports v-prediction:
ComfyUI
reForge
Forge
AUTOMATIC1111 (dev branch)
Use Hires. fix (ADetailer is rarely needed).
For style and character reference, see NOOB.
All example images are generated at 1024x1360, with Hires upscale: 1.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, Denoising strength: 0.5.
Recommended settings:
Steps: 30
CFG scale: 7
Sampler: Euler a
Positive Prompt:
masterpiece,best quality,newest,absurdres,highres,very awa
Negative Prompt:
low quality,worst quality,normal quality,text,signature,jpeg artifacts,bad anatomy,old,early,copyright name,watermark,artist name,signature
V2
BASE:NOOB eps 1.1
This version tries to maintain image quality without quality keywords, so that beginners can use the model easily.
Positive and negative quality prompts are not really necessary.
Use Hires. fix (ADetailer is rarely needed).
For style and character reference, see NOOB.
All example images are generated at 1024x1360, with Hires upscale: 1.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, Denoising strength: 0.5.
No positive or negative quality tags needed.
Recommended settings:
Steps: 30
CFG scale: 5.5
Sampler: Euler a
v1
Recommended settings:
Steps: 28-35
CFG scale: 5-7
Sampler: Euler a
Positive Prompt:
masterpiece,best quality,newest
Negative Prompt:
low quality,worst quality,normal quality,text,signature,jpeg artifacts,bad anatomy,old,early,copyright name,watermark,artist name,signature
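The recommended settings above map directly onto programmatic generation parameters. A minimal sketch, assuming diffusers-style kwarg names (the pipeline object and checkpoint loading are placeholders, not part of this release; the V-pred-04 values are used here):

```python
# Hedged sketch: the recommended settings as diffusers-style generation kwargs.
# "pipe" would be an SDXL pipeline loaded from a local checkpoint with a
# v-prediction scheduler; that part is assumed and left commented out.
POSITIVE = "masterpiece,best quality,newest,absurdres,highres,very awa"
NEGATIVE = ("low quality,worst quality,normal quality,text,signature,"
            "jpeg artifacts,bad anatomy,old,early,copyright name,"
            "watermark,artist name,signature")

gen_kwargs = {
    "prompt": POSITIVE,
    "negative_prompt": NEGATIVE,
    "num_inference_steps": 30,  # Steps: 30
    "guidance_scale": 6.0,      # CFG scale: 5-7, midpoint chosen here
    "width": 1024,              # example images are 1024x1360 before Hires. fix
    "height": 1360,
}

# image = pipe(**gen_kwargs).images[0]  # sampler "Euler a" set on the pipeline
```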
Comments (65)
I think the regular V8.0 is slightly poorer in composition.
Therefore, if image quality is not a concern, this model is more versatile in composition.
Sorry for the question, but I've seen this quality prompt more than once. What exactly is "very awa"? There are English concepts that escape me, and I'm not sure what that specific prompt refers to ;o
This tag was provided by the NOOB model creator, and I'm not entirely sure about its specific meaning, but it can help improve image quality.
@WAI0731 Oh, I see, so it comes from the original model creator. Thanks for answering the question so fast!
"Very Awa" = very aesthetic; it's equivalent to Pony's score_9, as the top x% of images based on an aesthetic scorer received the tag.
@Churrus Ooooooh!!, thank you!
That is the aesthetic label obtained from the aesthetic scoring model. So far, there are only two: "very awa" and "worst aesthetic." The former refers to data in the top 5% weighted scores from waifu-scorer-v3 and waifu-scorer-v4-beta, while the latter refers to data in the bottom 5%. It is named "very awa" because its aesthetic criteria are similar to the (A)rti(Wa)ifu Diffusion model.
@0xSeiunSky Thank you! Now I'm crystal clear! :D
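The tagging scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual NOOB pipeline: tag the top 5% of images by (weighted) aesthetic score as "very awa" and the bottom 5% as "worst aesthetic".

```python
def assign_aesthetic_tags(scores, frac=0.05):
    """Illustrative sketch of the "very awa" / "worst aesthetic" split.
    `scores` maps image id -> weighted aesthetic-scorer output."""
    ranked = sorted(scores, key=scores.get)      # ascending by score
    cut = max(1, int(len(ranked) * frac))        # size of each 5% tail
    tags = {}
    for name in ranked[:cut]:                    # bottom 5%
        tags[name] = "worst aesthetic"
    for name in ranked[-cut:]:                   # top 5%
        tags[name] = "very awa"
    return tags
```

Everything in the middle 90% gets neither tag, which is why "very awa" acts as a quality booster in the positive prompt.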
Boss, can you make a photorealistic Asian checkpoint?
The V-pred version doesn't seem to be as good as some of the other models. Anatomy seems worse, as does overall quality. Then again, I also struggle to get NOOB itself working. Still have no idea why.
Have you tried changing the way you prompt?
@madaraxuchiha88 I'll try some more methods, but so far my experience with V-pred isn't great
I have to admit, I'm a little bit confused by the fascination with v-pred. I have never, in any model, ever noticed what I would consider superior quality from specifically using v-pred instead of epsilon outside of placebo. Cases where a v-pred model generates superior images, I am 100% sure if it was trained on epsilon it would be just as good. With the performance hit of v-pred, I just don't know why it's getting so prevalent.
I share the same view. It's hard to say whether the results from v-pred are good or bad, but many people wanted me to release a version with v-pred, which is why this model exists.
The success of NovelAI3 might have led people to favor V-prediction, and indeed V-prediction has advantages in color and instruction adherence. However, considering the inadequacy of the surrounding infrastructure, the benefits of using V-prediction remain unclear.
Eps is a flawed method: it can't produce completely bright or dark images and always tries to bring the average brightness toward 50%, making it add/remove/change elements unprompted to satisfy that. V-pred fixes that and has superior prompt adherence because of it.
Try making a night or very dimly lit scene in a non-v-pred model and you will see lanterns and light sources sprinkled throughout the image. Now you know why.
@Churrus v-pred might be mathematically better, but like the others say, I've yet to see a v-pred model that outputs results that are even subjectively nicer looking than with ε-pred, let alone objectively. Higher dynamic range? Sure, and for a particular type of image (stylized high contrast 'artsy' kinds), it can look pretty decent, but the severe drop in background quality and style adherence far outweigh any benefits for general use imo. I also haven't seen any of the improvement in prompt adherence during actual use (though it's never been something I've struggled with so much on ε-pred to begin with).
No idea if these are just flaws with NoobAI specifically though rather than v-pred as a whole, but until a decent v-pred base model comes along I think I'll be happily sticking with ε.
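For anyone curious what "v-prediction" actually changes: it is only the network's training target, not the diffusion process itself. A minimal numeric sketch of the parameterization (from Salimans & Ho's progressive distillation paper), assuming a variance-preserving noise schedule:

```python
import math

# v-prediction target: v = alpha_t * eps - sigma_t * x0, where the noisy
# sample is x_t = alpha_t * x0 + sigma_t * eps.  Given the schedule, a v
# estimate and an eps estimate carry the same information about x0.
def v_from_eps(x0, eps, alpha_t, sigma_t):
    return alpha_t * eps - sigma_t * x0

def x0_from_v(x_t, v, alpha_t, sigma_t):
    # alpha_t * x_t - sigma_t * v = (alpha_t^2 + sigma_t^2) * x0 = x0
    return alpha_t * x_t - sigma_t * v

# Round trip on scalar toy values:
t = 0.3
alpha_t, sigma_t = math.cos(t), math.sin(t)   # alpha^2 + sigma^2 = 1
x0, eps = 0.7, -1.2
x_t = alpha_t * x0 + sigma_t * eps
v = v_from_eps(x0, eps, alpha_t, sigma_t)
recovered = x0_from_v(x_t, v, alpha_t, sigma_t)  # equals x0 up to float error
```

Since either target can be converted to the other, the practical differences people debate here come from training dynamics (e.g. loss weighting across timesteps), not from any extra expressive power.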
This v-pred01 and v2.0 are totally different.
Use it and you'll see.
@WAI0731 v-pred rocks its skill issue what separates the great artists
V is more "obedient". For example, an action like "step on a car" just won't come out on eps NOOB or other models no matter what, but on V it comes out easily.
@brahianvalles bruh you aint a artist, just shut up
This updated V-prediction version works great! Thanks, boss! 🙇♂️
V-pred-01 CFG Scale: 7
This is a great model, but the colors are saturated. Why?
The environment is Reforge.
If I set CFG Scale: to 5 or lower, the colors become more subdued, but some prompts are ignored.
I wonder if the number of tags or something is affecting this?
Try changing the scheduler?
@WAI0731 I changed the scheduler but it didn't help. I tried the same prompt as the sample Satono Diamond illustration, but this also caused color saturation when the CFG scale was 6 or higher.
Sampler: Euler a. I ran multiple schedulers and they all saturated. Hmmm.
@nobodyeight Which do you use? ComfyUI, Forge, or AUTOMATIC1111?
@WAI0731 reForge
@nobodyeight I think it might be an issue with reForge. It works fine when I test it on A1111 and ComfyUI. I haven't tried reForge.
@WAI0731 I found the cause.
With Kohya HRFix Integrated on, color saturation occurred when the CFG scale was 6 or higher.
This is a reForge feature, so it's not an issue with the model.
However, it doesn't happen with other models, and this is the first time I've seen this issue.
👍
The V-pred-01 version is simply perfect.👍
Is it better than other Illustrious merges? And in what way?
This model is very nice, but when I try to speed it up using hyper-lora, I sometimes see what looks like light blue polka dots appear on the face, legs, etc. How can I make this pattern not appear and speed it up?
Boss, when will you upload this to lib?
Probably when I'm on vacation; it's true I haven't updated on libi for a long time.
@WAI0731 Boss, will you publish this checkpoint under the Illustrious section or the NoobAI section?
(lib splits Illustrious and NoobAI into two separate sections, and the LoRAs aren't interchangeable between them, which is annoying)
@lij860471694 Haha, too much going on and I forgot about this. I'll upload it within the next couple of days; it should go under NoobAI.
@WAI0731 Boss, any chance you could talk to the lib staff? It's really painful that Illustrious and NoobAI LoRAs aren't interchangeable there.
@lij860471694 I'm not very familiar with the libi team, so it probably wouldn't help.
@WAI0731 OK boss, looking forward to seeing how this checkpoint performs.
@WAI0731 Oh right, boss, please update the PonyXL one too, thanks.
@lij860471694 ok
Any recommended VAE?
The VAE is baked in, but you can also use whichever one you prefer.
WAI, thanks for the unbeatable model! Two small questions: first, when will eps v3 come out? Really looking forward to it! Second, from your own perspective, could you share how your NOOB fine-tune compares with the NSFW IL fine-tune? What are the strengths and weaknesses of each?
I mean the strengths and weaknesses of the models in your opinion, haha~
Perfect model! I'm using v2.0. However, there is a problem with the coloring. For example: if I use a style LoRA on noobai-xl it comes out with the artist's correct colors, but wai-shuffle and wai-nsfw don't; the result looks more artificial.
we need a NOOB eps 1.1 V3, please!
Honestly my favorite vpred model rn
Will this model be updated in the future? I like the coloring.
I think the answer is yes :D
I LOVE YOU
PLEASE HAVE AN UPDATE IN THE FUTURE, MY LORD!
A GGUF version of this model, which can achieve better quality and faster computation than fp8:
https://huggingface.co/btaskel/wai-shuffle-noob-vpred01-sdxl-GGUF
Thanks for this! Do you have any intention to do this with the v-pred-2.0 version as well?
@bluehands02 Yes, I will convert to the GGUF version of the model later
@bluehands02
They will be released here:
https://huggingface.co/btaskel/wai-shuffle-noob-vpred20-GGUF
@Bt_Askel Thanks!
How exactly can a quantized version achieve better quality?
@_Y_ For SDXL, FP16 is unnecessary, and Q8 is usually sufficient to achieve the same quality as FP16. For models that can run on devices with sufficient VRAM, I recommend choosing the quantization method without the K suffix, because it offers higher performance. For devices with insufficient VRAM, you can try the quantization method with the K suffix, which can achieve higher quality than the non-K method.
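To illustrate why Q8 loses so little quality: 8-bit quantization stores each weight as an integer multiple of a float scale, so the worst-case rounding error is half a step. This is a simplified whole-tensor sketch; real GGUF formats use block-wise scales and more elaborate K-quant schemes.

```python
def quantize_q8(weights):
    """Symmetric 8-bit quantization: one float scale + int codes in [-127, 127].
    Simplified sketch; GGUF Q8_0 applies this per 32-weight block."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_q8(codes, scale):
    return [c * scale for c in codes]

weights = [i / 50.0 - 1.0 for i in range(101)]   # toy weights in [-1, 1]
codes, scale = quantize_q8(weights)
restored = dequantize_q8(codes, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# max_err is bounded by scale / 2: about 0.2% of the weight range here
```

With roughly 256 levels available, this per-weight error sits well below the noise the diffusion process itself injects, which is why Q8 is usually indistinguishable from FP16 for SDXL.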
How does your NOOB fine-tune compare with the NSFW IL fine-tune? What are the differences?
The prediction method is different. V-prediction doesn't over-average the colors and layout, and generation is also a bit faster.
Spectacular. Keep it up.