v-pred_v2.0
Carefully Curated Danbooru Dataset (up to September 2025)
Base Model: NoobAI-XL (NAI-XL) V-pred-1.0-Version
When necessary, use Rescale CFG with values up to about 0.7 (especially for NSFW content).
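For reference, Rescale CFG blends plain classifier-free guidance with a version whose standard deviation is rescaled to match the conditional prediction, which counteracts the over-saturation plain CFG tends to cause with v-pred models. A minimal NumPy sketch (the function name and signature are illustrative, not the actual node's API):

```python
import numpy as np

def rescale_cfg(cond, uncond, guidance_scale=5.0, phi=0.7):
    """Classifier-free guidance with std-rescaling.

    phi blends the rescaled prediction with plain CFG; the notes
    above suggest values up to ~0.7.
    """
    # Plain classifier-free guidance
    cfg = uncond + guidance_scale * (cond - uncond)
    # Rescale so the guided prediction's std matches the conditional one
    rescaled = cfg * (cond.std() / cfg.std())
    # Blend: phi=0 gives plain CFG, phi=1 gives fully rescaled output
    return phi * rescaled + (1.0 - phi) * cfg
```

With phi=1.0 the output's standard deviation equals that of the conditional prediction; with phi=0.0 it reduces to ordinary CFG.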
Quality Tags:
very aesthetic > masterpiece > best quality > good quality > normal quality > worst quality
If you feel like it, you can support me on Ko-fi. Thank you.
Online Generator: https://www.seaart.ai/ja/models/detail/a106ce42854f7b7f65523ad10e03caa8
v-pred_v1.1
Dataset: 50k AI-generated images from Aibooru and Pixiv + 10k images from Danbooru
Prompts
people, characters, copyright, style, general tags, rating, masterpiece, best quality, good quality, newest
Negative Prompts
lowres, worst quality, bad quality, bad anatomy, sketch, jpeg artifacts, signature, watermark, old, oldest
Base model: NoobAI-XL (NAI-XL) V-pred-1.0-Version + Illustrious XL 1.0
v-pred_v1.0
Base model: NoobAI-XL (NAI-XL) V-pred-1.0-Version
Use v-prediction and Zero Terminal SNR.
For information on v-prediction and how to use it in ComfyUI, refer to the NOOBAI XL Quick Guide by L_A_X.
As of November 20, checkpoints supporting v-prediction are not available for use in stable-diffusion-webui.
Outside of ComfyUI, you can use the webui's dev branch (likely requiring a .yaml file for each checkpoint) or Forge and reForge.
Additionally, make sure to select "Zero Terminal SNR" under Settings → Sampler parameters → Noise schedule for sampling.
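For background, v-prediction reparameterizes the training target: instead of predicting the noise eps, the network predicts v = alpha_t * eps - sigma_t * x0, from which the clean image can be recovered exactly. A minimal sketch of that relationship, assuming a variance-preserving schedule with alpha_t^2 + sigma_t^2 = 1 (function names are illustrative):

```python
import numpy as np

def v_target(x0, eps, alpha_t, sigma_t):
    # v-prediction target: the model learns v instead of eps
    return alpha_t * eps - sigma_t * x0

def x0_from_v(x_t, v, alpha_t, sigma_t):
    # Recover the clean-image estimate from a v prediction;
    # exact when alpha_t^2 + sigma_t^2 = 1
    return alpha_t * x_t - sigma_t * v
```

Zero Terminal SNR complements this: the noise schedule is adjusted so the final step is pure noise (SNR = 0), which an eps-prediction model cannot represent but a v-prediction model can.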
v3.1
Base model: NoobAI-XL (NAI-XL) Epsilon-pred 1.1-Version
v-pred_v0.1
Base model: NoobAI-XL (NAI-XL) V-pred-0.6-Version
v3.0
Base model: NoobAI-XL (NAI-XL) Epsilon-pred 1.0-Version
v2.1
Base model: NoobAI-XL (noobaiXLNAIXL_epsilonPred075 x 0.7 + noobaiXLNAIXL_epsilonPred05Version x 0.3)
v2.0
Base model: NoobAI-XL (NAI-XL) Epsilon-pred 0.5-Version
v1.0
Base model: Illustrious-XL-v0.1
After training, Lucereon was merged into this model's U-Net with the following per-block weights:
(0,0.3,0.25,0.2,0.15,0.1,0.15,0.2,0.25,0.3,0.3,0,0.05,0.15,0.25,0.3,0.25,0.15,0.05,0)
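Those 20 values presumably weight a block-by-block merge of the U-Net (the common merge-block-weighted scheme; the exact block ordering is an assumption). Conceptually, each block's parameters are linearly interpolated between the two models:

```python
import numpy as np

# Hypothetical sketch of a block-weighted merge: each U-Net block gets
# its own interpolation ratio from the weight list above.
def merge_block(base_params, other_params, ratio):
    """Interpolate one block's parameters: base*(1-r) + other*r.

    ratio=0 keeps the base model's block unchanged;
    ratio=0.3 mixes in 30% of the other model.
    """
    return (1.0 - ratio) * base_params + ratio * other_params
```

A weight of 0 (as at both ends of the list) leaves that block entirely from the base model.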
Comments (15)
I'm not sure what V-prediction and zero terminal snr is, where do I find those and what are they?
I'm no expert and I don't know if I'm using it correctly, but in ComfyUI, search for the "ModelSamplingDiscrete" node (it's part of ComfyUI core), set its sampling method to v_prediction, and add the node to your workflow.
@surenintendo thank you!
Hello, the v-pred checkpoint doesn't work for me :(
I haven't had much experience with AI image generation as a whole, but I think you hit the jackpot with this one. I've never seen a model follow the tags I give it so accurately and consistently while producing good results. I still had some problems with extra fingers/toes, but that could very well be an issue on my end. Thank you for this model!
Can you please share your parameter settings? I'm also a newbie and this model doesn't work for me.
@YNWYWYNWYWNY I mostly just followed the guide for NoobAI and made some small adjustments https://stage.civitai.com/articles/8962
Please, no more v-pred (this goes for all model uploaders); otherwise this is the best model out there (the non-vpred version).
vpred is a straight upgrade to eps thoughbeit
v-prediction and Zero Terminal SNR performed well in my tests over the past few days. In particular, the light and shadow effects with Zero Terminal SNR are more natural.
I just need to retrain the corresponding LoRAs, which took a lot of time (only the training parameters need to change, not the tags or training set; just let the 4090 run again).
The main problem is that the v-prediction / Zero Terminal SNR NoobAI model has not finished training. Because it is still being tested and does not yet have a complete training set, overall performance is slightly inferior and it is not stable enough without a LoRA, but it is clearly better and has more potential for multi-concept reproduction and control of light and shadow.
v-prediction and Zero Terminal SNR are designed to fix common defects of epsilon-prediction SDXL. A more recent observation is that v-prediction works at a much lower CFG and with many fewer steps than an epsilon XL model, with much better fine detail and contrast.
I can't recommend v-pred. V3.0 is the best!!
Agreed.
@DD_inazuma The art style became completely unstable, didn't it?
After testing, I found that a LoRA trained with v-prediction and Zero Terminal SNR on the v-pred model still works well and runs smoothly on the eps-prediction NoobAI/Obsession models.
Some of the more recent observations are that v-prediction works at a much lower CFG and with many fewer steps than an epsilon XL model does, with much better fine details and contrast.
Looking forward to further improvements of the v-pred model.
Any plans for Obsession with Noob 1.1?


