v-pred_v2.0
Carefully Curated Danbooru Dataset (up to September 2025)
Base Model: NoobAI-XL (NAI-XL) V-pred-1.0-Version
When necessary, use Rescale CFG with values up to around 0.7 (especially for NSFW content).
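As a rough illustration of what Rescale CFG does, here is a minimal NumPy sketch of the rescaling step applied after classifier-free guidance (function name and structure are my own; actual UIs implement this internally, e.g. as a RescaleCFG node in ComfyUI):

```python
import numpy as np

def rescale_cfg(cond, uncond, guidance_scale=5.0, rescale=0.7):
    # Standard classifier-free guidance combination.
    cfg = uncond + guidance_scale * (cond - uncond)
    # Rescale the guided prediction toward the standard deviation of
    # the conditional prediction, which counteracts the over-saturation
    # that v-pred models show at higher CFG values.
    rescaled = cfg * (cond.std() / cfg.std())
    # Blend between the rescaled and plain CFG result;
    # rescale=0.0 recovers ordinary CFG.
    return rescale * rescaled + (1.0 - rescale) * cfg
```

With `rescale=0.0` this reduces to plain CFG, so the parameter can be raised gradually until colors stop blowing out.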
Quality Tags:
very aesthetic > masterpiece > best quality > good quality > normal quality > worst quality
If you feel like it, you can support me on Ko-fi. Thank you.
Online Generator: https://www.seaart.ai/ja/models/detail/a106ce42854f7b7f65523ad10e03caa8
v-pred_v1.1
Dataset: 50k AI-generated images from Aibooru and Pixiv + 10k images from Danbooru
Prompts
people, characters, copyright, style, general tags, rating, masterpiece, best quality, good quality, newest
Negative Prompts
lowres, worst quality, bad quality, bad anatomy, sketch, jpeg artifacts, signature, watermark, old, oldest
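A hypothetical example of how the recommended prompt structure above might be assembled in practice (every tag below except the quality and negative tags is illustrative, not taken from the model card):

```python
# Illustrative only: assemble a prompt in the recommended order
# (people, characters, copyright, style, general tags, rating,
# then quality tags), followed by the suggested negative prompt.
prompt = ", ".join([
    "1girl",                      # people
    "hatsune miku", "vocaloid",   # character / copyright
    "watercolor (medium)",        # style
    "smile", "outdoors",          # general tags
    "general",                    # rating
    "masterpiece", "best quality", "good quality", "newest",
])
negative = ("lowres, worst quality, bad quality, bad anatomy, sketch, "
            "jpeg artifacts, signature, watermark, old, oldest")
```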
Base model: NoobAI-XL (NAI-XL) V-pred-1.0-Version + Illustrious XL 1.0
v-pred_v1.0
Base model: NoobAI-XL (NAI-XL) V-pred-1.0-Version
Use v-prediction and Zero Terminal SNR.
For information on v-prediction and how to use it in ComfyUI, refer to the NOOBAI XL Quick Guide by L_A_X.
As of November 20, checkpoints supporting v-prediction are not available for use in stable-diffusion-webui.
Outside of ComfyUI, you can use the webui's dev branch (likely requiring a .yaml file for each checkpoint) or Forge and reForge.
Additionally, make sure to select "Zero Terminal SNR" under Settings → Sampler parameters → Noise schedule for sampling.
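For the webui dev-branch route mentioned above, the per-checkpoint .yaml is a small sidecar file placed next to the checkpoint. A minimal sketch following the Stable Diffusion v-parameterization convention is shown below; treat the exact keys as an assumption to verify against your webui version:

```yaml
# <checkpoint-name>.yaml, placed next to <checkpoint-name>.safetensors
# Minimal sketch; verify the exact keys for your webui version.
model:
  params:
    parameterization: "v"
```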
v3.1
Base model: NoobAI-XL (NAI-XL) Epsilon-pred 1.1-Version
v-pred_v0.1
Base model: NoobAI-XL (NAI-XL) V-pred-0.6-Version
v3.0
Base model: NoobAI-XL (NAI-XL) Epsilon-pred 1.0-Version
v2.1
Base model: NoobAI-XL (noobaiXLNAIXL_epsilonPred075 x 0.7 + noobaiXLNAIXL_epsilonPred05Version x 0.3)
v2.0
Base model: NoobAI-XL (NAI-XL) Epsilon-pred 0.5-Version
v1.0
Base model: Illustrious-XL-v0.1
After training, Lucereon was merged into the U-Net with the following block weights:
(0,0.3,0.25,0.2,0.15,0.1,0.15,0.2,0.25,0.3,0.3,0,0.05,0.15,0.25,0.3,0.25,0.15,0.05,0)
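The merge above can be sketched as a per-block linear interpolation over the U-Net, in the style of block-weighted merge tools. The sketch below is a hypothetical illustration: the helper names are mine, and the exact block ordering the 20 weights map to depends on the merge tool used.

```python
import numpy as np

# Per-block merge weights from the model card (20 values).
BLOCK_WEIGHTS = [0, 0.3, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.3,
                 0.3, 0, 0.05, 0.15, 0.25, 0.3, 0.25, 0.15, 0.05, 0]

def merge_unet(base_blocks, other_blocks, weights=BLOCK_WEIGHTS):
    """Linear interpolation per block: (1 - w) * base + w * other.

    base_blocks / other_blocks: lists of per-block weight tensors,
    one entry per merge block, in the same order as `weights`.
    """
    return [(1.0 - w) * b + w * o
            for b, o, w in zip(base_blocks, other_blocks, weights)]
```

A weight of 0 keeps the base block untouched, so the first and last blocks here come entirely from the trained model.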
Comments (48)
To be honest, I still can't tell the difference between v-pred and eps.
Generally speaking, v-prediction models can render richer colors than eps models, and they are also more sensitive to prompts.
@zeseren In other words, v is better?
@zczcg In theory, yes, but v-pred models are very sensitive to CFG, so getting good results takes some effort tuning parameters.
@zeseren I did know it needs the background prompts adjusted.
I also want to know. Could someone translate, please?
@kiragira24 Zeseren means that v-pred can provide vivid colors and is more responsive to prompts, but its weakness is a lack of background detail when fewer prompts are given.
@zczcg Thz
So excited to see this model got a v-pred version. It works wonderfully; I have no idea how you got it to follow prompts so well, but this is one of the best Illustrious mixes out there <3
Doesn't work for me :( totally black image
Me too. How do I fix this problem?
Same for me when using A1111, but works well when using Forge.
@Tag50 It seems like an incompatibility issue, maybe?
@ApexThunder_Ai You can check the description of another checkpoint for how to handle v-pred models: https://civitai.com/models/833294?modelVersionId=1190596
what cfg rescale value did you use?
This model is perfect ♥
any chance this will be available for use with the on site generator?
This is the best Illustrious model by far, I got to test it briefly, can you bring it back to on-site? 😭
Tried the v-pred version and prompt adherence is great. Like somebody else pointed out in the comments, it's not great with scenery though.
I haven't tried comfy or forge because I like A1111. The V-Pred version won't work on A1111? Or is there a way to make it work on that?
Edit: NVM, installed Forge and it's basically the same thing as A1111 but better lol
A1111 needs the dev version.
Just switch to reForge and never go back. It's the same thing, just faster and better.
rank 1 still
I still think 2.1 was the cleanest stylewise, something started happening after that version with skin especially
yeah the 1.0 vpred version is too shiny
I Love Miyabi. I'll be definitely using it soon.
I was able to train a LoRA at 1152x2048 (using the Kohya_ss GUI: resolution 1536, DIM 128 / alpha 64, PagedAdamW8bit, alpha mask, FP8, gradient accumulation steps 1, batch size 1, 10,000 steps, using 10.3 GB VRAM) and generate at 1624x2160 (FP8) and 1520x2048 (FP16). For testing I used a character cut out from the background, and it was perfectly recreated with great detail. *1624x2160 consumes 12 GB VRAM plus 47 GB of system RAM (commit size 80 GB).
In the description you said you trained on AIBooru outputs. Did you expunge the artist keywords from that dataset, or are they still there (invokable)?
All training images are tagged with artist names. The artist tags come from AIBooru, but in practice much of the dataset was collected from Pixiv and X, and most of it is limited to NAIv3 output.
The text encoder was trained at an extremely low learning rate, so only a few artist tags seem to have an effect. The artist tags were added only to prevent saturation and are not intended to be invokable.
Why can't I use it?
Doesn't work for me either; I get noise every time...
One word: AWESOME
Thought it was shit but then made my best piece ever.
Also, the host image for this model wasn't even made on it. It's probably something incredibly similar that predates it by a few days, but it was not made on v-pred 1.0.
Cool
This is specifically related to v-pred 1.0
The model seems to not work properly. I tried using it locally through InvokeAI and haven't gotten anything beyond completely garbled, noisy messes.
I was able to get v3.1 working without any of the issues though.
Looks like you've got the wrong settings. Go to the NoobAI-XL page; there's a guide on how to use v-pred models (workflow included).
Incredible model, the first time I've been truly impressed with local AI image generation (for anime)
Very good model, (in my use) it works better with lora than noob
Incredible fine-tune of Noob; the v-pred version works way better with LoRAs. It also fixed a lot of the anatomy issues that v-pred Noob had.
I think version 3.1 is broken, or damaged or something. It generates complete nonsense on CPU, with no reproducibility across different hardware. For reference: https://github.com/comfyanonymous/ComfyUI/issues/6652
update wen
Does it need one?
@BingusChungus no but his finetunes are always better than the last so yes
he listened Pog
That timing is crazy lmao