Inpainting model for NoobAI
Trained on NoobAI Eps 1.0. Testing confirms that it works correctly on the whole family of NoobAI-based models (both the eps and v-prediction versions, and even models based on Illustrious).
Put the model in models/controlnet. In ComfyUI, load the image, paint a mask over the region to repair, and use the inpaint preprocessor to fill the masked region with pure black; feed the result to the ControlNet as the condition image to perform inpainting.
When the mask is painted along the image edge, the same setup performs outpainting.
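For anyone scripting this preprocessing step outside ComfyUI, here is a minimal Python sketch of what the inpaint preprocessor does in this setup: fill the masked pixels with pure black and save the result as the condition image. The file names and function name are placeholders.

```python
import numpy as np
from PIL import Image

def make_condition_image(image_path: str, mask_path: str) -> Image.Image:
    """Fill the masked region with pure black, as this model expects.

    The mask is assumed to be a grayscale image in which white (255)
    marks the region to repaint.
    """
    image = np.array(Image.open(image_path).convert("RGB"))
    mask = np.array(Image.open(mask_path).convert("L"))
    image[mask > 127] = 0  # pure black fill for the masked region
    return Image.fromarray(image)

# condition = make_condition_image("input.png", "mask.png")
# condition.save("condition.png")  # feed this to the ControlNet
```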
For now only ComfyUI is supported. In testing, the WebUI automatically fills the masked region with white, which conflicts with the model's requirement that it be pure black. A WebUI extension may follow.
Update 2025-05-11: a WebUI extension is now available.
https://github.com/Wenaka2004/sd-webui-controlnet-wenaka
This is my modified ControlNet extension and it works normally in the WebUI interface (it cannot be enabled at the same time as the original ControlNet; remember to delete or disable the original).
https://github.com/spawner1145/stable-diffusion-webui-forge.git
This fork of Forge WebUI also supports the model.
Credits: @イチゴ^真優 for providing the 110k high-quality dataset and 2×H100 of compute;
@Laxhar for training NoobAI XL;
and @euge for providing the training scripts.
Comments
Praise Wenaka, meow~~~
Wenaka-meow is the greatest 🥵🥵
Wenaka-meow is the greatest
In theory can this be used with Illust checkpoints as well?
I haven't tried it. NoobAI and Illustrious aren't too far apart, so it should work on both; worth a try.
A fantastic ControlNet; it has me absolutely delighted.
What version of NoobAI was this trained on? EPS 1.0 or 1.1? Or something else?
Based on Eps 1.0. I have tested it on the eps and v versions and confirmed that it works normally on all NoobAI-based models (even those based on Illustrious).
@Wenaka_ thanks. I wanted to know the base model for a Control LoRA extraction (it's quite a bit smaller than 4GB even at rank 256 and seems to work fine too)
@sagiciv I don't quite understand what you mean. This is a ControlNet, not a LoRA; it doesn't have parameters like dim that would change how much storage it occupies.
@Wenaka_ I used the StabilityAI ComfyUI nodes to convert the ControlNet into a Control LoRA. They have a node to do that.
Greatness needs no further words.
Tried it and I'm a bit confused. Should I use VAE Encode (for Inpainting) or not? When I repainted with the example workflow, the unmasked parts also changed noticeably. Is that because I used an image from the web rather than one generated with the same model?
Solved this problem: just add a noise mask. See the workflow in the image I posted below. Also note that it still causes some slight detail changes outside the mask, which should be a VAE encoding issue.
@oblevdor Is it possible to solve the slight change by hooking your workflow up with a Crop N Stitch node? It should be very adaptable and not affect anything in this workflow.
Not necessary. The model generates a completely new image from the masked image, so there is a slight color difference from the original. If you want to keep every other part of the original image exactly, you can use it, but take care to handle the seams.
@Pupper The image will inevitably change after VAE encoding and decoding. If you want the area outside the mask to stay 100% consistent, you must use crop-and-stitch post-processing, and if you are going that far anyway, I recommend pairing it with ComfyUI-Impact.
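For anyone who wants the stitch step without extra node packs, a minimal numpy sketch (my own, not ComfyUI-Impact's implementation) that keeps generated pixels inside the mask and original pixels outside; it assumes all three images have the same size.

```python
import numpy as np
from PIL import Image

def stitch_back(original: Image.Image, generated: Image.Image,
                mask: Image.Image) -> Image.Image:
    """Composite: generated pixels inside the mask, original pixels outside."""
    orig = np.array(original.convert("RGB"), dtype=np.float32)
    gen = np.array(generated.convert("RGB"), dtype=np.float32)
    m = np.array(mask.convert("L"), dtype=np.float32)[..., None] / 255.0
    out = gen * m + orig * (1.0 - m)
    return Image.fromarray(out.astype(np.uint8))
```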
Hi, does Crop N Stitch work? If it does, can you share the workflow? I'm a Comfy noob. 😭
@oblevdor I tested a VAE encode/decode round trip with no intermediate processing and composited the output with the original image through the mask. The SDXL VAE's fidelity turned out to be very high, with no trace of mask seams at all, so the VAE is not to blame.
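A sketch of that round-trip test, assuming the diffusers library and the fp16-fixed SDXL VAE checkpoint (both are my assumptions; the commenter didn't say which tools they used):

```python
import torch
from PIL import Image
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor

# An SDXL-compatible VAE; the exact checkpoint here is an assumption.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
).to("cuda")
proc = VaeImageProcessor()  # handles [-1, 1] normalization both ways

pixels = proc.preprocess(Image.open("input.png").convert("RGB"))
with torch.no_grad():
    # Encode, then decode immediately with no intermediate processing.
    latents = vae.encode(pixels.to("cuda", torch.float16)).latent_dist.sample()
    decoded = vae.decode(latents).sample

proc.postprocess(decoded.float(), output_type="pil")[0].save("roundtrip.png")
# Composite roundtrip.png with the original through the mask and compare.
```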
Greatness needs no further words.
Opened a PR in reforge to support this model:
https://github.com/Panchovix/stable-diffusion-webui-reForge/pull/328
Great work btw, works amazingly!!
Thank you for your support of the open-source community.
@Wenaka_ it was a simple change, so glad I could help. I'm repeating myself, but the model you posted is amazing; it solves a lot of issues with using img2img inpaint, configuring masks, etc.
Many thanks!
@LyloGummy Hi, I'm a bit late, but does this work for Forge? I don't know if changing the same files on Forge would do the same as it does on reForge.
@orhay1 Hi, sorry for the late reply. I've scanned through the files on Forge and it should work; you can just copy the green lines from here (https://github.com/Panchovix/stable-diffusion-webui-reForge/pull/328/files) into extensions-builtin/forge_preprocessor_inpaint/scripts/preprocessor_inpaint.py and restart the UI. Let me know if you encounter any issues.
Awesome, thanks! Could you provide a workflow as an example? I find it hard to integrate it into mine.
It seems to work; the non-masked parts are exactly the same, except the colors become darker. Is there a way to fix that issue, or am I doing something wrong?
I've "fixed" the issue by adding a regular img2img inpaint mask, larger than the controlnet mask.
You could also use the Color Match node; it might save you some hassle.
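If you'd rather fix the shift in post, here is a minimal sketch of the per-channel mean/std transfer that color-match nodes typically perform (the function name is mine, not the node's API):

```python
import numpy as np
from PIL import Image

def match_color(source: Image.Image, reference: Image.Image) -> Image.Image:
    """Shift the source's per-channel mean/std toward the reference's."""
    src = np.array(source.convert("RGB"), dtype=np.float32)
    ref = np.array(reference.convert("RGB"), dtype=np.float32)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        src[..., c] = (src[..., c] - s_mean) / s_std * r_std + r_mean
    return Image.fromarray(np.clip(src, 0, 255).astype(np.uint8))

# fixed = match_color(Image.open("output.png"), Image.open("input.png"))
```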
I have the same problem in reForge (it works with a special preprocessor), so I hope it will be corrected in the model.
@wewewew Can you share your "fixed" workflow please🥺?
@wewewew Could you please share the workflow for this fix?
@EX_Lazy_Cat My workflow has way too much stuff in it. Just look up how to do a normal inpaint using the same image and the same mask, but pass the mask through a grow-mask node of a few pixels.
@wewewew Thanks for the reply, but I don't quite understand how the node resizing the mask will affect the colors that are not affected by the mask.
@EX_Lazy_Cat Growing the mask is just to have the normal inpaint on a slightly larger area than the controlnet inpaint; if they're the same size it creates a noticeable edge. You can use the "Grow Mask With Blur" node from KJNodes for a better transition. You can also use the ImageCompositeMasked node with that mask to just paste the original image on top.
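A PIL sketch of that mask-growing step (a stand-in for Grow Mask With Blur, not KJNodes' actual code); the softened mask can then drive a composite like the stitch sketch earlier in the thread:

```python
from PIL import Image, ImageFilter

def grow_mask_with_blur(mask: Image.Image, grow_px: int = 8,
                        blur_px: float = 4.0) -> Image.Image:
    """Dilate a white-on-black mask by grow_px pixels, then feather the edge."""
    grown = mask.convert("L").filter(ImageFilter.MaxFilter(2 * grow_px + 1))
    return grown.filter(ImageFilter.GaussianBlur(blur_px))

# soft_mask = grow_mask_with_blur(Image.open("mask.png"))
```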
It would be great if you could train something like NovelAI's declutter and emotion-editing (Emotion) tools.
That's planned; the dataset is being prepared, but compute is what's lacking.
Good work, thank you very much
Very cool model! Looking forward to the WebUI plugin!
Examples look very good! gonna try for sure.
How can I use this on reForge?
Takeoff! Takeoff! We've finally made it! I'll never have to pay for NovelAI again in my life, my son won't have to pay for NovelAI, and neither will the next eighteen generations!
I wanted to combine this with a YOLO model for targeted repainting, but it had basically no effect: the sampled image still looked like the inpaint preprocessor's output, with the masked area still black. The example workflow works fine, though.
Solved it: a node was connected wrong, and I only just noticed.
@socialdeath Can you share the workflow if you are using comfyui. I am having the same problem.
@diffusional_reactor https://civitai.com/posts/15490059
@socialdeath Thank you so much!
Could you upload a smaller version of the ControlNet?
The ControlNets I usually use are all around 2.3 GB 🙏
A really, really good model! It immediately raises Noob's flexibility by a whole level. I'd happily send the author 50 yuan.
I uploaded an image, and with the same parameters as the example it very often paints the outpainted region as a border. Is that because my ComfyUI is an old version?
Pretty good; it solves a big problem for me. Image-to-image always had the edge problem until SAM came out, but in my experience SAM only gives good results with ADetailer.
This model solves the problem almost perfectly.
The only drawback is that sometimes the image's color shifts a bit (but that's much better than an ugly edge).
When I use this the colors change. Is there a way to keep them consistent with the input image?
That's just how it is. The color shift can't be fixed at the moment; I suggest not using the VAE inpaint encoder and instead starting from an empty latent.
Can you give a simple guide on how to use this on Forge UI?
I already installed the extension and put your ControlNet model in the correct folder.
I just can't understand how to use ControlNet with inpainting.
@asmdz Is it yours? if it is, then cool dude :)
Well, if it's not, I still want to thank you for helping~
I'll try it maybe this weekend, and let you know the result, if it's yours XD
I'm still confused on how to use this. please help
You're amazing... but I still don't quite understand how to use this in the WebUI...
For some reason, once I've used controlnet-wenaka,
even after turning it off, normal generation no longer works and throws this error: RuntimeError: The size of tensor a (128) must match the size of tensor b (256) at non-singl
I have to terminate the whole SD process and reopen it; a normal Reload UI doesn't help, even though the original ControlNet is no longer checked as enabled.
Could you give a example workflow? I think I might be doing something wrong
The example images shown can be dragged straight into ComfyUI to load their workflows.
https://github.com/Wenaka2004/sd-webui-controlnet-wenaka
Can't get this working on the latest version of Forge.
I already disabled the original ControlNet, but no luck.
@asmdz Just saw this, thank you!
@asmdz reforge already have this preprocessor but I still couldn't get the inpainting to work
EDIT: nevermind I got it figured out right after I posted this comment lol
Hello, I'm trying to train an inpaint model that also handles decensoring. It seems to have some effect at first glance, but overall it is far worse than your model. Could I ask you about the dataset, training script, and other details?
The training dataset is 110k high-quality images curated by a fellow enthusiast; the training script is euge-trainer.
If you only need the preprocessor: https://github.com/ATata-name/sd-webui-inpaint-noobaixl-preprocessor.git
I tried both the inpaint and outpaint workflows from the example images, and in both cases the output just fills the masked area with pure white (with only some blurry traces at the mask edges).
I didn't change anything else in the workflow; the checkpoint is noobaiXLNAIXL_epsilonPred11Version.safetensors, and the downloaded ControlNet has a different file name from the one in the workflow (noobaiInpainting_v10.safetensors).
My ComfyUI is the latest version, v0.3.44.
Any idea what might be wrong?
To add: the mask I painted is black.
When it runs, the KSampler turns the masked area pure white from the very first step, and it stays white through the rest of the iterations.
Solved it: I added an Images to RGB node after the Inpaint Preprocessor and it worked. Copied it from somewhere else; no idea why it works...
How did you solve it? Could you share a link?
yuzhengzhang198854 I just looked at the author's example workflow again. At some point the Inpaint Preprocessor's black_pixel_for_xinsir_cn parameter on my end got set to false; changing it back to true fixed it.
Hello, I saw that spawner1145's Forge fork has been merged into the main branch, but I still can't seem to use the ControlNet inpainting feature normally? It just makes the output a white block.
If you mean Forge's main branch, it doesn't seem to have been merged.
It's a real headache. I don't want to fiddle with yet another package, but Forge mainline itself is pretty much stalled. Maybe ComfyUI is the better choice here.
Is there anyway to fix the darkening that happens with Reforge? All the non-masked pixels become darker
I don't understand why it has so few downloads. It's just a bomb. It works just great.
I just dropped your example image into a workflow and tried it. Am I right that this ControlNet model repaints the masked content? I haven't tried painting the mask along the edge yet; if I paint the mask onto the border, does that produce outpainting?
Or can it also be used to add detail to part of an image?
Or is this essentially a simplified, softer-edged inpaint for ComfyUI?
I've found it differs a little from plain inpainting: it slightly changes the parts of the image outside the mask, and it is quite effective at supplementing a picture. For example, I can take an NSFW image, mask the chest, and prompt "bra", and it adds a bra to the character without much awkwardness.
It also seems to add some extra noise to the unmasked parts of the image, whether anime or photorealistic; for anime the noise is slight, but some image detail is lost.
How do you fix images becoming darker? Inpainting is very good, but the image gets darker with each use, which makes the model impractical ;/
I'm also at a loss; I've gone through all the online material about training ControlNets but couldn't find any related issues.
@Wenaka_ here's an example:
https://files.catbox.moe/w23a9d.png
@Wenaka_ same here
Use the VAE Encode (for Inpainting) node before the KSampler.
We need a ComfyUI workflow; it doesn't have a CLIP encoder.
Has anyone seen a good guide on this? I'm kind of stumped((
Load it with the inpaint_noobai preprocessor. No idea where you would get that on its own, but in my experience it is built into some A1111 forks; I've had it available in Forge Classic and Forge Neo.
When I used an incompatible inpainting preprocessor it would just turn white where I masked. Sometimes I still forget.
@Guinquisition It's clear, but what is the difference between this ControlNet model and the standard inpaint feature?
@mifink94 It's noticeably smarter, in regards to the area around the mask and picking up the style.
@Guinquisition thank you a lot)
@Guinquisition Hey I'm facing the same issue with the inpaint area turning white. Using SwarmUI, do you know which InpaintPreprocessor is compatible?
Does it work only in txt2img in a1111 or does it work in img2img inpaint tab too?
Everything works fine, but for some reason the images become somewhat desaturated and look duller
Same here. Has this problem been solved?
@supersuika Yep. That was my mistake. I decided to stop using VAE encode and not take the latent from the source image, since I needed an upscale. In the end I dropped that approach and the problem went away.
Can I ask whether this model can be used on male characters? In my use, repainting female characters works perfectly, but repainting male characters produces cursed results.
Does it not support ChenkinNoob?

