This package contains 900 hand images for use as depth maps with the Depth Library extension and ControlNet.
Usage:
Place the files in the folder \extensions\sd-webui-depth-lib\maps
ControlNet Preprocessor: lineart_realistic, canny, depth_zoe or depth_midas
ControlNet Model: Lineart, Canny or Depth
ControlNet Starting Control Step: 0.1~0.2
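The install step above can be sketched from a terminal. This is a sketch only: the WEBUI path assumes a default A1111 install at ~/stable-diffusion-webui, so adjust it for your setup, and the archive name is hypothetical.

```shell
# Sketch of the "place the files" step, assuming a default A1111
# layout at ~/stable-diffusion-webui (override WEBUI for your install).
WEBUI="${WEBUI:-$HOME/stable-diffusion-webui}"
MAPS="$WEBUI/extensions/sd-webui-depth-lib/maps"
mkdir -p "$MAPS"                        # create the maps folder if missing
# copy the extracted hand folders (pp_hands_*) into the maps directory
cp -r pp_hands_* "$MAPS"/ 2>/dev/null \
  || echo "extract the downloaded zip next to this script first"
```

After copying, reload the webui so the Depth Library tab picks up the new maps.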
You can drag an image's corner to mirror a hand.
Depth Library:
https://github.com/wywywywy/sd-webui-depth-lib
The images originate from kamitokatachi.com, and all rights are reserved to their respective owners. Please support their site! My work consisted of cutting each hand out and making the background transparent so that the images can be read by Depth Library and converted into depth maps by ControlNet. The editing was all done by hand and took quite a while to finish.
License Agreement from kamitokatachi.com:
"The works on this site are licensed under Creative Commons Attribution-NonCommercial 4.0 International License."
Comments (45)
Thanks for sharing!
do this in img2img
Thanks for sharing this. But judging from the folder names, some files seem to be missing: after 'pp-hand-20', the next one is 'pp-hand-23'. May I ask why?
how do u download stuff from the source site
Judging by the reference images, these are not depth maps but 3D-rendered models.
Please use ControlNet preprocessors.
@cloudreadypc Depth preprocessors?
I tested them in img2img inpainting. They work kind of strangely, not the same as the stock depth extension hands. Either I have to use maximum denoising strength or the UI generates transparent or deformed hands. However, using max denoising ruins the scene.
The stock gestures provided in Depth Library are probably easier to recreate because there are more reference images with similar hands that most base models are trained on. It'd be harder for different viewing angles due to the lack of references. I can only recommend trying different preprocessors or base models, as there are just too many contributing factors.
Slight tip for everyone who wants to use this lib: it's better to just use ONE (1) image and let the AI do the other hand; that worked best for me.
Why doesn't it work after I add them under the maps folder path???
I asked the AI how we can teach it to draw better hands. It says we need to train a model based on hand-drawing tutorials and many images of hands. So it seems like you need an AI hand scraper for the internet and a million hand training images. Who knows, maybe it's right.
mv pp_hands_01 h01_palm
mv pp_hands_02 h02_fist
mv pp_hands_03 h03_victory
mv pp_hands_04 h04_thumbup
mv pp_hands_05 h05_pointlax
mv pp_hands_07 h07_palmlax
mv pp_hands_09 h09_dpointlax
mv pp_hands_11 h11_salute
mv pp_hands_12 h12_pointgun
mv pp_hands_17 h17_picklax
mv pp_hands_20 h20_clawlax
mv pp_hands_23 h23_okay
mv pp_hands_27 h27_point
mv pp_hands_34 h34_fistlax
mv pp_hands_38 h38_pointthumb
mv pp_hands_43 h43_pick
mv pp_hands_49 h49_wphone
mv pp_hands_50 h50_wgun
mv pp_hands_53 h53_wglass
mv pp_hands_54 h54_wcup
mv pp_hands_58 h58_wphoneplus
mv pp_hands_62 hh62_ponder
mv pp_hands_63 hh63_begging
mv pp_hands_72 hh72_cross
mv pp_hands_73 hh73_square
mv pp_hands_74 hh74_wbook
mv pp_hands_75 hh75_wcontroller
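The one-by-one renames above can be scripted as a single prefix swap. This is a sketch only: `rename_hands` is a hypothetical helper, not part of the pack, and the descriptive suffixes like `_palm` still have to be chosen by hand.

```shell
# Sketch: swap the pp_hands_NN prefix for hNN in one pass.
# The descriptive suffixes (_palm, _fist, ...) were picked manually
# and cannot be derived automatically.
rename_hands() {
  for d in "$1"/pp_hands_*; do
    [ -e "$d" ] || continue           # glob matched nothing
    mv "$d" "$1/h${d##*pp_hands_}"    # pp_hands_01 -> h01
  done
}
```

Run it as `rename_hands /path/to/maps`, then append the descriptive suffixes afterwards.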
In case there are any other newbs like me: these work perfectly with img2img inpainting. They're not very suitable for txt2img straight out of the box, since they end up changing the entire image composition even on the same seed.
Btw, I use them with the canny preprocessor most times, and it's nearly perfect every time.
Step 1. Generate the image in txt2img, upscaling included.
Step 2. Save that image and load it into the Depth Library workspace as the background image (make sure you set the proper dimensions).
Step 3. Choose your hand model and place it on the background image you just added to the Depth Library tab.
Step 4. Send the Depth Library image to ControlNet, then pre-process it and save the output result.
Step 5. Send your upscaled image to Inpainting.
Step 6. Put the pre-processed output of the hand (depth, canny, whatever you used) into ControlNet on the img2img page. (I find it works well with the inpainting settings at default denoising etc., surprisingly!)
Final step!!!! Generate your image and you're done! A perfect hand most of the time!
Maybe that's obvious, but it wasn't to me at first! So: img2img FTW! :)
How do I place the hand on the BG image? If I choose a hand model from the list, it gets sent to the "Selected" field. If I try to drag and drop the hand model over the BG image, nothing happens.
EDIT: got it just after posting my question xD Just click on the hand model so it is registered as selected. After that you just have to click on "Add" ^^
I'm having little luck with this method. Perhaps I'm misunderstanding what you mean when you say "process" in ControlNet? As for step 6, do I put in the same settings I used to generate the original image, or do I leave it blank?
I'm also wondering what he means by "process" the image, as that's not very specific, so I'm kind of stuck on that part.
@FloorPudding When I say process I mean pre-process (I'll fix that in the instructions). There is a little button that looks like... an explosion? Idk how to describe it, but it pre-processes your ControlNet input and spits out the output that you will save and use.
Is inpainting a model?
How can I make a right hand?
Drag the corner.
In Vlad (SD.Next) I don't have this path: \extensions\sd-webui-depth-lib\maps
Where do I put these?
SD.Next is a non-standard webui. There's no guarantee any automatic1111 extension will work.
@cloudreadypc "Non-standard" by what measure? SD.Next is superior in every way I've seen so far over Automatic1111, and it even supported SDXL before A1111 did. It's worked with every extension except one that I've tried with it... well, perhaps another point lost for lacking this folder... but that's not enough reason to downgrade my entire experience generating images.
@clevnumb I agree with you. Just tested SDXL 1.0 and found ComfyUI 10 times faster than Automatic 1111 with the same settings.
@cloudreadypc Cool!
Can this be used with ComfyUI ?
How do I make my own version of a hand for depth?
To use this pack, you need to download the plugin first.
I have this and Control Net installed, and while I can use Control Net, there is no tab for this as shown in the image on Github. It's shown as installed and there is a checkmark next to it indicating it's active (like all other extensions), and I've restarted and shut down SD several times since I installed this, so it's not just that I didn't reload the UI. Is that tab supposed to be there just from this or is it added by something else?
You need the depth lib extension for stable diffusion webui. https://github.com/jexom/sd-webui-depth-lib
Same for me, I installed the extension from the correct URL. Any solutions yet?
In "URL for extension's git repository", enter this extension: https://github.com/jexom/sd-webui-depth-lib.git
Can someone explain to me what to do? I don't know.
I tried the steps from here: https://github.com/jexom/sd-webui-depth-lib
I wish you could change the shape/thickness of the hands...
The Depth Lib extension is outdated. It crashed my A1111 Stable Diffusion Web UI when I installed it.
Hi. Can you add right hands too? Those solo hands are only left hands. Thanks.
You can mirror the hands
Great collection of pp hands. Should be enough to get most jobs done.
Can you tell me how to use this in ComfyUI?
Unfortunately, the Depth Library doesn't seem to have a ComfyUI custom node.
@cloudreadypc 😭😭😭
Trying to add this to ForgeUI. I found the "maps" and "extensions" directories, but wherever I extract this zip, nothing shows in the "Extensions" tab or the "hands" tab in ControlNet. Thoughts, help, ideas? Do you actually dump all 900 hand files in the directory? The zip creates 75 directories with hands in each.