IP-Adapter Plus, fine-tuned on anime images. The results are not perfect, but they are clearly better than the non-fine-tuned versions when used with anime models.
The base model used for training was Anything V5, but the adapter seems to work with other anime models too.
Note that this is not really a ControlNet, but the 'other' category does not allow '.safetensors' files.
Original IP-Adapter repository: https://github.com/tencent-ailab/IP-Adapter
My 'regular' IP-Adapter fine-tune for anime can be found here: https://civarchive.com/models/302691
The 'plus' version of the IP-Adapter reproduces the correct character more often than the regular version, but it also tends to copy the position/pose of the reference image. This can be useful for e.g. upscaling.
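For diffusers users, here is a minimal sketch of how a fine-tune like this could be loaded. The repo ids, file names, and the scale value are assumptions for illustration, not part of this release; the only detail taken from this page is that the adapter is an SD 1.5 'plus' variant using the CLIP-ViT-H image encoder.

```python
def generate_with_anime_ip_adapter(base_model: str, adapter_file: str,
                                   reference_image_path: str, prompt: str):
    """Sketch: run an SD 1.5 anime checkpoint with a local IP-Adapter Plus
    .safetensors file. All paths/ids are placeholders."""
    import torch
    from diffusers import StableDiffusionPipeline
    from diffusers.utils import load_image
    from transformers import CLIPVisionModelWithProjection

    # The 'plus' IP-Adapter uses the CLIP-ViT-H image encoder; when loading a
    # local .safetensors adapter, the encoder must be supplied explicitly.
    image_encoder = CLIPVisionModelWithProjection.from_pretrained(
        "h94/IP-Adapter", subfolder="models/image_encoder",
        torch_dtype=torch.float16,
    )

    pipe = StableDiffusionPipeline.from_pretrained(
        base_model,  # any SD 1.5 anime model, e.g. an Anything V5 checkpoint
        image_encoder=image_encoder,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Load the fine-tuned adapter weights from a local file in the current dir.
    pipe.load_ip_adapter(".", subfolder="", weight_name=adapter_file)
    pipe.set_ip_adapter_scale(0.7)  # lower this if the pose is copied too strongly

    reference = load_image(reference_image_path)
    return pipe(prompt=prompt, ip_adapter_image=reference,
                num_inference_steps=25).images[0]
```

The imports are kept inside the function so the sketch can be read (and the function defined) without the heavy model downloads happening at import time.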
Description
Fine-tuned on a subset of the danbooru17 dataset.
Comments (7)
Very good results with anime characters.
This model is simply amazing! I'd like to know how much data was used and how long it took to train it...
I can't seem to get it working, but I can use the official IP-Adapter model just fine. What preprocessor should I use? In the tutorial you posted they are using a preprocessor called ip-adapter_clip_sd15, but I don't have that in my ControlNet. What I have are: ip-adapter_auto, ip-adapter_clip_g, and ip-adapter_clip_h; the rest are for FaceID and XL.
Or does it only work with XL models? I am using an SD 1.5-based model.
The CLIP model used is CLIP-ViT-H. I assume that would be "ip-adapter_clip_h".
