Small code optimizations. Removed the angle calculation between the prompt vector and the contextual vector due to its high computational cost and low benefit. The contextual vector can now be disabled by selecting the "mean" setting. Added a second output, designed to connect to the "negative" input, that carries the vector midway between the prompt vector and the contextual vector.
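The midpoint output described above can be sketched as a simple per-component average of the two embeddings. This is a minimal illustration only, assuming the vectors are plain numeric sequences of equal length; the function name `midpoint_vector` is hypothetical and not part of the node's actual code.

```python
def midpoint_vector(prompt_vec, contextual_vec):
    """Return the vector midway between two embedding vectors.

    Hypothetical sketch of the second ("negative") output: the
    element-wise average of the prompt vector and the contextual
    vector, i.e. the point halfway between them in embedding space.
    """
    if len(prompt_vec) != len(contextual_vec):
        raise ValueError("vectors must have the same dimension")
    return [(p + c) / 2.0 for p, c in zip(prompt_vec, contextual_vec)]
```

For example, `midpoint_vector([1.0, 2.0], [3.0, 6.0])` yields `[2.0, 4.0]`, the point equidistant from both inputs along each axis.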
Description
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
Details
Files
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
Mirrors
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
model.safetensors
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors.safetensors
IPAdapter_image_encoder_sd15.safetensors
model.15.safetensors
clip-vision_vit-h.safetensors
sd15-encoder.safetensors
model-vit-h_2.safetensors
vit-h.safetensors
image_encoder_sd1.5.safetensors
ipadapter_image_encoder.safetensors
stylemodel.safetensors
CLIPVision.safetensors
model.fp32.safetensors
