𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
Jina Embeddings V2 paper ↓ https://arxiv.org/abs/2310.19923
0 reply
0 recast
1 reaction

gk
@gk
Very good in terms of context length and model size, but it seems BAAI/bge-large-en-v1.5 (1024-dimensional embeddings, 512-token context) is still leading the Massive Text Embedding Benchmark (MTEB) overall for now.
0 reply
0 recast
0 reaction
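The models being compared in this thread are judged on benchmarks like MTEB, where retrieval tasks score documents by the cosine similarity of their embedding vectors against a query embedding. A minimal sketch of that similarity measure (using toy 4-dimensional vectors in place of the 768- or 1024-dimensional outputs real embedding models produce; no model API is assumed here):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors:
    # dot product divided by the product of their L2 norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for model-produced embeddings.
query = np.array([0.1, 0.3, 0.5, 0.7])
doc_same = np.array([0.2, 0.6, 1.0, 1.4])    # same direction, scaled
doc_opposite = np.array([-0.1, -0.3, -0.5, -0.7])  # opposite direction

print(round(cosine_similarity(query, doc_same), 4))      # 1.0
print(round(cosine_similarity(query, doc_opposite), 4))  # -1.0
```

Because the measure is scale-invariant, only the direction of an embedding matters, which is why many libraries normalize embeddings to unit length and then rank by plain dot product.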