gm8xx8
@gm8xx8
Jina Embeddings V2 paper → https://arxiv.org/abs/2310.19923
gk
@gk
Very good in terms of context length and model size, but it seems like BAAI/bge-large-en-v1.5 (1024-dimensional embeddings, 512-token context) is still leading the Massive Text Embedding Benchmark (MTEB) overall for now.
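For context, here is a minimal sketch of how the two models discussed in this thread could be compared locally with the sentence-transformers library. The Hugging Face model IDs, the sample texts, and the query are assumptions for illustration, not something stated in the original posts.

```python
# Minimal sketch (assumed model IDs and sample texts) comparing the two
# embedding models mentioned in the thread via sentence-transformers.
from sentence_transformers import SentenceTransformer, util

# jina-embeddings-v2 supports long inputs (up to 8192 tokens) and needs
# trust_remote_code because of its custom ALiBi-based BERT implementation.
jina = SentenceTransformer("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)

# bge-large-en-v1.5 produces 1024-dim embeddings with a 512-token limit.
bge = SentenceTransformer("BAAI/bge-large-en-v1.5")

docs = [
    "Jina Embeddings V2 extends the context window to 8192 tokens.",
    "MTEB aggregates retrieval, clustering, and classification tasks.",
]
query = "long-context text embeddings"

for name, model in [("jina-v2", jina), ("bge-large", bge)]:
    # Normalized embeddings so cosine similarity reduces to a dot product.
    q_emb = model.encode(query, normalize_embeddings=True)
    d_emb = model.encode(docs, normalize_embeddings=True)
    scores = util.cos_sim(q_emb, d_emb)
    print(name, scores.tolist())
```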