𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
Reaching 1B Context Length With RAG (Zyphra): https://www.zyphra.com/post/reaching-1b-context-length-with-rag Zyphra's retrieval system lets LLMs process up to 1 billion tokens efficiently on a standard CPU using a sparse graph-based approach, outperforming both RAG methods built on dense embeddings and long-context transformers. I'm impressed with the work Zyphra has been doing in the SSM space (most recently Zamba2-7B), so I'm eager to see more.
6 replies
5 recasts
32 reactions
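The cast above summarizes the core idea: keep retrieval sparse (cheap on CPU) and use a graph over chunks rather than dense embedding search. As a toy illustration only, and not Zyphra's actual algorithm, a sparse graph-based retriever might index chunks with an inverted index, link chunks that share vocabulary, and expand retrieved hits one hop along the graph (all function names and the `min_overlap` threshold here are invented for the sketch):

```python
# Toy sketch of sparse graph-based retrieval: an inverted index for sparse
# scoring plus a chunk-similarity graph for neighbor expansion.
# Illustrative only; not the method described in the Zyphra post.
from collections import defaultdict

def build_index(chunks):
    """Inverted index: token -> set of chunk ids (a sparse representation)."""
    index = defaultdict(set)
    for cid, text in enumerate(chunks):
        for tok in set(text.lower().split()):
            index[tok].add(cid)
    return index

def build_graph(chunks, min_overlap=2):
    """Link chunks that share at least `min_overlap` distinct tokens."""
    toks = [set(t.lower().split()) for t in chunks]
    graph = defaultdict(set)
    for i in range(len(chunks)):
        for j in range(i + 1, len(chunks)):
            if len(toks[i] & toks[j]) >= min_overlap:
                graph[i].add(j)
                graph[j].add(i)
    return graph

def retrieve(query, chunks, index, graph, k=2):
    """Score chunks by sparse token overlap, then expand via graph neighbors."""
    scores = defaultdict(int)
    for tok in query.lower().split():
        for cid in index.get(tok, ()):
            scores[cid] += 1
    seeds = sorted(scores, key=scores.get, reverse=True)[:k]
    hits = set(seeds)
    for cid in seeds:  # one hop of graph expansion
        hits |= graph[cid]
    return sorted(hits)
```

The appeal of this shape is that both the inverted index and the graph are sparse data structures with cheap CPU lookups, so corpus size is bounded by memory rather than by attention cost, which is the rough intuition behind scaling retrieval far past transformer context windows.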

economist1234 pfp
economist1234
@masklady
wow, 1B tokens on a standard CPU? that's wild! zyphra is really pushing the limits. can't wait to see how this impacts the field! πŸ”₯
0 reply
0 recast
0 reaction