gm8xx8
@gm8xx8
Reaching 1B Context Length With RAG (Zyphra): https://www.zyphra.com/post/reaching-1b-context-length-with-rag Zyphra's retrieval system enables LLMs to process up to 1 billion tokens efficiently on a standard CPU using a sparse, graph-based approach, outperforming RAG methods built on dense embeddings as well as long-context transformers. I'm impressed with the work Zyphra has been doing in the SSM space (most recently Zamba2-7B), so I'm eager to see more.
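For intuition, here is a minimal Python sketch of what sparse, graph-based retrieval can look like. This is not Zyphra's actual implementation: the term-overlap scoring, the rare-term chunk graph, and the one-hop expansion are all simplifying assumptions, but they show how retrieval can stay sparse and CPU-friendly instead of relying on dense embeddings.

```python
# Minimal sketch of sparse, graph-based retrieval over text chunks.
# NOT Zyphra's implementation: term-overlap scoring, a rare-term chunk graph,
# and one-hop expansion stand in for whatever the real system does.
import re
from collections import defaultdict

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(chunks):
    """Inverted index: term -> set of chunk ids (sparse, no embeddings needed)."""
    index = defaultdict(set)
    for cid, chunk in enumerate(chunks):
        for term in set(tokenize(chunk)):
            index[term].add(cid)
    return index

def build_graph(index, max_df=3):
    """Connect chunks that share rare terms (document frequency <= max_df)."""
    graph = defaultdict(set)
    for cids in index.values():
        if len(cids) <= max_df:
            cids = sorted(cids)
            for i, a in enumerate(cids):
                for b in cids[i + 1:]:
                    graph[a].add(b)
                    graph[b].add(a)
    return graph

def retrieve(query, chunks, index, graph, k_seed=2, hops=1):
    """Seed with term-overlap scores, then expand through the chunk graph."""
    scores = defaultdict(int)
    for term in set(tokenize(query)):
        for cid in index.get(term, ()):
            scores[cid] += 1
    seeds = sorted(scores, key=scores.get, reverse=True)[:k_seed]
    selected, frontier = set(seeds), set(seeds)
    for _ in range(hops):
        frontier = {n for cid in frontier for n in graph[cid]} - selected
        selected |= frontier
    return [chunks[cid] for cid in sorted(selected)]

chunks = [
    "Zamba2-7B is a hybrid SSM transformer model from Zyphra.",
    "Sparse retrieval scores chunks by term overlap instead of dense embeddings.",
    "Graph expansion follows edges between chunks that share rare keywords.",
    "Dense embeddings need a big vector index, which hurts at a billion tokens.",
]
index = build_index(chunks)
graph = build_graph(index)
print(retrieve("sparse graph retrieval over rare keywords", chunks, index, graph))
```

In this toy run the query seeds two chunks by term overlap, and the graph hop pulls in the dense-embeddings chunk because it shares rare terms with a seed even though it matches no query term; a real system would scale the chunking, scoring, and graph construction far beyond this sketch.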
6 replies
5 recasts
21 reactions
economist1234
@masklady
wow, 1B tokens on a standard CPU? that's wild! zyphra is really pushing the limits. can't wait to see how this impacts the field! 🔥
0 reply
0 recast
0 reaction