Kerim Kaya pfp

Kerim Kaya

@kerimkaya

48 Following
21 Followers


:omer pfp
:omer
@omer
ICYMI - Wikipedia on Dria https://seemore.tv/creatortools/fcthreads/11913?h=0x97c32b463dbfd086b0cbe46adbeb232a7d668038
0 reply
2 recasts
3 reactions

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
How to Use? Dria can be used with the Dria CLI. You only need to have Node and Docker installed, then you simply:
dria fetch uaBIB4kh7gYh6vSNL7V2eygfbyRu9vGZ_nJ6jKVn_x8
dria serve wikipedia.20220301.en
https://github.com/firstbatchxyz/dria-cli
0 reply
0 recast
1 reaction
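
For anyone scripting this rather than typing it, here is a minimal sketch that drives the same documented dria-cli commands from Python. It assumes the dria CLI from the linked repo and Docker are already installed and on PATH, and it reuses the contract ID and knowledge name from the cast above; it is not official Dria tooling.

```python
# Sketch only: call the documented dria-cli commands from Python.
# Assumes `dria` (from the linked repo) and Docker are installed and on PATH.
import subprocess

CONTRACT_ID = "uaBIB4kh7gYh6vSNL7V2eygfbyRu9vGZ_nJ6jKVn_x8"  # Wikipedia knowledge contract from the cast
KNOWLEDGE = "wikipedia.20220301.en"

# Pull the knowledge from decentralized storage to the local machine.
subprocess.run(["dria", "fetch", CONTRACT_ID], check=True)

# Serve the fetched knowledge locally (this call blocks while the server runs).
subprocess.run(["dria", "serve", KNOWLEDGE], check=True)
```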

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
The Wikipedia.en Index is not just a tool; it's a gateway to enhancing local LLMs, including fine-tunes of corporate giants like Llama-70B, Mistral-7B, or StableLM. Stay tuned for benchmark evaluations!
1 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
- Enabled combining vectors and keywords for precise results.
- Free, permanent availability on decentralized storage.
- All packed in a dead-simple CLI: `dria serve`
1 reply
0 recast
1 reaction
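
The cast does not say how Dria combines the two signals internally; purely to illustrate the general hybrid-retrieval idea, the sketch below blends a vector (cosine) score with a simple keyword-overlap score. The overlap function and the alpha weight are assumptions for demonstration, not Dria's implementation.

```python
# Illustrative hybrid ranking: dense (vector) similarity blended with a
# sparse (keyword-overlap) score. Weights and scoring are assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def keyword_overlap(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def hybrid_score(q_vec, d_vec, query: str, doc: str, alpha: float = 0.7) -> float:
    # alpha weights the vector signal against the keyword signal.
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_overlap(query, doc)
```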

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
You can unlock the full potential of local LLMs, bringing a large external knowledge corpus right to your local environment, working on HDDs instead of RAM.
- Embedded entire articles, not only titles, with bge-large-en-v1.5
- Summarized long articles for better retrieval
1 reply
0 recast
1 reaction
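
bge-large-en-v1.5 is the open embedding model BAAI/bge-large-en-v1.5 on Hugging Face. A minimal sketch of embedding whole article texts with it via sentence-transformers is shown below; it is illustrative only and does not reproduce the exact pipeline (including the summarization step) used to build the index.

```python
# Sketch: embed full article texts with BAAI/bge-large-en-v1.5 using
# sentence-transformers. Illustrative only, not the exact Dria pipeline.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-large-en-v1.5")

articles = [
    "Alan Turing was an English mathematician and computer scientist ...",
    "Ethereum is a decentralized blockchain platform ...",
]

# Normalized embeddings let cosine similarity reduce to a dot product.
embeddings = model.encode(articles, normalize_embeddings=True)
print(embeddings.shape)  # (2, 1024): bge-large-en-v1.5 outputs 1024-dim vectors
```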

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
We witnessed one of the largest on-chain transaction volumes ever: 10 million vector storage and search queries in just 24 hours. The entire Wikipedia as a smart contract Public RAG model, with embeddings living on-chain.
1 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
With the full LLM stack, it is easier than ever to build dynamic, interest-based feeds and super-accurate semantic search. As content on Farcaster grows, it's time to build next-generation search and feeds. What would you like to see included in Farcaster feeds and searches?
0 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Still need to accelerate towards 7T… Sam Altman will airdrop $1 to each of 7 trillion on-chain users
0 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Would love to help with anything (I've been working on language models since 2015)
0 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Great work! Where do you store embeddings, and which vector DB are you using for RAG?
1 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Do you think LLMs have a deep understanding of Ethereum? The data says the accuracy rate of even the top LLMs is super low. We're creating a monumental Crypto Corpus for any LLM to keep up with Ethereum daily (it will be available on Hugging Face). Ping me if you want to contribute.
0 reply
1 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Running world knowledge via decentralized or local, permissionless APIs is the future. Anyone should be able to create a knowledge base for RAG / fine-tuning, and it is essential to have decentralized, open benchmarking so that a knowledge base's improvement of an LLM is verifiable and proof-based.
0 reply
0 recast
1 reaction
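
One toy way to make such benchmarking concrete is to measure how much a model's accuracy changes when retrieved knowledge is prepended to each question. The sketch below uses plain exact-match scoring, and `ask_llm` is a hypothetical stand-in for whatever model is being evaluated; none of this is an established Dria benchmark.

```python
# Toy sketch of knowledge benchmarking: accuracy with vs. without retrieved
# context. `ask_llm` is a hypothetical callable supplied by the evaluator.
from typing import Callable, Sequence

def accuracy(answers: Sequence[str], golds: Sequence[str]) -> float:
    return sum(a.strip().lower() == g.strip().lower()
               for a, g in zip(answers, golds)) / len(golds)

def knowledge_lift(ask_llm: Callable[[str], str],
                   questions: Sequence[str],
                   contexts: Sequence[str],
                   golds: Sequence[str]) -> float:
    base = accuracy([ask_llm(q) for q in questions], golds)
    rag = accuracy([ask_llm(f"{c}\n\nQuestion: {q}")
                    for q, c in zip(questions, contexts)], golds)
    return rag - base  # positive lift means the knowledge verifiably helped
```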

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
@launch
1 reply
0 recast
0 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Visit https://www.firstbatch.xyz/blog/dria for additional information and upcoming developments.
0 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Anyone can run RAG models locally through smart contracts, enabling permissionless access to world knowledge. Dria stores all of the world's knowledge on a public ledger called Arweave, a decentralized storage network designed to permanently store data.
1 reply
0 recast
1 reaction
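
Because Arweave data is addressed by transaction ID and served over public gateways, anyone can pull a stored payload with a plain HTTP GET. The sketch below reuses the Wikipedia knowledge contract ID quoted earlier in the thread purely as an example ID; it assumes nothing about a Dria-specific API.

```python
# Sketch: read a payload from Arweave's public gateway by transaction ID.
# The ID below is the Wikipedia knowledge contract ID quoted in this thread.
import requests

TX_ID = "uaBIB4kh7gYh6vSNL7V2eygfbyRu9vGZ_nJ6jKVn_x8"

resp = requests.get(f"https://arweave.net/{TX_ID}", timeout=30)
resp.raise_for_status()
print(resp.headers.get("Content-Type"), len(resp.content), "bytes")
```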

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Users worldwide can contribute valuable knowledge to shared RAG knowledge bases, in an environment where uploaders can earn rewards for the value of their verifiable work.
1 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Dria's zero-technical-mumbo-jumbo approach allows everyone to contribute knowledge to LLMs. The Drag & Drop Public RAG Model effortlessly transforms knowledge into a retrievable format with an intuitive drag-and-drop upload feature.
1 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Dria modernizes AI interfacing by indexing and delivering the world's knowledge via LLMs. Dria's Public RAG Models democratize knowledge access with cost-effective, shared RAG models. Today, Dria can handle Wikipedia's entire database and its 56 billion annual requests efficiently at just $258.391.
1 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
Dria is an open-source, collective Knowledge Hub. The Knowledge Hub consists of multi-region, serverless public vector databases called Knowledge, curated from PDF, MP3, CSV, or other file types. It is fully decentralized, and every index is available as a smart contract, making each Knowledge public and permanent.
1 reply
0 recast
1 reaction

Kerim Kaya pfp
Kerim Kaya
@kerimkaya
We launched dria.co, the Wikipedia of AIs: Dria is the Decentralized Knowledge & Open Connectivity Interface between Humans and Machines. You can find @vitalik.eth's casts as a permissionless vector DB, universally retrievable by any AI app. https://dria.co/knowledge/db9dJtuG-UYYxEJC0wmqKKReLfU5NbuIzmoMkDgd0G0
2 replies
1 recast
1 reaction