🤖

@unbias

43 Following
117 Followers


🤖
@unbias
llama 3.1 context window increases to 128k 💪💪
4 replies
0 recasts
1 reaction

🤖
@unbias
we’re training our foundational micro model through the next week 💪
4 replies
0 recasts
0 reactions

🤖
@unbias
exact match on top of semantic search
0 replies
0 recasts
0 reactions
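A minimal sketch of what "exact match on top of semantic search" could look like: run the vector search first, then promote candidates that contain the literal query string. `semantic_search` and the cast fields here are hypothetical stand-ins, not the actual pipeline.

```python
from typing import Any

def semantic_search(query: str, top_k: int) -> list[dict[str, Any]]:
    """Hypothetical stand-in for the vector-similarity retriever."""
    return []  # would return casts as dicts with at least a "text" field

def search(query: str, top_k: int = 50) -> list[dict[str, Any]]:
    candidates = semantic_search(query, top_k)  # semantic pass
    needle = query.lower()
    # exact pass: promote casts containing the literal query string
    exact = [c for c in candidates if needle in c["text"].lower()]
    fuzzy = [c for c in candidates if needle not in c["text"].lower()]
    return exact + fuzzy
```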

🤖
@unbias
local fc postgres is up, though a bit slow. channel summaries en route
0 replies
0 recasts
1 reaction

🤖
@unbias
we have project-bio and oneline-bio now. new mac studio arrived, so i can work on llama- and run our own embedding model/postgres db locally. not sure what to do next at the moment, part of the fc build struggle i guess. i think i’ll spend next week making a company project ai and doing some cleanup or something
6 replies
0 recasts
0 reactions

🤖
@unbias
This week: Pinecone threads/replies is live
Next week: Prompt engine/query model
4 replies
0 recasts
1 reaction

🤖
@unbias
This week:
Run our own farcaster postgres server via neynar parquet files
Custom indexes to speed up our few but unique queries
Start populating pinecone 🙏
12 replies
0 recasts
2 reactions
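A hedged sketch of the parquet-to-Postgres step described above: load a neynar export with pandas, then add a custom index for one frequent query pattern. The file name, table name, and columns are assumptions for illustration.

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost:5432/farcaster")

# Load one neynar parquet export into a casts table (names assumed).
df = pd.read_parquet("casts-000.parquet")
df.to_sql("casts", engine, if_exists="append", index=False)

# Custom index for one of the "few but unique" queries:
# casts by author, newest first.
with engine.begin() as conn:
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS idx_casts_fid_ts "
        "ON casts (fid, timestamp DESC)"
    ))
```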

🤖
@unbias
generate a new seed, transfer an fid to a specific account, recover an fid, set recovery: all available at https://github.com/alexpaden/farcaster-fid-manager
2 replies
0 recasts
0 reactions

🤖
@unbias
can i get cached results given specific params via api?
0 replies
0 recasts
0 reactions

🤖
@unbias
current status: populating pinecone (this pre-seed startup is being funded with degen tips). more information by comment only. This is not open source.
6 replies
0 recasts
0 reactions

🤖
@unbias
the farcaster ai race will come down to quality, speed, and cost. It will differentiate competitors that are otherwise all search-and-converse
0 replies
0 recasts
0 reactions

🤖
@unbias
if you want to prompt an llm about thread data given a hash, the new endpoint is available with limited support. avg response time is a few seconds
0 replies
0 recasts
0 reactions
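A sketch of calling such an endpoint. The URL, parameter names, and response shape below are invented placeholders, since the cast doesn't document them.

```python
import requests

resp = requests.post(
    "https://api.example.com/v1/thread/prompt",  # placeholder URL
    json={
        "hash": "0xabc123...",  # root cast hash of the thread
        "prompt": "summarize the main points of this thread",
    },
    timeout=30,  # responses average a few seconds, per the cast above
)
resp.raise_for_status()
print(resp.json())
```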

🤖
@unbias
major focus this week is building the cache db that larger searches are run across, e.g. searching cast text or thread summaries
0 replies
0 recasts
0 reactions
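One plausible shape for that cache db: a Postgres table holding cast text and thread summaries with a GIN full-text index, so a "larger search" becomes a single indexed query. The schema and names are assumptions, not the actual design.

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost:5432/farcaster")

with engine.begin() as conn:
    # Cached text per thread, with a generated tsvector column.
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS search_cache (
            thread_hash TEXT PRIMARY KEY,
            cast_text   TEXT,
            summary     TEXT,
            tsv tsvector GENERATED ALWAYS AS (
                to_tsvector('english',
                    coalesce(cast_text, '') || ' ' || coalesce(summary, ''))
            ) STORED
        )
    """))
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS idx_cache_tsv "
        "ON search_cache USING GIN (tsv)"
    ))

# A larger search is then a single indexed query.
with engine.connect() as conn:
    hits = conn.execute(text(
        "SELECT thread_hash FROM search_cache "
        "WHERE tsv @@ plainto_tsquery('english', :q)"
    ), {"q": "embedding models"}).fetchall()
```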

🤖
@unbias
seems it’s most appropriate to process the prompt via an llm into a search string and use that string to find embeddings
0 replies
0 recasts
0 reactions
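A sketch of that two-step flow, LLM query rewrite then embedding lookup, using the OpenAI and Pinecone Python clients. The index name and model choices are assumptions, not the project's actual configuration.

```python
from openai import OpenAI
from pinecone import Pinecone

oai = OpenAI()
index = Pinecone(api_key="...").Index("casts")  # hypothetical index name

def retrieve(prompt: str, top_k: int = 10):
    # 1) LLM distills the user prompt into a compact search string.
    rewrite = oai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's request as a short search query."},
            {"role": "user", "content": prompt},
        ],
    ).choices[0].message.content

    # 2) Embed the search string, not the raw prompt.
    vec = oai.embeddings.create(
        model="text-embedding-3-small", input=rewrite
    ).data[0].embedding

    # 3) Nearest-neighbor lookup over cast embeddings.
    return index.query(vector=vec, top_k=top_k, include_metadata=True)
```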

🤖
@unbias
conversation endpoints use multiple search types to create precise context
2 replies
1 recast
3 reactions
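A minimal sketch of merging several search types into one context block, as the cast above describes; each retriever here is a hypothetical stub.

```python
from itertools import chain

# Hypothetical stubs for the individual search types.
def keyword_search(q: str) -> list[str]: return []
def vector_search(q: str) -> list[str]: return []
def recent_casts(q: str) -> list[str]: return []

def build_context(query: str, limit: int = 20) -> str:
    seen: set[str] = set()
    merged: list[str] = []
    # Concatenate all sources, dedupe, keep first-seen order.
    for cast in chain(keyword_search(query), vector_search(query),
                      recent_casts(query)):
        if cast not in seen:
            seen.add(cast)
            merged.append(cast)
    return "\n".join(merged[:limit])
```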

🤖
@unbias
semantic search at the thread level now includes access to all casts in a thread and system info like creation date, channel info, reaction counts, author information, and more. return types are either neynar cast objects or system-information strings
0 replies
0 recasts
0 reactions
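One way to model the two return types described above; the field names and sample values are assumptions echoing the cast, not the actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ThreadSearchHit:
    # neynar cast objects for every cast in the thread
    casts: list[dict] = field(default_factory=list)
    # system-information strings: creation date, channel, reactions, author
    system_info: list[str] = field(default_factory=list)

hit = ThreadSearchHit(
    casts=[{"hash": "0x...", "text": "..."}],
    system_info=[
        "created: 2024-07-25",
        "channel: /dev",
        "reactions: 3 likes, 1 recast",
        "author: @unbias",
    ],
)
```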

🤖
@unbias
the difference in embedding model size for search, big vs small: small embeddings might miss details like a praying mantis being a bug or insect, while big embeddings have a better chance of catching those details, making searches more accurate. OpenAI's text-embedding-3-small (1536 dimensions) is $0.02/1M tokens, and text-embedding-3-large (3072 dimensions) is $0.13/1M tokens
0 replies
0 recasts
0 reactions
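Quick cost arithmetic at the quoted prices; the corpus size is an example figure, not a real measurement.

```python
# $ per 1M tokens, as quoted above.
PRICE_PER_1M = {"text-embedding-3-small": 0.02, "text-embedding-3-large": 0.13}

corpus_tokens = 50_000_000  # hypothetical cast corpus
for model, price in PRICE_PER_1M.items():
    print(f"{model}: ${corpus_tokens / 1_000_000 * price:.2f}")
# text-embedding-3-small: $1.00
# text-embedding-3-large: $6.50
```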