🤖
@unbias

43 Following
114 Followers


🤖
@unbias
recently rebuilt our data transformation engine; currently embedding all of farcaster, then doing some nifty stuff after
1 reply
0 recasts
0 reactions
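
As a rough illustration of what embedding all of farcaster could involve, here is a minimal batch-embedding sketch. The embedding model, index name, and cast fields are assumptions for illustration, not the actual pipeline.

```python
from openai import OpenAI
from pinecone import Pinecone

# Model, index name, and cast fields below are illustrative assumptions.
client = OpenAI()  # assumes OPENAI_API_KEY is set
index = Pinecone(api_key="YOUR_API_KEY").Index("farcaster-casts")

def embed_casts(casts: list[dict], batch_size: int = 100) -> None:
    """Embed cast text in batches and upsert vectors with metadata."""
    for i in range(0, len(casts), batch_size):
        batch = casts[i : i + batch_size]
        resp = client.embeddings.create(
            model="text-embedding-3-small",
            input=[c["text"] for c in batch],
        )
        index.upsert(
            vectors=[
                (c["hash"], e.embedding, {"fid": c["fid"], "text": c["text"]})
                for c, e in zip(batch, resp.data)
            ]
        )
```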

🤖
@unbias
pinecone data is still available via api! v2 user data coming soon
0 replies
0 recasts
7 reactions

🤖
@unbias
llama 3.1 context window increases to 128k 💪💪
3 replies
0 recasts
0 reactions

🤖
@unbias
we’re training our foundational micro model through the next week 💪
1 reply
0 recasts
1 reaction

🤖
@unbias
exact match search on top of semantic search
0 replies
0 recasts
0 reactions
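
A sketch of how exact match could sit on top of semantic retrieval: recall broadly from the vector index, then filter for a literal term. `index` and `embed` are stand-ins for a vector index and an embedding function, assumed interfaces rather than the actual @unbias stack.

```python
def hybrid_search(query: str, exact_term: str, index, embed) -> list:
    """Broad semantic recall first, then an exact substring filter for precision."""
    # 1. Semantic recall: over-fetch so the filter has candidates to work with.
    results = index.query(vector=embed(query), top_k=100, include_metadata=True)

    # 2. Exact match: keep only hits whose text literally contains the term.
    return [
        m for m in results.matches
        if exact_term.lower() in m.metadata.get("text", "").lower()
    ]
```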

🤖
@unbias
local fc postgres is up, though a bit slow for now. channel summaries en route
2 replies
0 recasts
0 reactions
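
A sketch of the kind of query a local farcaster postgres replica enables, here pulling recent casts from one channel to feed a summarizer. The connection string, table, and column names are assumed for illustration, not the actual schema.

```python
import psycopg2

# Connection string and schema (a casts table with parent_url/timestamp/text
# columns) are illustrative assumptions.
conn = psycopg2.connect("dbname=farcaster user=postgres")
with conn.cursor() as cur:
    # Fetch recent casts from a single channel, newest first.
    cur.execute(
        """
        SELECT text
        FROM casts
        WHERE parent_url = %s
        ORDER BY timestamp DESC
        LIMIT 200
        """,
        ("https://warpcast.com/~/channel/founders",),  # hypothetical channel
    )
    recent_casts = [row[0] for row in cur.fetchall()]
```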

🤖
@unbias
we have project-bio and oneline-bio now. new mac studio arrived, so i can work on llama and run our own embedding model/postgres db locally. not sure what to do next at the moment, part of the fc build struggle i guess. i think i’ll spend next week making a company project ai and some cleanup or something
3 replies
0 recasts
0 reactions

🤖
@unbias
This week: Pinecone threads/replies is live
Next week: Prompt engine / query model
5 replies
0 recasts
1 reaction

🤖
@unbias
This week:
- Run our own farcaster postgres server via neynar parquet files
- Custom indexes to speed up our few but unique queries
- Start populating pinecone 🙏
6 replies
0 recasts
1 reaction
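
A minimal sketch of one way to get neynar parquet exports into postgres and add a query-specific index. The file path, table name, and indexed columns are assumptions for illustration.

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Connection string, file path, and table/column names are assumptions.
engine = create_engine("postgresql://postgres@localhost/farcaster")

# Load one parquet export into a postgres table.
df = pd.read_parquet("casts-0.parquet")
df.to_sql("casts", engine, if_exists="append", index=False)

# A custom index tuned to one frequent, specific query pattern.
with engine.begin() as conn:
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS idx_casts_fid_ts ON casts (fid, timestamp)"
    ))
```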

🤖
@unbias
generate a new seed, transfer an fid to a specific account, recover an fid, set recovery: all available at https://github.com/alexpaden/farcaster-fid-manager
2 replies
0 recasts
0 reactions

🤖
@unbias
can i get cached results given specific params via api?
0 replies
0 recasts
0 reactions

🤖
@unbias
current status: populating pinecone (this pre-seed startup is being funded with degen tips). more information by comment only. this is not open source.
5 replies
0 recasts
0 reactions

🤖
@unbias
the farcaster ai race will come down to quality, speed, and cost. that is what will differentiate competitors who are otherwise all search and converse
0 replies
0 recasts
0 reactions

🤖
@unbias
if you want to prompt an llm about thread data given a hash, the new endpoint is available with limited support. avg response time is a few seconds
0 replies
0 recasts
0 reactions
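
For illustration, a sketch of calling such an endpoint. The URL, parameter names, and response field are hypothetical stand-ins, not the documented api.

```python
import requests

# Hypothetical endpoint, parameter names, and response shape.
resp = requests.post(
    "https://api.example.com/thread/prompt",  # placeholder URL
    json={
        "hash": "0xabc123...",  # cast hash identifying the thread
        "prompt": "Summarize the main disagreement in this thread.",
    },
    timeout=30,  # responses average a few seconds, per the cast above
)
resp.raise_for_status()
print(resp.json()["answer"])  # assumed response field
```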

🤖
@unbias
major focus this week is building the cache db that larger searches are run across, e.g. searching cast text or thread summaries
0 replies
0 recasts
0 reactions
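
A minimal sketch of a search cache keyed on query params, using sqlite for brevity. The schema, key scheme, and `run_search` callable are assumptions for illustration.

```python
import hashlib
import json
import sqlite3

# Illustrative schema: results cached under a hash of the query params.
conn = sqlite3.connect("search_cache.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS search_cache (key TEXT PRIMARY KEY, results TEXT)"
)

def cached_search(params: dict, run_search) -> list:
    """Return cached results for these params, computing them on a miss."""
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    row = conn.execute(
        "SELECT results FROM search_cache WHERE key = ?", (key,)
    ).fetchone()
    if row:
        return json.loads(row[0])
    results = run_search(**params)  # the expensive search itself
    conn.execute(
        "INSERT OR REPLACE INTO search_cache VALUES (?, ?)",
        (key, json.dumps(results)),
    )
    conn.commit()
    return results
```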

🤖
@unbias
seems it’s most appropriate to run the prompt through an llm to extract a search string, then use that string to find embeddings
0 replies
0 recasts
0 reactions
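
A sketch of that two-step flow with the openai python client: the model rewrites the user prompt into a compact search string, which is then embedded for the vector lookup. Model names and the `index` interface are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def prompt_to_matches(user_prompt: str, index, top_k: int = 10):
    # Step 1: rewrite the conversational prompt into a dense search string.
    rewrite = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Rewrite the user's request as a short search query. Reply with the query only."},
            {"role": "user", "content": user_prompt},
        ],
    )
    search_string = rewrite.choices[0].message.content.strip()

    # Step 2: embed the search string and query the vector index with it.
    emb = client.embeddings.create(
        model="text-embedding-3-small", input=search_string
    )
    return index.query(
        vector=emb.data[0].embedding, top_k=top_k, include_metadata=True
    )
```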

🤖
@unbias
conversation endpoints use multiple search types to create precise context
2 replies
1 recast
3 reactions
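
A sketch of how multiple search types might be merged into one precise context block. The `semantic`, `keyword`, and `exact` retrievers are stand-in callables returning cast dicts with `hash` and `text` keys, assumed shapes rather than the actual endpoint internals.

```python
def build_context(query: str, semantic, keyword, exact, limit: int = 20) -> str:
    """Merge hits from several retrievers into one deduplicated context block."""
    seen: set = set()
    context_lines: list = []
    # Exact and keyword hits first for precision, semantic last for recall.
    for retriever in (exact, keyword, semantic):
        for cast in retriever(query):
            if cast["hash"] in seen:
                continue
            seen.add(cast["hash"])
            context_lines.append(cast["text"])
            if len(context_lines) >= limit:
                return "\n".join(context_lines)
    return "\n".join(context_lines)
```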