Varun Srinivasan
@v
Interesting misconception that Dan flagged today: people think that we are (or could be) using LLMs for spam detection. With the way LLMs work today, that's like using a hammer to cut your fingernails. LLMs are slow, expensive, and don't really have a deep understanding of what spam is in the context of Farcaster. We use a random-forest classifier that @akshaan designed, feeding it a bunch of signals from embeddings, user actions, and graph data.
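The pipeline described above — heterogeneous signals (embeddings, user actions, graph data) flattened into one feature vector and fed to a tree-based classifier — can be sketched roughly like this. Everything here is an illustrative assumption: the feature names, thresholds, and the tiny hand-rolled ensemble of decision stumps standing in for a real trained random forest are hypothetical, not Farcaster's actual model.

```python
# Hypothetical sketch of a signal-based spam classifier. All features,
# thresholds, and votes below are made-up stand-ins; a real random
# forest would learn its splits from labeled data.
from dataclasses import dataclass
from typing import List


@dataclass
class UserSignals:
    embedding: List[float]   # e.g. mean embedding of the user's casts (truncated)
    casts_per_hour: float    # user-action signal
    follower_ratio: float    # graph signal: followers / following


def feature_vector(s: UserSignals) -> List[float]:
    """Flatten heterogeneous signals into one numeric vector."""
    return [*s.embedding, s.casts_per_hour, s.follower_ratio]


# Each "tree" here is a single decision stump:
# (feature index, threshold, vote if feature > threshold).
# The complementary vote (1 - vote) applies otherwise.
STUMPS = [
    (0, 0.9, 1),    # first embedding dim high -> looks spammy (illustrative)
    (2, 20.0, 1),   # very high cast rate -> looks spammy
    (3, 0.05, 0),   # healthy follower ratio -> looks legitimate
]


def classify(s: UserSignals) -> int:
    """Majority vote over the stumps: 1 = spam, 0 = not spam."""
    x = feature_vector(s)
    votes = sum(v if x[f] > t else 1 - v for f, t, v in STUMPS)
    return int(votes * 2 > len(STUMPS))
```

The design point the cast makes is cost: once the feature vector is built, inference over a few dozen shallow trees is microseconds of CPU per user, versus a network round-trip and per-token cost for an LLM call.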
15 replies
5 recasts
54 reactions
MetaEnd🎩
@metaend.eth
An LLM could work pretty well with human training for FC specifics. In terms of costs: if you used the account sign-up fee, every account would basically pay for its own LLM monitoring.
0 replies
0 recasts
0 reactions