Varun Srinivasan
@v
Interesting misconception that Dan flagged today: people think that we are (or could be) using LLMs for spam detection. With the way LLMs work today, that's like using a hammer to cut your fingernails. LLMs are slow, expensive, and don't really have a deep understanding of what spam is in the context of Farcaster. We use a random-forest classifier (an ensemble of decision trees) that @akshaan designed, and feed it a bunch of signals built from embeddings, user actions, and graph data.
15 replies
11 recasts
70 reactions
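
A minimal sketch of what a pipeline like the one @v describes could look like: a random forest trained on a feature vector that concatenates text embeddings, user-action counts, and graph metrics. The feature names, dimensions, and data below are hypothetical; the actual Farcaster signals and labels are not public.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_users = 5_000

# Hypothetical feature groups, concatenated into one vector per account.
text_embedding = rng.normal(size=(n_users, 32))    # e.g. pooled cast embeddings
user_actions = rng.poisson(3, size=(n_users, 4))   # e.g. casts, replies, likes, follows per day
graph_features = rng.random(size=(n_users, 3))     # e.g. PageRank, clustering coeff., follower ratio
X = np.hstack([text_embedding, user_actions, graph_features])

y = rng.integers(0, 2, size=n_users)               # 1 = spam (synthetic labels here)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A random forest is cheap to run at scale compared to an LLM call per account.
clf = RandomForestClassifier(n_estimators=200, max_depth=12, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```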
0xLuo
@0xluo.eth
cc @yassinelanda.eth I’m curious to hear your comment on this.
1 reply
0 recast
2 reactions
Yassine Landa
@yassinelanda.eth
@v is spot on about LLMs. Spam detection is more of a behaviour-analysis exercise, and usually an unsupervised / clustering / anomaly-detection problem. Nevertheless, the next-gen models that are "attention based" have much more classifying power than older methods like tree-based ones, with the former also giving you a lot of explainability and feedback.
0 reply
0 recast
1 reaction
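
A minimal sketch of the unsupervised / anomaly-detection framing Yassine mentions, assuming scikit-learn's IsolationForest and synthetic behavioural features (posting rate, reply ratio, link ratio). All feature names and data are hypothetical, not Farcaster's actual signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Mostly "normal" accounts, plus a small cluster of high-volume, link-heavy accounts.
normal = rng.normal(loc=[5.0, 0.4, 0.1], scale=[2.0, 0.1, 0.05], size=(950, 3))
spammy = rng.normal(loc=[80.0, 0.05, 0.9], scale=[10.0, 0.02, 0.05], size=(50, 3))
X = np.vstack([normal, spammy])  # columns: casts/day, reply ratio, link ratio

# contamination ~ expected fraction of anomalous accounts.
detector = IsolationForest(n_estimators=100, contamination=0.05, random_state=1)
labels = detector.fit_predict(X)  # -1 = anomaly, 1 = normal

print("flagged accounts:", int((labels == -1).sum()))
```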