Dan Romero
@dwr.eth
Here's another way to look at the spam filtering problem. Consider this hypothetical:
1. Let's say you have a new account on the network (Account A) and they reply 100 times to the same account (Account B) with no engagement back.
2. It doesn't actually matter if Account A is run by a human or a bot with AI.
3. If you have 1000 accounts like Account A, Account B will just stop using the app. They'll move to another network or a messaging app.
4. Ideally, Account A would reply thoughtfully a few times and Account B engages with them, and then it increases over time as they build a relationship.
5. If you say "well 100 times is too much, but 50 times is fine", then you're admitting humans can be spammy and we're now arguing over the definition.
6. Side note: I don't think anyone is ready for a world where bots powered by AI are as interesting—or even more interesting—than humans.
20 replies
0 recasts
69 reactions
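For illustration, here is a minimal sketch of the reciprocity heuristic the cast above describes: flag an account whose replies to a single target pile up with little or no engagement coming back. Everything in it (ReplyStats, flag_low_reciprocity, the cutoff of 20) is a hypothetical assumption for this sketch, not Farcaster's or any client's actual spam filter, and note that having to pick a cutoff at all is exactly the definitional problem point 5 raises.

```python
# Hypothetical sketch of a reciprocity-based spam heuristic.
# ReplyStats, flag_low_reciprocity, and the cutoff of 20 are illustrative
# assumptions, not any real app's filtering logic.
from dataclasses import dataclass


@dataclass
class ReplyStats:
    replies_sent: int      # replies Account A sent to Account B
    engagements_back: int  # replies/reactions Account B sent back to A


def flag_low_reciprocity(stats: ReplyStats, max_unreciprocated: int = 20) -> bool:
    """Flag A's behavior toward B as spam-like when replies keep growing
    with essentially nothing coming back.

    The cutoff is arbitrary -- as point 5 notes, any fixed number just
    restates the argument over where spam begins.
    """
    if stats.engagements_back == 0:
        return stats.replies_sent > max_unreciprocated
    # Some reciprocity exists: let volume grow with the relationship (point 4).
    return stats.replies_sent / stats.engagements_back > max_unreciprocated


# The hypothetical from the cast: 100 replies, zero engagement back.
print(flag_low_reciprocity(ReplyStats(replies_sent=100, engagements_back=0)))  # True
# A few thoughtful replies that get engaged with are left alone.
print(flag_low_reciprocity(ReplyStats(replies_sent=5, engagements_back=3)))    # False
```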

elle
@riotgoools
if we get to the top left quadrant, then accounts a & b will both be bots and humans will be relegated to being spectators of machine generated high level content. that is until the novelty wears off and we get bored and move onto something else bc humans are fickle like that ¯\_(ツ)_/¯ (some ppl may become addicted tho)
2 replies
0 recasts
3 reactions

Dan Romero
@dwr.eth
yeah, going to get sci fi pretty quick
2 replies
0 recasts
3 reactions

Joe Toledano
@joetoledano
I'd imagine that (for ad revenue reasons) some apps would also expand their account verification criteria beyond just an email and phone number. Seems like a pretty good zkTLS use case
0 replies
0 recasts
1 reaction