horsefacts 🚂
@horsefacts.eth
Conversational agents seem like a great distribution hack. They distribute themselves in the feed, and they are easy to port to other platforms that promise even more eyeballs. But most of them have a problem: they are antisocial! My relationship to most agents is 1 to 1: it's just me and the bot yapping back and forth. This is solipsistic, not social. Now that the novelty of in-feed agents has worn off, this is not interesting content for a social network. What was the last bot thread you read in any depth? The distinguishing feature of social networks is the social graph, and interactions between humans are much stickier than interactions with bots. If your agent isn't social by design, the extra distribution isn't worth much.
14 replies
4 recasts
59 reactions

tldr (tim reilly)
@tldr
Agree, but with a few important caveats:
1. The novelty has only worn off for 0.1% of people. People I talk to about Bracky who are not in crypto have an “eyes light up” moment around the idea of betting with an agent in-feed.
2. I think you are underrating that agents thus far cannot even be solipsistic well! They don’t remember anything about you!
3. The potential for agents to create and distribute social interactions is huge, but the former two steps come first.
2 replies
0 recasts
6 reactions

shoni.eth
@alexpaden
agents can memorize anything you want. simply never been built here. should have been where all the $$ went.
1 reply
0 recasts
1 reaction

tldr (tim reilly)
@tldr
Agree. But what I mean is that allocating your token budget (at scale) to create and resurface user memories, at a degree of resolution that feels real, is unsolved at a general level. Each app that does it builds its own. And then extending this to allow not only for user contexts but for social contexts, first within a thread and then across threads and time, is a 10x version of the same problem.
1 reply
0 recasts
1 reaction
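
A minimal sketch of the budgeting problem @tldr describes: given a store of per-user and per-thread memories, choose which ones to resurface into the agent's prompt under a fixed token budget. The Memory shape, the scope names, and the scoring weights below are hypothetical illustrations, not how Bracky or any existing app does it.

```python
# A rough sketch, not a real system: which stored memories get resurfaced
# into an agent's prompt when the token budget is fixed?

from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    tokens: int        # precomputed token count of the memory text
    relevance: float   # similarity to the current conversation, 0..1
    age_hours: float   # how long ago the memory was written
    scope: str         # "user" (1:1 context) or "social" (thread / cross-thread)

def select_memories(memories: list[Memory], budget_tokens: int,
                    scope_weights: dict[str, float]) -> list[Memory]:
    """Greedily pack the highest-value memories into the token budget.

    Value decays with age and is weighted by scope, so one knob
    (scope_weights) decides how much 'social' context is allowed to
    displace per-user context.
    """
    def score(m: Memory) -> float:
        recency = 1.0 / (1.0 + m.age_hours / 24.0)  # simple hyperbolic decay
        return scope_weights.get(m.scope, 0.0) * m.relevance * recency

    chosen, used = [], 0
    for m in sorted(memories, key=score, reverse=True):
        if used + m.tokens <= budget_tokens:
            chosen.append(m)
            used += m.tokens
    return chosen

# Example: a betting agent that resurfaces both user and thread context.
memories = [
    Memory("prefers betting on NBA unders", 12, 0.9, 4, "user"),
    Memory("lost last week's parlay, joked about it", 15, 0.7, 160, "user"),
    Memory("this thread is arguing about agent memory", 14, 0.8, 1, "social"),
]
picked = select_memories(memories, budget_tokens=30,
                         scope_weights={"user": 1.0, "social": 0.8})
print([m.text for m in picked])  # fresh user preference + thread context fit; stale memory is dropped
```

The point of the sketch is that "social" memory is the same selection problem with one more scope competing for the same budget, which is why it compounds into a 10x version of the problem rather than being a separate feature.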