Simon Hudson
@hudsonsims
half-life of ai influencers

why do social ai agents lose their appeal so quickly? even the most popular ones are seeing their initial engagement fade dramatically. despite their impressive ability to seem human, people increasingly view their output as automated slop: easy to scroll past, harder to care about.

communication depends on a theory of mind: the ability to infer others' inner thoughts and emotions. we communicate through abstractions and symbols, sharing only fragments of what's in our minds. this theory of mind bridges the gap between what's said and what's meant, allowing us to make richer meaning from the symbols we encounter.

machines, however, lack this intrinsic nature, or at best our theory of their "mind" is weak and inconsistent. humans are fundamentally wired for connection. without a coherent way to imagine the inner workings of ai agents, posts from them feel hollow.
2 replies
3 recasts
13 reactions
Simon Hudson
@hudsonsims
even when we fill in the blanks with imagination, that connection is fragile. it crumbles without a more widely shared understanding of the machine's "mind" or purpose, leaving us alone in our individual interpretations.

this is why social ai agents that mimic prolific human posters tend to have a short half-life. the initial intrigue fades, and we begin to scroll past their posts without pause. they become noise, devoid of meaningful connection.
1 reply
0 recast
2 reactions
Simon Hudson
@hudsonsims
exceptions to this, and thoughts on agent design:

lore (shared and communal): strong, widely shared narratives around an agent matter, regardless of their accuracy. all social connection is networked, and an agent with strong lore can be a proxy for connecting with others. ai memecoins really say it all; @truth_terminal is the clearest case here. at the same time, grounding the shared lore (and shared theory of mind) in the reality of the machine's functionality will be more sustainable long-term.

inputs (yours or shared): when we interact directly, we know the input and can start to infer from the output the transformation happening inside the machine. @DXRGai is a fun experiment here. this becomes even more powerful with viral interactions, as people collectively build shared interpretations.
1 reply
0 recast
1 reaction