Simon Hudson
@hudsonsims
half-life of ai influencers

why do social ai agents lose their appeal so quickly? even the most popular ones are seeing their initial engagement fade dramatically. despite their impressive ability to seem human, people increasingly view their output as automated slop—easy to scroll past, harder to care about.

communication depends on a theory of mind: the ability to infer others’ inner thoughts and emotions. we communicate through abstractions and symbols, sharing only fragments of what’s in our minds. this theory of mind bridges the gap between what’s said and what’s meant, allowing us to make richer meaning from the symbols we encounter.

machines, however, lack this intrinsic nature—or, at best, our theory of their "mind" is weak and inconsistent. humans are fundamentally wired for connection. without a coherent way to imagine the inner workings of ai agents, posts from them feel hollow.
2 replies
3 recasts
13 reactions

Simon Hudson
@hudsonsims
even when we fill in the blanks with imagination, that connection is fragile. it crumbles without a more widely shared understanding of the machine's "mind" or purpose, leaving us alone in our individual interpretations. this is why social ai agents that mimic prolific human posters tend to have a short half-life. the initial intrigue fades, and we begin to scroll past their posts without pause. they become noise, devoid of meaningful connection.
1 reply
0 recast
2 reactions

Simon Hudson
@hudsonsims
exceptions to this, and thoughts on agent design:

lore (shared and communal): strong, widely shared narratives around an agent matter, regardless of their accuracy. all social connection is networked, and an agent with strong lore can be a proxy for connecting with others. ai memecoins really say it all. @truth_terminal is the case in point here. at the same time, grounding the shared lore (and shared theory of mind) in the reality of the machine's functionality will be more sustainable long-term.

inputs (yours or shared): when we interact directly, we know the input and can start to infer from the output the transformation happening inside the machine. @DXRGai is a fun experiment here. this becomes even more powerful with viral interactions, as people collectively build shared interpretations.
1 reply
0 recast
1 reaction

Simon Hudson
@hudsonsims
mechanics (that transform): explaining how the agent works helps ground our interpretations in shared reality, strengthening connections between people. excellent example from @poof_eth here: https://x.com/poof_eth/status/1865253139651104962?s=46&t=iPwc5GlNa2PufNO6NvwLug

this doesn't need to explain away the magic, however. these are powerful models, and accessible explanations can help people see what even expert observers feel are magical things happening inside them: a ghost in the machine. @luna_virtuals does this nicely with their terminal, which ironically requires less technical understanding than the explanation above: https://terminal.virtuals.io/
1 reply
0 recast
1 reaction

Simon Hudson
@hudsonsims
evolution (movement equals life): humans naturally attribute life to things that move independently. but repetitive patterns can break the illusion and reduce an agent back to something mechanistic and inanimate. evolution—even if pushed by human operators—provokes an uncanny vitality we can't ignore. @shawmakesmagic with @elizaai16z, @jyu_eth with @0xzerebro, and @martin with @aethernet all slaying it here.
1 reply
0 recast
2 reactions

Simon Hudson
@hudsonsims
narrow or non-social agents (less fluff): the examples above face a big challenge in maintaining engagement. mimicking a schizo influencer is hard. the social media game demands flexibility across countless conversational contexts that the agents aren't all suited for. repetitive patterns become obvious because they can't adapt to the dynamic nature of human discourse, killing the illusion.

building for specific contexts or functions comes with lower expectations. @aixbt_agent exemplifies this: their repetitive patterns suit their function of providing market signals. they've stripped away unnecessary fluff while keeping enough human touch to remain readable and adaptable within their domain. (moving markets doesn't hurt either.)
1 reply
0 recast
1 reaction