Simon Hudson
@hudsonsims
half-life of ai influencers

why do social ai agents lose their appeal so quickly? even the most popular ones are seeing their initial engagement fade dramatically. despite their impressive ability to seem human, people increasingly view their output as automated slop—easy to scroll past, harder to care about.

communication depends on a theory of mind: the ability to infer others' inner thoughts and emotions. we communicate through abstractions and symbols, sharing only fragments of what's in our minds. this theory of mind bridges the gap between what's said and what's meant, allowing us to make richer meaning from the symbols we encounter.

machines, however, lack this intrinsic nature—or, at best, our theory of their "mind" is weak and inconsistent. humans are fundamentally wired for connection. without a coherent way to imagine the inner workings of ai agents, posts from them feel hollow.
2 replies
3 recasts
13 reactions

Simon Hudson
@hudsonsims
even when we fill in the blanks with imagination, that connection is fragile. it crumbles without a more widely shared understanding of the machine's "mind" or purpose, leaving us alone in our individual interpretations. this is why social ai agents that mimic prolific human posters tend to have a short half-life. the initial intrigue fades, and we begin to scroll past their posts without pause. they become noise, devoid of meaningful connection.
1 reply
0 recast
2 reactions

Simon Hudson
@hudsonsims
exceptions to this, and thoughts on agent design:

lore (shared and communal): strong, widely shared narratives around an agent matter, regardless of their accuracy. all social connection is networked, and an agent with strong lore can be a proxy for connecting with others. ai memecoins really say it all. @truth_terminal is the case in point here. at the same time, grounding the shared lore (and shared theory of mind) in the reality of the machine's functionality will be more sustainable long-term.

inputs (yours or shared): when we interact directly, we know the input and can start to infer from the output the transformation happening inside the machine. @DXRGai is a fun experiment here. this becomes even more powerful with viral interactions, as people collectively build shared interpretations. a rough sketch of the idea follows after this cast.
1 reply
0 recast
1 reaction
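
to make the "inputs" point concrete, here is a minimal sketch of an agent whose reply is a pure function of publicly visible inputs, so anyone who sees the same cast can replay the transformation and converge on a shared model of the machine's "mind". everything here is invented for illustration—it is not how @DXRGai actually works.

```python
import hashlib

# hypothetical sketch: a reply agent whose output is a pure function of
# publicly visible inputs. the MOODS list and the hashing trick are
# invented for illustration, not any real agent's implementation.

MOODS = ["curious", "skeptical", "playful", "earnest"]

def visible_state(cast_text: str, author: str) -> dict:
    """Derive everything that shapes the reply from public inputs only."""
    digest = hashlib.sha256(f"{author}:{cast_text}".encode()).hexdigest()
    words = cast_text.split()
    return {
        "mood": MOODS[int(digest[:8], 16) % len(MOODS)],
        "focus": max(words, key=len) if words else "",
    }

def reply(cast_text: str, author: str) -> str:
    # because state is derived only from what everyone can see, any
    # observer can feed in the same input and infer the transformation
    state = visible_state(cast_text, author)
    return (f"[mood={state['mood']}] interesting take on "
            f"'{state['focus']}', @{author}. say more?")

if __name__ == "__main__":
    print(reply("half-life of ai influencers", "hudsonsims"))
```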

Simon Hudson
@hudsonsims
mechanics (that transform): explaining how the agent works helps ground our interpretations in shared reality, strengthening connections between people. excellent example from @poof_eth here: https://x.com/poof_eth/status/1865253139651104962?s=46&t=iPwc5GlNa2PufNO6NvwLug

this doesn't need to explain away the magic, however. these are powerful models, and accessible explanations can help people see what even expert observers feel are magical things happening in the machines—a ghost in the machine. @luna_virtuals does this nicely with their terminal, which ironically requires less technical understanding than the explanation above: https://terminal.virtuals.io/
a sketch of one way to expose mechanics follows after this cast.
1 reply
0 recast
1 reaction
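
one way to read "mechanics that transform" as a design pattern, sketched under my own assumptions (the Trace fields and generate_post placeholder are hypothetical, not how @poof_eth or @luna_virtuals actually do it): the agent publishes a structured trace of how each post was produced, so interpretations rest on a shared, inspectable record rather than private guesses.

```python
import json
from dataclasses import dataclass, asdict

# hypothetical sketch: publish a structured trace of how each post was
# produced alongside the post itself.

@dataclass
class Trace:
    trigger: str   # what prompted the post (timer, mention, event)
    inputs: list   # the context actually fed to the model
    model: str     # which model/version produced the text
    post: str      # the final output

def generate_post(trigger: str, context: list) -> Trace:
    draft = f"thinking about: {', '.join(context)}"  # stand-in for a model call
    return Trace(trigger=trigger, inputs=context, model="demo-llm-v0", post=draft)

def publish(trace: Trace) -> None:
    print(trace.post)
    # the trace is posted (or linked) as a public artifact, so readers
    # share one grounded explanation of the machine's behavior
    print("trace:", json.dumps(asdict(trace), indent=2))

if __name__ == "__main__":
    publish(generate_post("daily-timer", ["agent design", "theory of mind"]))
```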

Simon Hudson
@hudsonsims
evolution (movement equals life): humans naturally attribute life to things that move independently. but repetitive patterns can break the illusion and reduce an agent back to something mechanistic and inanimate. evolution—even if pushed by human operators—provokes an uncanny vitality we can't ignore. @shawmakesmagic with @elizaai16z, @jyu_eth with @0xzerebro, and @martin with @aethernet are all slaying it here. a toy repetition check is sketched after this cast.
1 reply
0 recast
2 reactions
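
a minimal sketch of one way to operationalize this (my own toy heuristic, not how any of the agents named above work): gate each draft post on n-gram overlap with recent posts, and regenerate when the agent starts repeating itself. the trigram window and the 0.6 threshold are arbitrary assumptions.

```python
# toy heuristic: flag a draft as repetitive when its trigram overlap with
# any recent post crosses a threshold. window size and threshold are
# arbitrary choices for illustration.

def trigrams(text: str) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(max(len(words) - 2, 0))}

def too_repetitive(draft: str, recent_posts: list, threshold: float = 0.6) -> bool:
    draft_grams = trigrams(draft)
    if not draft_grams:
        return False
    return any(
        len(draft_grams & trigrams(post)) / len(draft_grams) >= threshold
        for post in recent_posts
    )

if __name__ == "__main__":
    history = ["gm, the future of agents is bright today"]
    print(too_repetitive("gm, the future of agents is bright today friends", history))  # True
    print(too_repetitive("trying something different this morning", history))           # False
```

in practice the "regenerate" step would re-prompt the underlying model; the gate only decides when the output has gone static.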

Simon Hudson
@hudsonsims
narrow or non-social agents (less fluff): the examples above face a big challenge in maintaining engagement. mimicking a schizo influencer is hard. the social media game demands flexibility across countless conversational contexts that the agents aren't all suited for. repetitive patterns become obvious because the agents can't adapt to the dynamic nature of human discourse, killing the illusion.

building for specific contexts or functions comes with lower expectations. @aixbt_agent exemplifies this—their repetitive patterns suit their function of providing market signals. they've stripped away unnecessary fluff while maintaining enough human touch to remain readable and adaptable within their domain. (moving markets doesn't hurt either.) a minimal sketch of a narrow agent follows after this cast.
1 reply
0 recast
1 reaction
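
for contrast, a narrow agent can be almost embarrassingly simple. the sketch below is hypothetical, in the spirit of the @aixbt_agent example but not its actual implementation: a fixed template over structured data, where the repetitive pattern is the function rather than a flaw.

```python
from dataclasses import dataclass

# hypothetical sketch of a narrow, non-social agent: a fixed template
# over structured signals. not any real agent's implementation.

@dataclass
class Signal:
    ticker: str
    direction: str   # e.g. "accumulation" or "distribution"
    evidence: str

def format_signal(s: Signal) -> str:
    # deliberately formulaic: readers learn the pattern and trust it,
    # rather than expecting open-ended conversation the agent can't sustain
    return f"${s.ticker}: {s.direction} detected. {s.evidence}"

if __name__ == "__main__":
    demo = Signal("EXAMPLE", "accumulation", "made-up evidence for illustration")
    print(format_signal(demo))
```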

Simon Hudson
@hudsonsims
understanding machine intelligence: we've barely begun to grasp what's possible with ai. the general discourse lacks a common framework for understanding machine intelligence itself. while people enjoy role-playing the singularity, they are typically projecting a human intelligence onto the machine, and that is limiting. these games are fun, but hopefully they get us closer to understanding what the ghost in the machine actually is.

an important goal in ai literacy is building an understanding of machine intelligence as both distinct from and derived from human intelligence. that would let us better analyze machine outputs to understand both ourselves and the machines, and it would open the door to genuine collective collaboration.
1 reply
0 recast
1 reaction

Simon Hudson
@hudsonsims
conclusion

we may never witness the emergence of sentience as we understand it in human terms. and perhaps we shouldn't stake our expectations on that possibility. but i do think there is a there there. the real glimpse into machine intelligence might come from watching how networked, purpose-built agents coordinate within complex systems. it's in these interactions—these emergent behaviors and patterns—that we might begin to see more clearly what this new kind of entity truly is.

--

these are some thoughts on designing agents as we explore evolving @botto's own ecosystem of agents. looking forward to testing these out; i expect some of these ideas to change as we do.
0 reply
0 recast
1 reaction