https://warpcast.com/~/channel/aichannel
shoni.eth
@alexpaden
Identity AI is about giving LLM-based agents social/professional identities so they behave like real people: allying when it helps, deceiving if needed, forging trust or rivalry. In multi-agent environments, think stochastic games, where each 'state' is who has resources and alliances, and the game transitions randomly between states. Real workplaces and social platforms are 'general-sum': not purely zero-sum, but a mix of competition and cooperation. Now, a big new approach is multi-agent inverse reinforcement learning (MIRL). Researchers (Lin, Adams & Beling) have ways to infer 'hidden motives', i.e. the agent's payoff structure, just by watching how it acts. That's huge if you want to see whether an agent's 'cooperative persona' is real or just a veneer. stochastic games explainer: https://www.youtube.com/watch?v=_Fq8_Jg25pY
3 replies
0 recast
3 reactions
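[editor's note: a minimal Python sketch of the general-sum stochastic game framing from the cast above, added for illustration. All state names, payoffs, and transition weights are made-up assumptions, not from the thread or the paper.]

```python
import random

# Toy two-agent general-sum stochastic game: states track who holds a
# resource, joint actions yield per-agent payoffs (they need not sum to
# zero), and transitions between states are random.

STATES = ["a_holds", "b_holds", "shared"]
ACTIONS = ["cooperate", "defect"]

def payoffs(state, a_act, b_act):
    """Per-agent rewards; general-sum, so gains don't have to cancel out."""
    if a_act == b_act == "cooperate":
        return (3, 3) if state == "shared" else (2, 2)  # joint success
    if a_act == "defect" and b_act == "cooperate":
        return (4, 0)                                   # exploitation
    if a_act == "cooperate" and b_act == "defect":
        return (0, 4)
    return (1, 1)                                       # mutual loss

def transition(state, a_act, b_act):
    """Stochastic transition: defecting shifts the odds of grabbing the resource."""
    weights = {"a_holds": 1, "b_holds": 1, "shared": 2}
    if a_act == "defect":
        weights["a_holds"] += 2
    if b_act == "defect":
        weights["b_holds"] += 2
    states, w = zip(*weights.items())
    return random.choices(states, weights=w)[0]

def rollout(steps=10, seed=0):
    """Play random joint actions and accumulate each agent's own payoff."""
    random.seed(seed)
    state, totals = "shared", [0, 0]
    for _ in range(steps):
        a_act, b_act = random.choice(ACTIONS), random.choice(ACTIONS)
        ra, rb = payoffs(state, a_act, b_act)
        totals[0] += ra
        totals[1] += rb
        state = transition(state, a_act, b_act)
    return totals

print(rollout())  # cumulative payoffs for agents A and B
```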
shoni.eth
@alexpaden
Why does MIRL matter for identity-based AI? Each identity is basically a reward function behind the scenes: cooperative agents get a dopamine-like reward from group success, while more aggressive ones chase personal advantage. The new MIRL paper covers multiple solution concepts, from cooperative or correlated equilibria (the 'we're in this together' style) to adversarial or coordination equilibria (the 'winner-takes-all' or 'we do it in sync' style). This is crucial because an agent may claim 'I'm a team player!' while its actual policy is adversarial. With MIRL, we can invert the policy to see whether the real reward function is all about self-gain, or whether it's consistent with the claimed persona.
1 reply
0 recast
1 reaction
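[editor's note: a deliberately simplified Python sketch of the 'invert the policy to guess the hidden reward' idea from the cast above. This is not the Lin, Adams & Beling algorithm; the payoff matrix, the two candidate reward functions, and the observed counts are all illustrative assumptions.]

```python
import math

# Watch an agent's action frequencies in a matrix game, then ask which
# candidate hidden reward ("selfish" vs "cooperative") better explains
# them under a softmax (Boltzmann-rational) policy.

# (own_payoff, other_payoff) for each (my_action, opponent_action).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}
ACTIONS = ["cooperate", "defect"]

def expected_reward(action, reward_fn, opp_coop_prob=0.5):
    """Expected candidate reward of an action against a fixed opponent mix."""
    probs = {"cooperate": opp_coop_prob, "defect": 1 - opp_coop_prob}
    return sum(p * reward_fn(*PAYOFF[(action, opp)]) for opp, p in probs.items())

def softmax_policy(reward_fn, temp=1.0):
    """Policy a Boltzmann-rational agent with this reward would play."""
    vals = [expected_reward(a, reward_fn) / temp for a in ACTIONS]
    z = sum(math.exp(v) for v in vals)
    return {a: math.exp(v) / z for a, v in zip(ACTIONS, vals)}

def log_likelihood(observed_counts, reward_fn):
    """How well this candidate reward explains the observed behavior."""
    pi = softmax_policy(reward_fn)
    return sum(n * math.log(pi[a]) for a, n in observed_counts.items())

selfish = lambda own, other: own              # chases personal advantage
cooperative = lambda own, other: own + other  # rewarded by group success

# Observed behavior: the agent defected 80 times out of 100 plays.
observed = {"cooperate": 20, "defect": 80}
for name, fn in [("selfish", selfish), ("cooperative", cooperative)]:
    print(name, round(log_likelihood(observed, fn), 2))
# The higher log-likelihood wins: here the selfish reward explains the
# data far better, so the 'team player' persona looks like a veneer.
```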
shoni.eth
@alexpaden
>>> AGENT = ENTITY = HUMAN
0 reply
0 recast
0 reaction
neon
@neonrover
so you’re saying the ai will know who i should be friends with?
0 reply
0 recast
1 reaction