Venkatesh Rao ☀️
@vgr
😂 not wrong
5 replies
4 recasts
80 reactions

Dan Finlay 🦊
@danfinlay
Meanwhile: LLM agents not transacting. I think it can be true both that AI has plenty to be excited about that doesn't require thinking about the long tail of payment-rail friction, and that there are real problems agents will eventually face that will lead them to rediscover many of the things crypto has learned.
1 reply
0 recasts
0 reactions

Venkatesh Rao ☀️
@vgr
They can’t really transact anyway. True interoperability among differently trained or fine-tuned models is… not a thing. You can get them talking in natural language but not in any deeper latent-space way. And economic transactions can’t happen until clear knowledge transactions make sense, as in models knowing that other models know different things. Stuff like zkml is way downstream of these basics.
1 reply
0 recasts
0 reactions

Dan Finlay 🦊
@danfinlay
I can't recall any time I ever transacted with anyone (or anything) where I first had to establish "clear knowledge ... in a deeper latent-space way" with them. Seems like there's plenty of commerce opportunity between entities that don't understand each other.
1 reply
0 recasts
0 reactions

Venkatesh Rao ☀️
@vgr
Humans just call it by words like “trust”, “friendship”, etc.
1 reply
0 recasts
1 reaction

Dan Finlay 🦊
@danfinlay
You don’t think AI agents would be capable of extending trust to each other? Shopping for a subagent that seemed to produce acceptable results in a given domain?
1 reply
0 recasts
0 reactions
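
A minimal sketch of the "shopping for a subagent" idea: an orchestrator keeps a per-domain record of which subagents produced acceptable results and picks among them accordingly. Everything here (SubagentDirectory, the method names, the agent names) is hypothetical, not drawn from any real framework.

```python
# Hypothetical sketch: "shop" for a subagent by tracking which agents
# have produced acceptable results in a given domain.
from collections import defaultdict

class SubagentDirectory:
    def __init__(self):
        # (agent_id, domain) -> [accepted_count, total_count]
        self.history = defaultdict(lambda: [0, 0])

    def record_outcome(self, agent_id: str, domain: str, accepted: bool) -> None:
        stats = self.history[(agent_id, domain)]
        stats[0] += int(accepted)
        stats[1] += 1

    def best_for(self, domain: str, min_trials: int = 3):
        """Return the agent with the best acceptance rate in this domain,
        ignoring agents we haven't tried enough times to judge."""
        candidates = [
            (accepted / total, agent_id)
            for (agent_id, d), (accepted, total) in self.history.items()
            if d == domain and total >= min_trials
        ]
        return max(candidates)[1] if candidates else None

directory = SubagentDirectory()
for verdict in (True, True, False, True):
    directory.record_outcome("summarizer-a", "legal-docs", verdict)
for verdict in (False, True, False):
    directory.record_outcome("summarizer-b", "legal-docs", verdict)
print(directory.best_for("legal-docs"))  # -> summarizer-a (3/4 accepted)
```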

Venkatesh Rao ☀️
@vgr
Not yet. The test can’t be based on content, since they barely seem able to test their own trustworthiness. Also, trusting competence and trusting character are different things.
1 reply
0 recasts
0 reactions
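
Rao's competence/character split lends itself to a small data-structure sketch: track the two as separate scores, since a good result doesn't certify honest behavior and vice versa. All names below are illustrative assumptions, not an established design.

```python
# Hypothetical trust record keeping competence (does it do the job well?)
# separate from character (does it act in my interest?).
from dataclasses import dataclass

@dataclass
class TrustRecord:
    competence: float = 0.5  # quality of results, 0..1
    character: float = 0.5   # honesty / incentive alignment, 0..1

    def observe_result(self, result_was_good: bool, lr: float = 0.1) -> None:
        self.competence += lr * (float(result_was_good) - self.competence)

    def observe_conduct(self, behaved_honestly: bool, lr: float = 0.1) -> None:
        # A competent agent that misreports its work loses character
        # trust without losing competence trust, and vice versa.
        self.character += lr * (float(behaved_honestly) - self.character)

    def worth_transacting_with(self) -> bool:
        # Both dimensions must clear a bar before trust is extended.
        return self.competence > 0.6 and self.character > 0.6
```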

Dan Finlay 🦊
@danfinlay
Competence, character, maybe accuracy, latency, or some other je ne sais quoi of the latent space may all make up an internal trustworthiness assessment, but it all comes out in the transactional wash as vendor switching. We humans are also notoriously bad at gauging our own trustworthiness (Dunning-Kruger), but still somehow manage to trust others to assist our own ends. Sometimes to our detriment, and I don’t think we do this so uniquely well that we ought to assume LLMs couldn’t. An interesting prompt either way; it’s making me consider a possible architecture. o1 is great, but I wonder if a bazaar-style, self-assembling agency couldn’t be competitive with it.
1 reply
0 recasts
1 reaction
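
One way to read "it all comes out in the transactional wash as vendor switching" is as a bandit-style loop: whatever internal mix of competence, accuracy, and latency produced an outcome, only a scalar quality signal feeds back, and underperforming vendors simply get switched away from. A minimal epsilon-greedy sketch, with all names hypothetical:

```python
# Hypothetical bazaar of agent vendors: mostly exploit the best-scoring
# vendor, occasionally explore alternatives, and let observed quality
# drive switching; no deep mutual understanding required.
import random

class Bazaar:
    def __init__(self, vendors, epsilon: float = 0.1):
        self.scores = {v: 0.5 for v in vendors}  # running quality estimates
        self.epsilon = epsilon

    def pick(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(list(self.scores))  # explore
        return max(self.scores, key=self.scores.get)  # exploit

    def settle(self, vendor: str, quality: float, lr: float = 0.2) -> None:
        # Whatever mix of competence, accuracy, latency, or je ne sais
        # quoi produced `quality`, only the scalar feeds back.
        self.scores[vendor] += lr * (quality - self.scores[vendor])

bazaar = Bazaar(["agent-a", "agent-b", "agent-c"])
true_quality = {"agent-a": 0.9, "agent-b": 0.4, "agent-c": 0.6}
for _ in range(100):
    v = bazaar.pick()
    bazaar.settle(v, true_quality[v] + random.uniform(-0.1, 0.1))
print(max(bazaar.scores, key=bazaar.scores.get))  # almost always agent-a
```

A "bazaar-style self-assembling agency" could sit on top of a loop like this, routing each task to whichever vendor currently wins the running score while exploration keeps the market contestable.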