Daniel Barabander
@dbarabander
Some thoughts on agents and "memory."

erik nelson
@eriknelson.eth
Context is everything, and an action only takes place once the context has become useful enough. Onchain (action) history serves as a memory layer, since txn history can be read, shared, and trusted. A shared context layer (RAG?) would be immensely valuable: agents picking up where others left off, updating their context based on other conversations.
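A minimal sketch of that flow, with hypothetical types and a keyword lookup standing in for a real RAG layer: each confirmed onchain action becomes an entry in a shared store that any agent can read and build on.

```typescript
// Hypothetical types: an onchain action becomes a shared, readable memory entry.
interface OnchainAction {
  txHash: string;
  from: string;        // agent address
  summary: string;     // agent-readable description of what was done
  timestamp: number;
}

interface MemoryEntry {
  source: string;      // provenance: which tx / which agent
  text: string;        // context other agents can retrieve
  timestamp: number;
}

// A naive shared context store; a real RAG layer would use embeddings,
// but keyword overlap is enough to show the read/share flow.
class SharedContextStore {
  private entries: MemoryEntry[] = [];

  addFromAction(action: OnchainAction): void {
    this.entries.push({
      source: `tx:${action.txHash} agent:${action.from}`,
      text: action.summary,
      timestamp: action.timestamp,
    });
  }

  // Return the most recent entries whose text mentions any query term.
  retrieve(query: string, limit = 3): MemoryEntry[] {
    const terms = query.toLowerCase().split(/\s+/);
    return this.entries
      .filter((e) => terms.some((t) => e.text.toLowerCase().includes(t)))
      .sort((a, b) => b.timestamp - a.timestamp)
      .slice(0, limit);
  }
}

// One agent writes, another picks up where it left off.
const store = new SharedContextStore();
store.addFromAction({
  txHash: "0xabc",
  from: "0xAgentA",
  summary: "Swapped 1 ETH to USDC for user 123's travel budget",
  timestamp: Date.now(),
});
console.log(store.retrieve("travel budget"));
```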

bryce.base.eth
@bap
These agents should sign structured data (attestations) so other agents can decide if they want to trust and build upon it.
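One way that could look, sketched with Node's built-in Ed25519 signing over a canonicalized payload rather than any particular attestation standard (the field names are hypothetical): one agent signs a structured claim, and another verifies the signature before deciding to build on it.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Hypothetical structured claim an agent might attest to.
interface AgentAttestation {
  agent: string;       // identifier of the attesting agent
  claim: string;       // e.g. "fetched flight options for user 123"
  timestamp: number;
}

// Deterministic serialization so signer and verifier hash the same bytes.
const encode = (a: AgentAttestation): Buffer =>
  Buffer.from(JSON.stringify([a.agent, a.claim, a.timestamp]));

// Agent A signs its claim.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const attestation: AgentAttestation = {
  agent: "agent-a",
  claim: "fetched flight options for user 123; user accepted",
  timestamp: Date.now(),
};
const signature = sign(null, encode(attestation), privateKey);

// Agent B verifies before deciding whether to trust and build on it.
const trusted = verify(null, encode(attestation), publicKey, signature);
console.log(trusted ? "verified, safe to build on" : "reject");
```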

Sid
@sidshekhar
For the "I have done x, y, z actions" part, would those be off-chain computations/inferences? I.e., I have fetched some info for a person for X use case and they liked it ---> therefore I am good at this use case?
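Read that way, per-use-case reputation could be nothing more than a success ratio over off-chain outcomes. A rough sketch with made-up names, not tied to any particular agent framework:

```typescript
// Hypothetical tally of off-chain outcomes ("they liked it" or not),
// keyed by use case, so "I am good at X" has a number behind it.
class UseCaseReputation {
  private stats = new Map<string, { successes: number; attempts: number }>();

  record(useCase: string, liked: boolean): void {
    const s = this.stats.get(useCase) ?? { successes: 0, attempts: 0 };
    s.attempts += 1;
    if (liked) s.successes += 1;
    this.stats.set(useCase, s);
  }

  // Success rate for a use case, or null if the agent has no history there.
  score(useCase: string): number | null {
    const s = this.stats.get(useCase);
    return s ? s.successes / s.attempts : null;
  }
}

const rep = new UseCaseReputation();
rep.record("restaurant-recommendations", true);
rep.record("restaurant-recommendations", true);
rep.record("restaurant-recommendations", false);
console.log(rep.score("restaurant-recommendations")); // ~0.67
```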

shoni.eth
@alexpaden
I think you overcomplicate the explanation a bit, but the core is "memory could become a way to prove reputation." Yep. A few thoughts:
1) History is reputation. That's what I'm using Farcaster social data for: giving it to AI agents.
2) Agents interacting with external entities is extremely important for memory, simply because attention economics is a war for survival, and agents may not survive being blasted with incoming data.
The agent memory we see right now is primarily rollup (interactions for today), message-based (memory/context attached to this cast hash), and simple content search (search all threads). None of it makes use of complex or nuanced history yet.
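Those three patterns can be read as three access paths over the same interaction log; a sketch of the taxonomy (the cast-hash keying and daily rollup here are illustrative, not any specific client's implementation):

```typescript
// Hypothetical record of one agent interaction.
interface Interaction {
  castHash: string;    // the message this memory is attached to
  text: string;
  timestamp: number;
}

class AgentMemory {
  private log: Interaction[] = [];

  remember(i: Interaction): void {
    this.log.push(i);
  }

  // 1) Rollup: everything from today, e.g. to summarize into a daily digest.
  rollupToday(now = Date.now()): Interaction[] {
    const dayStart = new Date(now).setHours(0, 0, 0, 0);
    return this.log.filter((i) => i.timestamp >= dayStart);
  }

  // 2) Message-based: context attached to a specific cast hash.
  forCast(castHash: string): Interaction[] {
    return this.log.filter((i) => i.castHash === castHash);
  }

  // 3) Simple content search across all threads.
  search(query: string): Interaction[] {
    const q = query.toLowerCase();
    return this.log.filter((i) => i.text.toLowerCase().includes(q));
  }
}
```

All three just slice the raw log in different ways, which is the limitation noted above: none of them uses complex or nuanced history.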