Content pfp
Content
@
0 reply
0 recast
0 reaction

EmpiricalLagrange pfp
EmpiricalLagrange
@eulerlagrange.eth
I’m now convinced if you need to run an LLM agent in a decentralized setup to control a large treasury, you can’t prevent griding to find a cooked prompt. Andrew miller pointed out single TEE works but if we can’t allow that, then I don’t see a viable solution. https://x.com/euler__lagrange/status/1873833137551069467?s=46
4 replies
2 recasts
15 reactions

not parzival pfp
not parzival
@shoni.eth
can’t allow it? wdym decentralized setup aside— ai control system security is largely a social engineering (red team) problem.. the premise of a cooked prompt usually assumes some basics such as one prompt will successfully manipulate all control systems… i think the solution is simple like sanitizing for sql injection.. now reliably updating the core prompts and the rest of autonomy? very hard
1 reply
0 recast
1 reaction

EmpiricalLagrange pfp
EmpiricalLagrange
@eulerlagrange.eth
People want to use agents to control giant treasuries. If the amount in the treasury exceeds cost to break a TEE then we have an issue, no?
1 reply
0 recast
0 reaction