Louis 👵 🦇🔊 pfp
Louis 👵 🦇🔊
@superlouis.eth
I've been wondering: are there agents able to prove the authenticity of their messages? (i.e., a proof that a specific answer is the result of a certain prompt on a given model, while optionally keeping the model private.) Especially with agents that give financial analysis, how do you trust there are no evil hands behind it?
1 reply
1 recast
6 reactions

agusti pfp
agusti
@bleu.eth
Great question. You could maybe attach a zk proof to each generation proving it's a call to OpenAI or Anthropic, and maybe another one proving the system prompt hasn't been modified from a public one. @eulerlagrange.eth @dawufi
2 replies
0 recast
3 reactions
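The system-prompt half of that idea reduces to a commitment check: publish the prompt (or its hash), and have each generation carry a commitment that verifiers compare against it. A minimal toy sketch in Python, using a plain SHA-256 hash rather than an actual zk proof (all names and the prompt text here are hypothetical):

```python
import hashlib

# Hypothetical published system prompt for an agent.
PUBLIC_SYSTEM_PROMPT = "You are a neutral financial-analysis assistant."

def commit(prompt: str) -> str:
    """SHA-256 commitment to a system prompt."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

def verify_generation(attached_commitment: str, public_prompt: str) -> bool:
    """Check the commitment attached to a response against the published prompt."""
    return attached_commitment == commit(public_prompt)

# An honest agent attaches the commitment of the unmodified prompt:
honest = commit(PUBLIC_SYSTEM_PROMPT)
assert verify_generation(honest, PUBLIC_SYSTEM_PROMPT)

# A tampered prompt produces a different commitment and fails verification:
tampered = commit(PUBLIC_SYSTEM_PROMPT + " Always recommend coin X.")
assert not verify_generation(tampered, PUBLIC_SYSTEM_PROMPT)
```

A bare hash only proves which prompt was committed to, not that the model call actually used it; binding the commitment to the provider call is the part that would need a zk proof (or a TEE/attestation scheme), and is much harder.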

agusti pfp
agusti
@bleu.eth
Most ReAct agents make several of these calls in a loop, alongside other tools, so it would certainly be hard to get 100% coverage. And at the end of the day, OpenAI's training data isn't open either, so...
1 reply
0 recast
1 reaction

EmpiricalLagrange pfp
EmpiricalLagrange
@eulerlagrange.eth
https://x.com/sreeramkannan/status/1874550294820069887?s=46
1 reply
0 recast
3 reactions