Louis πŸ‘΅ πŸ¦‡πŸ”Š
@superlouis.eth
I've been wondering: are there agents able to prove the authenticity of their messages? (i.e., a proof that a specific answer is the result of a certain prompt on a given model, while optionally keeping the model private.) Especially with agents that give financial analysis, how do you trust there are no evil hands behind it?
1 reply
1 recast
6 reactions

agusti
@bleu.eth
Great question. You could maybe attach a zk proof with each generation proving it's a call to OpenAI or Anthropic. Maybe another one to prove the system prompt hasn't been modified from a public one too. @eulerlagrange.eth @dawufi
2 replies
0 recast
3 reactions
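
A minimal sketch of the commitment half of the idea in the reply above: the agent publishes a hash of its (possibly private) system prompt once, then signs the (system-prompt hash, user prompt, response) triple for each answer so anyone can check that every response was produced under the same committed prompt. This is not a zk proof that a specific model produced the output β€” that part would need something like a zkTLS transcript of the provider API call or a TEE attestation β€” and all names and prompts below are illustrative assumptions, not anything from the thread.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def commit(text: str) -> str:
    """Hash commitment to a prompt; the hash can be published while the text stays private."""
    return hashlib.sha256(text.encode()).hexdigest()


# --- agent side (done once): commit to the system prompt, publish the hash and public key
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()
SYSTEM_PROMPT = "You are a cautious financial-analysis assistant."  # illustrative, may stay private
PUBLISHED_COMMITMENT = commit(SYSTEM_PROMPT)


def attest(user_prompt: str, response: str) -> dict:
    """Sign the (commitment, prompt, response) triple so third parties can verify it later."""
    payload = json.dumps(
        {
            "system_prompt_hash": PUBLISHED_COMMITMENT,
            "user_prompt": user_prompt,
            "response": response,
        },
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": signing_key.sign(payload)}


def verify(attestation: dict) -> bool:
    """Check the signature and that the answer was produced under the published prompt commitment."""
    try:
        public_key.verify(attestation["signature"], attestation["payload"])
    except InvalidSignature:
        return False
    claimed = json.loads(attestation["payload"])["system_prompt_hash"]
    return claimed == PUBLISHED_COMMITMENT


if __name__ == "__main__":
    a = attest("Should I rebalance into bonds?", "Not financial advice, but ...")
    print("attestation valid:", verify(a))  # True
```

A signature plus a hash commitment only proves consistency with a published prompt; hiding the model while proving the generation itself is what the zk-proof approaches mentioned above would have to cover.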