Akshay N
@akshayn
We humans still struggle to detect when other humans are lying, yet we're comfortable blindly trusting AI outputs as-is. In a world where even human truthfulness isn't perfectly verifiable, how can we develop meaningful trust in AI systems?

I want to believe verifiable compute solves this. Teams like Nillion Network and EigenLayer are working to bring it to reality. But verifiable computation, as far as my limited knowledge goes, deals with checking whether the computation/process was executed correctly. Aren't there edge cases here that we need to look at?

I have a dumb example I use to think about this ⬇️

If my training data contains "All birds can fly" and "Penguins are birds", I might compute (with perfect verification) that "Penguins can fly". The computation is logically sound, but the conclusion is false because the initial premise was oversimplified.

So does that mean we also need to verify that the training data itself is correct? Idk
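To make that concrete, here's a minimal sketch: a toy inference engine, not any real verifiable-compute protocol (all names here are illustrative). Every derivation step is mechanically checkable, and a verifier could attest the whole run was executed correctly, yet the conclusion is false because premise 1 is wrong.

```python
# Toy premises encoded as (subject, property/category) pairs.
premises = {
    ("bird", "can_fly"),   # premise 1: "All birds can fly" (false in reality)
    ("penguin", "bird"),   # premise 2: "Penguins are birds" (true)
}

def derive(premises):
    """Forward-chain: if (A, B) and (B, C) are facts, derive (A, C)."""
    facts = set(premises)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (c, d) in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))
                    changed = True
    return facts

facts = derive(premises)
# Valid inference, false conclusion: prints True.
print(("penguin", "can_fly") in facts)
```

A verifier can attest that derive() ran exactly as specified over these inputs, but it has no way to attest that ("bird", "can_fly") is true about the world. Garbage in, verifiably computed garbage out.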