Carlos
@lospoy
Just co-authored a whitepaper on a new way to verify AI model execution, without TEEs, ZK proofs, or trusted parties. It's a mechanism for trustless AI inference in decentralized networks 👇

The problem: In decentralized AI networks, how do you know a node is running the right model? Most proposals rely on trusted hardware or expensive cryptography. We propose something simpler and more scalable:

Different models (even subtly different ones) leave statistical fingerprints in their outputs. By probing a node with known inputs, you can probabilistically verify which model it's running, just from its API outputs :)

This opens the door to decentralized AI networks that are actually verifiable.
- No need for TEEs
- No need for zero-knowledge proofs
Just input/output analysis and crypto-economic incentives to keep nodes honest.

Probabilistic model attestation could be foundational to decentralized AI. Whitepaper: https://arxiv.org/abs/2504.13443 DMs open if you're building in this space. Special thanks to Michael Yuan for the opportunity to collaborate, and to James Snewin for their insightful feedback.