Darryl Yeo 🛠️
@darrylyeo
Why not both! ritual.net
1 reply
0 recast
0 reaction
@king
there's been talk about Quilibrium being primed for AI too 🥵 Cassie mentioned the scaffolding is already there.
1 reply
0 recast
1 reaction
Darryl Yeo 🛠️
@darrylyeo
Huge if true. Bullish on anything @cassie is working on.
1 reply
0 recast
1 reaction
Cassie Heart
@cassie
Pretty straightforward: the biggest thing needed to scale AI over decentralized secure compute is matrix/vector arithmetic primitives, which is exactly what we use for the mixnet of the network. We're essentially pre-vetting its use case for ML by increasing the analytic privacy of the network.
1 reply
0 recast
0 reaction
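To make the overlap Cassie describes concrete, here is a minimal sketch in Go (illustrative only; the function and variable names are not Quilibrium's actual API) showing that a mixnet hop and a neural-network layer reduce to the same primitive: a mixnet shuffles message slots by multiplying a permutation matrix against a vector, and a dense ML layer multiplies a weight matrix against an activation vector.

```go
// Sketch: a mixnet hop and an ML layer share one primitive, the
// matrix-vector product. Names here are illustrative, not Quilibrium's API.
package main

import "fmt"

// matVec computes y = M · x.
func matVec(m [][]float64, x []float64) []float64 {
	y := make([]float64, len(m))
	for i, row := range m {
		for j, v := range row {
			y[i] += v * x[j]
		}
	}
	return y
}

func main() {
	// Mixnet view: a permutation matrix reorders message slots.
	perm := [][]float64{
		{0, 1, 0},
		{0, 0, 1},
		{1, 0, 0},
	}
	msgs := []float64{10, 20, 30} // stand-ins for encrypted message slots
	fmt.Println("shuffled:", matVec(perm, msgs)) // [20 30 10]

	// ML view: a dense layer is a weight matrix applied to activations.
	weights := [][]float64{
		{0.5, -1.0, 0.25},
		{1.5, 0.0, -0.5},
	}
	acts := []float64{1, 2, 3}
	fmt.Println("layer out:", matVec(weights, acts)) // [-0.75 0]
}
```

A network that can already evaluate these products securely (as the mixnet requires) has, in principle, pre-vetted the core operation ML inference needs.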
Darryl Yeo 🛠️
@darrylyeo
Curious how each AI model would be stored/indexed/referenced under the Quilibrium paradigm?
1 reply
0 recast
0 reaction
Cassie Heart
@cassie
There are a few challenges we still need to solve. First, the performance of the primitives will obviously improve over time (ideally, enhanced with CUDA support), and those performance characteristics will necessarily inform the optimal design of how models are translated to fit in the hypergraph store. Hazarding a guess: since hypergraph objects have an inherent limit of 1GB, and many models will likely be larger than that (a cursory glance at the safetensors files on Hugging Face confirms this), their retrieval and execution will likely need a separate "always-live" primitive. Nodes capable of executing the MPC tensor arithmetic would hold these models constantly in memory, and to obtain their reward they would use a different proof model than the broader disk-sampling VDF approach used for the rest of the network.
1 reply
0 recast
0 reaction
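As an illustration of the 1GB object limit Cassie mentions, here is a hedged sketch of how a large model file might be split into content-addressed chunks for a hypergraph-style store. The chunking function, the SHA-256 index scheme, and the file name are assumptions made for illustration, not Quilibrium's actual design.

```go
// Sketch: splitting a model file into <=1 GB chunks, per the hypergraph
// object limit described above. The store API is hypothetical; a real
// node would keep the reassembled tensors resident in memory for the
// "always-live" MPC execution primitive.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

const maxObjectSize = 1 << 30 // 1 GB hypergraph object limit

// chunkModel reads a (possibly multi-GB) safetensors file and returns
// the content hashes of its <=1 GB chunks; the ordered hash list would
// serve as the model's index in the store.
func chunkModel(path string) ([][32]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var hashes [][32]byte
	buf := make([]byte, maxObjectSize)
	for {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			hashes = append(hashes, sha256.Sum256(buf[:n]))
			// a real implementation would persist buf[:n] as one object here
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			return nil, err
		}
	}
	return hashes, nil
}

func main() {
	hashes, err := chunkModel("model.safetensors") // hypothetical path
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("model split into %d object(s)\n", len(hashes))
}
```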