Nadav
@nadav
The emergence of a test-time compute overhang as reasoning models scale makes decentralized inference surprisingly important. If the most useful and insightful things AIs can do require enormous amounts of inference compute, then the highest-leverage work output in the world is censorable (or worse, abusable) by an incredibly small number of people.

Nadav
@nadav
In other words, I always thought “decentralized AI” was a bit of a solution looking for a problem, given OSS models like Llama. But that might no longer be the case.