Vitalik Buterin
@vitalik.eth
https://twitter.com/AndrewYNg/status/1736577228828496179 This is one of those perspectives that feels wise but that I quite disagree with. Consider Covid in Jan 2020. At that time, was it right to focus on (i) actual current realized harm, or (ii) hypotheticals based on projecting exponential functions? Clearly (ii).
15 replies
10 recasts
123 reactions
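To make the exponential-projection point concrete, here is a minimal sketch of the arithmetic (the case count and doubling time are hypothetical placeholders, not actual Jan 2020 data):

```python
# Why projecting an exponential beats counting realized harm early on:
# with a short doubling time, today's small numbers say little about
# where things stand a couple of months out.
cases = 100          # hypothetical confirmed cases today
doubling_days = 4.0  # hypothetical doubling time in days

for week in range(1, 9):
    projected = cases * 2 ** (7 * week / doubling_days)
    print(f"week {week}: ~{projected:,.0f} projected cases")
```

At a 4-day doubling time, 100 cases becomes roughly 1.6 million in eight weeks, which is why the projection, not the realized count, was the right thing to focus on.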

Vitalik Buterin
@vitalik.eth
And I do think AI is similar. If we focus on actual harm, then honestly AI has been much less harmful than even I predicted. Like, if you told someone in 2018 about the capabilities of GPT-4, Stable Diffusion, etc., I expect they would predict: mass unemployment, mass election interference, mass something involving social biases...
4 replies
0 recasts
4 reactions

Vitalik Buterin
@vitalik.eth
And we have seen some degree of bad stuff: deepfakes (both misleading people directly and discrediting real photo/audio evidence indirectly), authoritarian regimes using facial recognition to go after protesters, etc. But it feels on the order of "the kinds of costs that come with any otherwise-mostly-good technology".
3 replies
1 recast
6 reactions

Vitalik Buterin
@vitalik.eth
But it is the projection out to future trends, and specifically to the possibility that the nexus of *high-level planning and intentionality* becomes bot rather than human, that uniquely worries me. We're not there yet, but there are lots of arguments to suggest we're not too far.
4 replies
0 recasts
7 reactions

Kalam
@kalam
Agreed, but it makes sense to push back strongly against calls for regulation coming from companies that spend millions and need ROI. Letting people experiment, rather than letting fear narratives drive regulation, preserves our agency to define new uses and deployments. That's essential to nurturing a collective literacy in how we use AI and what for.
0 replies
0 recasts
0 reactions