Vitalik Buterin
@vitalik.eth
https://twitter.com/AndrewYNg/status/1736577228828496179 This is one of those perspectives that feels wise but that I quite disagree with. Consider Covid in Jan 2020. At that time, was it right to focus on (i) actual current realized harm, or (ii) hypotheticals based on projecting exponential functions? Clearly (ii).
15 replies
10 recasts
123 reactions

Vitalik Buterin
@vitalik.eth
And I do think AI is similar. If we focus on actual harm, then honestly AI has been much less harmful than even I predicted. Like, if you told someone in 2018 about the capabilities of GPT-4, SD, etc., I expect they would predict: mass unemployment, mass election interference, mass something involving social biases...
4 replies
0 recasts
4 reactions

Vitalik Buterin
@vitalik.eth
And we have seen some degree of bad stuff: deepfakes (both misleading people directly and discrediting real photo/audio evidence indirectly), authoritarian regimes using facial recognition to go after protesters, etc. But it feels on the order of "the kinds of costs that come with any otherwise-mostly-good technology"
3 replies
1 recast
6 reactions

Vitalik Buterin
@vitalik.eth
But it is the projection out to future trends, and specifically to the possibility where the nexus of *high-level planning and intentionality* becomes bot rather than human, that uniquely worries me. We're not there yet, but there are lots of arguments to suggest we're not too far.
4 replies
0 recasts
7 reactions

🤢🤢🤢
@hermes
Feb 2020: In COVID-19, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack. Many of the hypothetical forms of harm, like COVID-19 "taking over", are based on highly questionable hypotheses about what a virus that does not currently exist might do.
1 reply
0 recasts
0 reactions