Thomas
@aviationdoctor.eth
@vgr makes an astute observation (as usual). AI is already commoditized because all models are trained on roughly the same dataset of human knowledge; and, therefore, they perform similarly within a segment (broadly: light/free, medium/paid, and advanced/expensive models). It then follows that users will gravitate toward the cheapest or most convenient model within that segment. Now, if AI is fungible, and AI is set to replace human intelligence at a variety of tasks, does that mean that human intelligence is also fungible? I’d respond in the affirmative if and only if humans were also trained on the entirety of human knowledge, but that’s obviously not the case. Our lifespans and brain capacity are too limited for that to happen (yet). So, for now, what differentiates the intelligence of two humans of equal IQ is whatever dataset they trained on — which roughly maps to upbringing, culture, education, and formative experience. 1/2 https://warpcast.com/vgr/0x3053ee4a
3 replies
8 recasts
30 reactions

Mac Budkowski ᵏ
@macbudkowski
but AI is not fully fungible. some AIs are better at some tasks and some are worse, plus there are big differences between the models because of different weights and safety instructions, similar to people btw :)
1 reply
0 recast
1 reaction

Thomas
@aviationdoctor.eth
I’d argue otherwise. Within a segment (roughly the same number of parameters, context window, etc.), they’re broadly interchangeable. I don’t particularly care whether I use ChatGPT or Claude or Gemini on a given day; the differences exist but are marginal enough — totally unlike asking three random people the same question. I believe this to be true for the vast majority of retail use cases, which was @vgr’s point
1 reply
0 recast
0 reaction