vaughn tan

@vt

311 Following
1326 Followers


vaughn tan
i've taught sociology, strategy, research methods (UG, masters, PhD, and exec ed) since 2008 in the US, EU, UK, Asia. in every place i've taught, education seems to be in dire straits. i wrote about what the future of education should be here: https://vaughntan.org/meaningedu

tl;dr: AI tools are increasingly accessible, cheap, and seem potentially able to produce any output a human can produce — they'll certainly reconfigure what work looks like. I agree with this read on AI, except for one important difference: humans can and must do what I call "meaning-making" because AI can't do it yet. Meaning-making consists of making subjective judgments about the relative value of things. Education at all levels, but especially higher education, has largely abandoned teaching students how to make and justify subjective value judgments. To remain relevant, education must reorient around helping students learn what meaning-making is and how to do it well.
1 reply
1 recast
5 reactions

vaughn tan
I've been writing for over a decade about why we should be clear about the difference between "uncertainty" and "risk."

when we call a situation with unknowns "risky", we automatically apply risk-management ways of thinking when deciding how to act: cost-benefit analysis, expected value analysis, etc. these are good when the unknowns can be both precisely AND accurately estimated.

most of the important unknowns today aren't accurately quantifiable like that. and if you can't quantify the unknowns accurately, these risk-management ways of thinking just don't work. worse, they encourage the optimisation instinct, which is fragile when the situation is actually unquantifiably uncertain.

geostrategic unknowns from highly capricious state actors are the definition of unquantifiable uncertainty (to take just one example 😑)

https://vaughntan.org/notknowing
0 reply
1 recast
1 reaction
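To make the contrast in that cast concrete, here is a minimal Python sketch, not from the original post and using made-up numbers, of how expected-value analysis behaves: with a well-quantified bet it gives a defensible answer, but when the probability itself is only a guess, sweeping a range of plausible guesses flips the recommended decision, which is the fragility being described.

def expected_value(outcomes):
    """Expected value: sum of probability * payoff over all possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Risk: the unknowns are quantifiable. A fair-coin bet is well described by EV.
fair_bet = [(0.5, +100), (0.5, -80)]
print(expected_value(fair_bet))  # 10.0 -> positive EV, "take the bet" is defensible

# Uncertainty: the probability itself is not accurately knowable (illustrative guesses).
# The sign of the answer flips across plausible guesses, so the EV machinery
# produces false precision rather than guidance.
for p in (0.2, 0.4, 0.6, 0.8):
    uncertain_bet = [(p, +100), (1 - p, -80)]
    print(p, expected_value(uncertain_bet))
# 0.2 -> -44.0, 0.4 -> -8.0, 0.6 -> 28.0, 0.8 -> 64.0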
