John Grant
@jlg
"… LLMs are pretty freaking cool. However, they are, by the very nature of their architecture, unreliable narrators, that's what I say politely. If I'm going to be impolite, I will say they allow us to build at global scale bullshit generators, because they are clearly stochastic parrots. They do not reason they do not understand but they do produce some very coherent results because they allow us to navigate a latent space that has been made very complex through training it through the corpus of the internet."
- Grady Booch
This has always seemed rather obvious to me, but I don't find it easy to articulate. An axe can be both a survival tool and a lethal weapon, so it can be called dual-use. But I find this analogy less effective when applied to generative AI. In human systems, what framework or concept best explains the nature of things that can produce both beneficial and harmful effects? Are they examples of emergent properties, or is there a better way to frame it?