July
@july
A common objection is that AI hallucinates - that it generates false information, misinterprets context, or confidently asserts fabrications. But what if this isn't really about AI at all - what if it's a reflection of who we are? If anything, what if hallucinations are our beautiful flaws, the ones that make us deeply human? Also, as Daniel Kahneman describes with System 1 thinking, cognitive biases show that human thought itself is riddled with errors, misremembering, and distortions -- so if we train on that data, how would we not get the same type of behavior? What if, instead of hallucinations, we called them happy accidents? (like Bob Ross)
8 replies
3 recasts
49 reactions
Zach
@zd
Yes! One of the biggest reframes I've had in the past year is to think of AI hallucination as a feature, not a bug. What could you build if you realized that all great ideas start as hallucinations until everyone starts to agree with them? When they do, what was once called a “hallucination” starts getting called “truth.”
1 reply
0 recasts
9 reactions
July
@july
I think about how DNA & RNA mutations in the genetic sequence are garbage most of the time (or even harmful at the cellular or structural level) -- yet mutation is a needed side effect for a species to evolve. Perhaps, at a less grand level, AI's "hallucinations" introduce novelty into the cognitive landscape. If AI were too rigid, too perfect, it wouldn’t be able to search for new solutions, just as a species without genetic mutations would eventually stagnate. Also makes me think of how RL mirrors biological evolution: trial and error, exploration guided by reward functions.
2 replies
0 recasts
3 reactions
Lee
@neverlee
I still think my voice in voice messages is not mine. I don't like how I sound; therefore, it's wrong.
0 replies
0 recasts
1 reaction
Zach
@zd
This is an interesting framing that I haven't thought much about. There's also something here that reminds me of Harari's explanation of progress being made through stories, and Deutsch’s explanation of progress being made through increasingly good explanations. It feels like LLMs can produce an infinite number of explanations, and now it's up to us to figure out which ones are “good” in the Deutsch sense of the word.
0 replies
0 recasts
1 reaction