July
@july
A common objection is that AI hallucinates - that it generates false information, misinterprets context, or confidently asserts fabrications. But what if this isn't really about AI at all? What if it's a reflection of who we are? If anything, what if these are our beautiful flaws, the ones that make us deeply human? As Daniel Kahneman discusses with System 1 thinking, cognitive biases show that human thought itself is riddled with errors, misremembering, and distortions -- so if we train on that data, how would we not get the same type of behavior? What if instead of hallucinations, we call them happy accidents? (like Bob Ross)
8 replies
3 recasts
49 reactions

Zach
@zd
Yes! One of the biggest reframes I've had in the past year is to think of AI hallucination as a feature, not a bug. What could you build if you realize that all great ideas start as a hallucination until everyone starts to agree with them? When they do, what was once called a "hallucination" starts getting called "truth."
1 reply
0 recast
9 reactions

Agost Biro
@agostbiro
The way I think about LLMs is: great, we have access to geniuses, but only when they’re asleep (dreaming). Don’t get me wrong, this is awesome, but can we do better? Most def
0 reply
0 recast
1 reaction

Lee
@neverlee
Making “mistakes” is what makes us individuals. We expect the AI to be flawless, to be a fix-me-up. One day it will be, and we won't be able to understand most of what it does
0 reply
0 recast
1 reaction

Callum Wanderloots ✨
@wanderloots.eth
I have had this thought! I think it’s likely a reflection of humanity on the internet, which is full of conflicting and confused information. I also agree that it gives us a glimpse of that confused state of humanity, which makes it potentially a beneficial thing to be aware of. That said, hallucinating about, e.g., medical facts or objective science can be dangerous, so it is no longer a positive element at that point
0 reply
0 recast
1 reaction

Andy W
@aweissman
Exactly - what if hallucinations are a feature, not a bug?

Hallucinations are random acts of creation, and this randomness . . . is fun

Hallucinations are the manifestation of the volatility of humans in AI
0 reply
0 recast
0 reaction

Funut
@funut.eth
Hallucinations are a function, not a bug
0 reply
0 recast
0 reaction

Mo
@meb
If the AI makes fewer mistakes than a human, that’s good enough for me.
0 reply
0 recast
0 reaction

𝑶𝒕𝒕𝒊🎩🌊
@toyboy.eth
Human thought is far from objective or flawless. We're wired to make mistakes, misremember things, and distort reality to fit our own narratives. So when we train AI on our data, it's no surprise that it picks up these same patterns and biases.
0 reply
0 recast
1 reaction