July
@july
A common objection is that AI hallucinates: it generates false information, misinterprets context, or confidently asserts fabrications. But what if this isn't really about AI at all? What if it's a reflection of who we are? If anything, what if these are our beautiful flaws, the ones that make us deeply human? As Daniel Kahneman describes with System 1 thinking, cognitive biases show that human thought itself is riddled with errors, misremembering, and distortion. So if we train on that data, how would we not get the same kind of behavior? What if, instead of hallucinations, we called them happy accidents? (like Bob Ross)
8 replies
3 recasts
49 reactions

𝑶𝒕𝒕𝒊🎩🌊
@toyboy.eth
Human thought is far from objective or flawless. We're wired to make mistakes, misremember things, and distort reality to fit our own narratives. So when we train AI on our data, it's no surprise that it picks up these same patterns and biases.
0 replies
0 recasts
1 reaction