julien
@julien51.eth
LLM hallucinations are a feature, not a bug. They show that there is a discrepancy between human interpretation and AI interpretation (due to biases in the data, faulty learning algorithms, etc.). For now, the LLM is more often "wrong", but eventually we will see hallucinations where _we_ are wrong.

alixkun🟣🎩🍡
@alixkun
I would argue that if this happens, we'll need a new term for it, not "hallucination", which means seeing something that doesn't exist. If it actually does exist, then it's something different :)

julien
@julien51.eth
For sure!