julien
@julien51.eth
LLM hallucinations are a feature, not a bug. They show that there is a discrepancy between human interpretation and AI interpretation (due to biases in the data, faulty learning algorithms, etc.). For now, the LLM is more often "wrong", but eventually we will see hallucinations where _we_ are the ones who are wrong.

alixkun🟣🎩🍡
@alixkun
I would argue that if this happens, we'll need a new term for it, not "hallucination", which means "seeing something that doesn't exist". If it actually does exist, then it's something different :)

Michael of St. Joseph 🔏
@michaelofstjoe
I thought LLMs hallucinate precisely because they *can't* interpret data. They have no way to admit "I don't know" and therefore must answer even if the answer is an invention.