Varun Srinivasan pfp
Varun Srinivasan
@v
Is there a credible solution to the LLM hallucination problem? Any interesting research papers or discussions on this?
9 replies
3 recasts
57 reactions

ȷď𝐛𝐛 pfp
ȷď𝐛𝐛
@jenna
Curious whether/how, for you, “hallucination” is distinct from a broad “wrong answer” category (delivered confidently in both cases, ofc lol)
1 reply
0 recast
0 reaction

Varun Srinivasan pfp
Varun Srinivasan
@v
hallucination is when it makes up something that it has no source for
1 reply
0 recast
2 reactions

tokenfox pfp
tokenfox
@tokenfox.eth
Well, most of the stuff LLMs create has no source?
1 reply
0 recast
0 reaction

ȷď𝐛𝐛 pfp
ȷď𝐛𝐛
@jenna
Yeah this — when I’ve asked about “sources” before, the answer was that there really aren’t any. Tools like Perplexity cite sources, which I like… but those aren’t really where it “got” the answer from
1 reply
0 recast
0 reaction

not parzival pfp
not parzival
@shoni.eth
What happened: 1) it answered wrong with a high degree of confidence, 2) the answer was not in the training data, i.e. it’s a random compilation of the most likely next words. Solution: 1) lower the temperature (higher = more imaginative), 2) force it to use the provided context only (see the sketch below)
0 reply
0 recast
1 reaction
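
A minimal sketch of the two mitigations @shoni.eth lists above, assuming the OpenAI Python SDK (openai>=1.0); the model name, prompt wording, and retrieved context are illustrative, not a definitive recipe.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical retrieved context and question, purely for illustration.
context = "Farcaster is a sufficiently decentralized social network built on Ethereum."
question = "What is Farcaster built on?"

resp = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model name
    temperature=0,         # 1) low temperature: less "imaginative" sampling
    messages=[
        {
            "role": "system",
            # 2) constrain the model to the provided context only
            "content": (
                "Answer using only the provided context. "
                "If the answer is not in the context, say you don't know."
            ),
        },
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(resp.choices[0].message.content)

Neither setting eliminates hallucination on its own: temperature 0 only makes the sampling (near) deterministic, and the context-only instruction is a soft constraint the model can still ignore.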