AusaR
@ausar
Stopping an LLM from hallucinating is harder than I thought. I suppose "if (going_to_hallucinate) then return false;" doesn't work, right?
2 replies
0 recast
4 reactions
Leeward Bound
@leewardbound
unironically, telling it "if unsure, don't hallucinate, say you don't know" actually makes a meaningful impact with many models
0 reply
0 recast
1 reaction
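
The instruction-based mitigation described above is easy to try out. Below is a minimal sketch, assuming the OpenAI Python client; the model name and the ask helper are illustrative, not something from the thread. The idea is simply to put an "if unsure, say you don't know" line in the system prompt and compare answers with and without it.

```python
# Minimal sketch: add an "admit uncertainty" instruction to the system prompt.
# Assumes the OpenAI Python client; model name and helper are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Answer the user's question. If you are unsure or the answer is not "
    "supported by what you know, say \"I don't know\" instead of guessing."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # lower temperature tends to reduce speculative answers
    )
    return response.choices[0].message.content

print(ask("Who won the 2031 World Cup?"))  # should elicit "I don't know"
```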