AusaR
@ausar
Stopping an LLM from hallucinating is harder than I thought. I suppose "if (going_to_hallucinate) then return false;" does not work, right?
2 replies
0 recast
4 reactions

Nico
@nicom
LLMs are always hallucinating. That's how they work. They just tend to hallucinate more accurately sometimes.
1 reply
0 recast
1 reaction
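
Nico's point is essentially that generation is just sampling from a next-token probability distribution, with no built-in truth check anywhere in the loop. A minimal sketch of that loop using Hugging Face transformers (the model choice and generation length are illustrative assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any causal LM behaves the same way here.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
ids = tok(prompt, return_tensors="pt").input_ids

for _ in range(10):
    logits = model(ids).logits[:, -1, :]                # scores for the next token only
    probs = torch.softmax(logits, dim=-1)               # a probability distribution, nothing more
    next_id = torch.multinomial(probs, num_samples=1)   # sample; no fact-checking happens here
    ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0]))
```

Whether the output is "accurate" depends entirely on how much probability mass the model happens to place on correct continuations.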

Leeward Bound
@leewardbound
unironically, telling it "if unsure, don't hallucinate, say you don't know" actually makes a meaningful impact with many models
0 reply
0 recast
1 reaction
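
For what it's worth, that kind of instruction is just a system-prompt tweak. A rough sketch with the OpenAI Python client; the model name, wording, and question are assumptions for illustration, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical example: the "say you don't know" instruction lives in the system prompt.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Answer only from what you actually know. "
                    "If you are unsure, say 'I don't know' instead of guessing."},
        {"role": "user", "content": "What year was the Treaty of Narnia signed?"},
    ],
    temperature=0,  # lower temperature tends to reduce speculative completions
)
print(response.choices[0].message.content)
```

It doesn't make the model stop sampling, it just shifts probability toward the refusal when the model's own signal is weak, which is why the effect varies so much between models.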