🎩 MxVoid 🎩
@mxvoid
Now that @emostaque and a bunch more AI researchers/devs are here, re-asking a question I had before... Hallucination is one of the largest hurdles for using LLMs in scientific research. Are there any current tools or methods for fine-tuning an LLM that minimize the chance of hallucinating the literature on a topic?
1 reply
0 recast
2 reactions

🎩 MxVoid 🎩
@mxvoid
e.g., I've been thinking of fine-tuning a domain-specific, open-source LLM as a climate research assistant. But the relatively high risk of hallucinating citations for papers that don't exist would compromise the usefulness and trustworthiness of such a model. Not ideal. An LLM admitting ignorance is preferable.
1 reply
0 recast
0 reaction

Emad
@emostaque
Like just ask it to admit ignorance. A model like phi-2 plus RAG handles most things. The fact these models know anything at all is crazy given their size.
0 reply
0 recast
1 reaction
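
To illustrate the "phi-2 plus RAG" approach Emad mentions, here is a minimal sketch of retrieval-augmented generation with an explicit instruction to admit ignorance. The toy paper corpus, the prompt wording, the all-MiniLM-L6-v2 embedder, and the top-k setting are illustrative assumptions, not anything specified in the thread.

```python
# Minimal RAG sketch: ground a small open model (phi-2) in a document set so
# answers cite retrieved sources instead of inventing them.
# Corpus, prompt wording, and retrieval parameters are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import AutoModelForCausalLM, AutoTokenizer

# Toy "literature" corpus -- in practice, abstracts of real climate papers.
papers = [
    "Smith et al. (2021): Observed acceleration of Greenland ice-sheet melt.",
    "Lee & Ortiz (2019): CMIP6 projections of regional precipitation extremes.",
    "Nakamura (2022): Attribution of 2021 heatwaves to anthropogenic forcing.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
paper_vecs = embedder.encode(papers, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k papers most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = paper_vecs @ q
    return [papers[i] for i in np.argsort(-scores)[:k]]

def answer(question: str) -> str:
    sources = "\n".join(retrieve(question))
    # Instruct the model to stay within the retrieved sources and to admit
    # ignorance rather than fabricate a citation.
    prompt = (
        "Answer using ONLY the sources below. Cite them by author and year. "
        "If the sources do not cover the question, reply 'I don't know.'\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )
    tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
    model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

print(answer("What do recent studies say about Greenland ice-sheet melt?"))
```

The key point is that retrieval constrains the citation space to documents that actually exist, and the prompt gives the model an explicit "I don't know" path, which is exactly the behavior the original question asks for. Fine-tuning can reinforce this, but most of the grounding here comes from the retrieval step rather than the model weights.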