Varun Srinivasan
@v
Is there a credible solution to the LLM hallucination problem? Any interesting research papers or discussions on this?
9 replies
2 recasts
47 reactions

Shashank
@0xshash
Amazon recommends using RAG over knowledge sources to compute a hallucination score: https://aws.amazon.com/blogs/machine-learning/reducing-hallucinations-in-large-language-models-with-custom-intervention-using-amazon-bedrock-agents/ (research paper: https://www.amazon.science/publications/hallucination-detection-in-llm-enriched-product-listings)
0 reply
0 recast
0 reaction
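
A minimal sketch of the RAG-grounding idea from the cast above: score each sentence of an LLM answer against the retrieved source passages, and treat poorly supported sentences as likely hallucinations. The embedding model and the similarity threshold here are illustrative assumptions, not the method the linked Amazon posts actually use (which relies on Bedrock Agents and custom intervention logic).

# Sketch: sentence-level grounding check of an LLM answer against
# retrieved knowledge-source passages. Assumes sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def hallucination_score(answer_sentences, retrieved_passages):
    """Fraction of answer sentences not supported by any retrieved passage."""
    ans_emb = model.encode(answer_sentences, convert_to_tensor=True)
    ctx_emb = model.encode(retrieved_passages, convert_to_tensor=True)
    # For each answer sentence, keep its best cosine match over all passages.
    best = util.cos_sim(ans_emb, ctx_emb).max(dim=1).values
    SUPPORT_THRESHOLD = 0.6  # assumption: would need tuning on labeled data
    unsupported = (best < SUPPORT_THRESHOLD).sum().item()
    return unsupported / len(answer_sentences)

answer = [
    "The Eiffel Tower is in Paris.",
    "It was completed in 1889.",
    "It is painted bright green.",  # not supported by the sources below
]
sources = [
    "The Eiffel Tower, located in Paris, opened in 1889.",
    "The tower is repainted every seven years in a bronze-like brown.",
]
print(f"hallucination score: {hallucination_score(answer, sources):.2f}")

A score of 0.0 means every sentence found support in the retrieved passages; higher values flag answers for the kind of custom intervention the Bedrock Agents post describes. Production systems typically replace the raw cosine threshold with an NLI entailment model, which handles paraphrase and contradiction better than embedding similarity.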