Kaushik Andra
@kashout
What if LLM hallucinations aren’t a bug, but a feature? Are the answers to open scientific problems simply stored as deep hallucinations inside LLMs, and do we just need to design clever prompts to retrieve them?