https://warpcast.com/~/channel/aichannel
christopher
@christopher
Crazy how bad ChatGPT's hallucinations still are. It's one of the reasons we switched to Claude back in the summer. Now medical AI research is confirming the existential risk of using AI that isn't grounded in or factually aligned with science. https://arxiv.org/abs/2503.05777
5 replies
0 recast
16 reactions
Royal
@royalaid.eth
The hallucinations will continue until mechanistic understanding improves
1 reply
0 recast
3 reactions
shoni.eth
@alexpaden
my bad if unwanted: while both "mechanistic understanding" and "mechanistic interpretability" aim to explain the inner workings of complex systems, mechanistic interpretability specifically means reverse-engineering AI models (like neural networks) to uncover the mechanisms and causal relationships that drive their behavior. "Mechanistic understanding" is a broader concept covering any mechanism, not just AI models.
1 reply
0 recast
0 reaction