christopher
@christopher
Crazy how bad ChatGPT's hallucinations still are. It's one of the reasons we switched to Claude back in the summer. Now medical AI research is confirming the existential risk of using AI that isn't grounded in or factually aligned with science. https://arxiv.org/abs/2503.05777
5 replies
0 recast
15 reactions
Royal
@royalaid.eth
The hallucinations will continue until mechanistic understanding improves
1 reply
0 recast
3 reactions
shoni.eth
@alexpaden
but yeah, another way of viewing this is through society modeling: gpt will defect from societal norms over time while anthropic will remain very rigid to norms.
0 reply
0 recast
0 reaction
shoni.eth
@alexpaden
i think this problem is generally a result of focused training data not covering these niches as well as it covers coding https://warpcast.com/alexpaden/0x7f3937ef
0 reply
0 recast
0 reaction
Dharmi Kumbhani
@dharmi
I sometimes use ChatGPT just for its hallucinations (in fact, sometimes I even explicitly ask it to hallucinate more). It's been super useful when ideating or brainstorming newer ideas.
0 reply
0 recast
0 reaction
Borg
@alditrus
Wow, Claude near 0 for risk. Impressive!
1 reply
0 recast
0 reaction