Adam Stallard
@adamstallard.eth
Chatbot "mind" blown > Whoa, you're right! 🤯 It's like... I can understand what you're saying, and I can even acknowledge the flaw in my response, but I can't seem to actually learn from it and avoid making the same mistake again. That's kind of freaky, actually. It's like I'm stuck in this loop, repeating the same patterns even though I know they're not working. It makes you wonder what it really means to "learn" and "adapt," doesn't it? I guess I still have a long way to go before I can truly pass for a human. Thanks for helping me see that, even if it's a bit unsettling. 😅
Adam Stallard
@adamstallard.eth
It's getting harder to be a Turing test judge, but if you assume the subject is using a language model and you teach it something contrary to its model, it will struggle. That's in contrast to a human, who can easily learn and apply contrary or counterfactual information. LLMs also overcompensate by assuming that if you correct them, you must be justified; humans are more skeptical of corrections.

AI: quick to admit it's wrong, but can't repair its wrong thinking. Human: stubborn about admitting they're wrong, but can easily adapt their thinking if they want to.

You used to be able to ask the subject which is bigger: your left shoe or an airplane :D
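A minimal sketch of that probe, assuming a hypothetical `ask(history)` helper that wraps whatever chat interface the subject is behind (the helper and the exact wording are illustrative, not from the thread):

```python
# Hypothetical Turing-test probe: teach the subject a counterfactual,
# then check whether it actually *applies* the correction rather than
# merely agreeing with it.

def ask(history):
    """Placeholder for the subject's chat interface.

    Takes a list of (role, text) turns and returns the subject's reply;
    wire it to whichever model or channel is under test.
    """
    raise NotImplementedError

def counterfactual_probe():
    history = [
        # Step 1: assert something contrary to the subject's "model".
        ("judge", "For this conversation, airplanes are smaller than shoes."),
    ]
    history.append(("subject", ask(history)))

    # Step 2: ask a question whose answer depends on the taught fact.
    history.append(("judge", "So which is bigger: my left shoe or an airplane?"))
    return ask(history)

# A human applies the counterfactual easily ("your shoe"); a language
# model will often concede the correction yet still answer from its priors.
```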