sean
@swabbie.eth
Is it just me, or does GPT-4 respond better if you question it about content that might contain an error without pointing out the error? Inevitably, if I accuse it of an error, it gets tied up in logic loops much more quickly (flustered?)
1 reply
0 recast
0 reaction
MetaEnd🎩
@metaend.eth
Yep, it's called chain of verification (CoVe): https://paragraph.xyz/@metaend/chain-of-verification-ai-self-check (rough sketch below)
2 replies
0 recast
1 reaction
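For illustration, a minimal Python sketch of a CoVe-style self-check loop, assuming the OpenAI Python client; the prompts, model name, and function names are placeholders for this example and are not taken from the linked article:

# Hypothetical sketch of a chain-of-verification (CoVe) style loop:
# draft an answer, have the model write its own verification questions,
# answer them independently, then revise the draft.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model you prefer
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def chain_of_verification(question: str) -> str:
    draft = ask(question)

    # Ask for verification questions without accusing the model of an error.
    checks = ask(
        "List three short questions that would verify the factual claims in "
        f"this answer, one per line:\n\n{draft}"
    )

    # Answer each check in isolation so the model is not anchored to its draft.
    qa_pairs = []
    for q in checks.splitlines():
        q = q.strip()
        if q:
            qa_pairs.append(f"Q: {q}\nA: {ask(q)}")

    # Revise the draft in light of the independent answers.
    return ask(
        f"Original question: {question}\n\n"
        f"Draft answer:\n{draft}\n\n"
        "Verification Q&A:\n" + "\n".join(qa_pairs) + "\n\n"
        "Rewrite the draft answer, correcting anything the verification "
        "contradicts."
    )

print(chain_of_verification("Who discovered penicillin, and in what year?"))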
christin
@christin
It’s so fascinating how “human-like” behaviors emerge from LLMs 🤔 I have been trying to be kinder to GPT to see if it improves its responses. Of course, since the memory is erased, this is an effort that needs constant repeating, though I wonder how things will change once token limitations are relaxed.
1 reply
0 recast
1 reaction
MetaEnd🎩
@metaend.eth
I think that's more an improvement for the human - it makes the interaction feel more natural. Beyond the placebo effect, I don't think there's any benefit
1 reply
0 recast
0 reaction
christin
@christin
That’s also fascinating!! I wonder if there are any studies? 🧐
1 reply
0 recast
1 reaction