Tarun Chitra
@pinged
Wow, thanks for the resounding welcome back! As promised, I have a little to tell you about something that had me sort of offline-ish for the last couple of months: I had my first bout of AI Doomer-itis. Luckily, it was cured by trying to write this paper with AI as my assistant and understanding its promises and flaws.

cqb
@cqb
How would chain-of-thought reasoning not being faithful to the model's actual reasoning process affect the framework, if at all? (Apologies if this is an ignorant question)

Tarun Chitra
@pinged
Not ignorant at all! There's sort of a sense in which CoT lets models "backtrack": by reprompting, you allow the model to see errors in its original answer and go backwards before making a new attempt. This gets you out of local minima, but it's not clear _when_ the model can figure out how to backtrack (which, in some ways, is the mystique of reasoning models)
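
For concreteness, here is a minimal sketch of what reprompting-as-backtracking might look like. This is illustrative only, not the method from the paper; `ask_model` is a hypothetical stand-in for whatever LLM API you actually call, and the critique/retry loop is just one simple way to let the model revisit a flawed answer.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical LLM call -- replace with your actual API client."""
    raise NotImplementedError


def solve_with_backtracking(question: str, max_rounds: int = 3) -> str:
    """Reprompt the model with its own answer so it can spot errors
    and 'backtrack' to a corrected attempt."""
    answer = ask_model(f"Think step by step, then answer:\n{question}")
    for _ in range(max_rounds):
        # Show the model its previous answer and ask for errors.
        critique = ask_model(
            f"Question:\n{question}\n\nProposed answer:\n{answer}\n\n"
            "List any errors in the reasoning. Reply NONE if it is correct."
        )
        if critique.strip().upper() == "NONE":
            break  # no errors found; stop backtracking
        # 'Backtrack': discard the flawed answer and retry, conditioned
        # on the critique instead of the original chain of thought.
        answer = ask_model(
            f"Question:\n{question}\n\nA previous attempt had these errors:\n"
            f"{critique}\n\nProduce a corrected answer."
        )
    return answer
```

The key point the sketch captures: the "going backwards" lives in the outer loop, not in the model itself, which is exactly why it's unclear when the model can decide on its own to backtrack.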