manansh ❄️
@manansh
I won't trust LLMs to do my qualitative analysis until I know for certain they can actually "reason" better than I can. Right now, LLMs are good for formatting/brainstorming screener questions, checking for bias, reframing and rewording interview questions... basically anything in the planning stage. But for analysis and synthesis, what good is a fuzzy processor that's just predicting the next token? It can't pull out what's novel from my dataset. It's trained on what's known; it can't navigate the unknown. Once it can reason, I'd happily hand it the full task.
1 reply
0 recast
2 reactions
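
A minimal sketch of the planning-stage use @manansh describes (having an LLM check screener questions for bias and suggest rewordings). The openai Python client, the model name, and the example questions are illustrative assumptions, not anything specified in the thread:

```python
# Hedged sketch: ask an LLM to flag leading/biased wording in screener questions.
# Assumes the openai Python client (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

screener_questions = [
    "How much do you love using our new dashboard?",   # deliberately leading example
    "Describe a recent time you exported a report.",
]

prompt = (
    "Review these screener questions for leading language, double-barreled "
    "phrasing, or other bias. For each one, either say it reads neutral or "
    "suggest a reworded version:\n\n"
    + "\n".join(f"{i + 1}. {q}" for i, q in enumerate(screener_questions))
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```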

Henry Harboe
@hh
I dig this. But I've also been wondering: can we give LLMs analysis tasks so small that they actually do a good job? Find all the patterns for me, then spit the results back so I can do the reasoning. Ex: take interview transcripts, create persona dimensions, let me keep a few, and move to the next micro task.
1 reply
0 recast
1 reaction
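
A hedged sketch of the micro-task loop @hh describes: the LLM only proposes candidate persona dimensions per transcript, and the human keeps a few before anything moves to the next step. The openai client, model name, file layout, and keep/discard prompt are all illustrative assumptions:

```python
# Micro-task sketch: LLM proposes persona dimensions; human decides what to keep.
# Assumes openai>=1.0, OPENAI_API_KEY set, and plain-text transcripts in ./transcripts.
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def propose_dimensions(transcript: str) -> list[str]:
    """Ask the model for candidate persona dimensions only; no synthesis, no ranking."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                "From this interview transcript, list candidate persona dimensions "
                "(e.g. 'tool proficiency', 'buying authority'), one per line, "
                "with no commentary:\n\n" + transcript
            ),
        }],
    )
    text = response.choices[0].message.content or ""
    return [line.strip("- ").strip() for line in text.splitlines() if line.strip()]

kept: set[str] = set()
for path in sorted(Path("transcripts").glob("*.txt")):
    candidates = propose_dimensions(path.read_text())
    print(f"\n{path.name}:")
    for dim in candidates:
        # Human stays in the loop: keep a few, discard the rest, then move on.
        if input(f"  keep '{dim}'? [y/N] ").lower().startswith("y"):
            kept.add(dim)

print("\nDimensions to carry into the next micro task:")
print(json.dumps(sorted(kept), indent=2))
```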

Henry Harboe
@hh
Need to find the time to actually test these things before I keep yapping away 🫡
0 reply
0 recast
1 reaction