Cassie Heart
@cassie
"Krapivin was not held back by the conventional wisdom for the simple reason that he was unaware of it. 'I did this without knowing about Yao's conjecture.'" https://www.quantamagazine.org/undergraduate-upends-a-40-year-old-data-science-conjecture-20250210/ Many such cases of progress being stifled by the chained elephant.
3 replies
13 recasts
78 reactions

Brenner
@brenner.eth
I wonder if you could get an LLM to figure this out if you prodded it enough, but without giving it the answer?
1 reply
2 recasts
6 reactions

Cassie Heart
@cassie
LLMs are quite bad at coming up with anything novel; the prompting _is_ the insight
1 reply
0 recast
7 reactions

Brenner
@brenner.eth
Wdym by "the prompting is the insight"? What if we could get it to come up with this by saying stuff like: "if you ignore the existing conjectures about X, could you do this thing faster, better, or cheaper? What if you thought about XYZ?" Basically, this is an example of a novel approach that's not in the LLM's data set yet, but where there is a logical path to get to the answer. And yes, LLMs have historically been bad at this stuff, but my point is I'm trying to figure out if there's a way we can get one to happen upon stuff that we don't know yet
1 reply
0 recast
0 reaction

Cassie Heart
@cassie
The prompt itself is giving it the insight to do those things. If you were to say "prove P = NP", it would likely tell you "there is no known way to do that, and it likely isn't true". If you instead say "3SAT can be solved in polynomial time using X, use this to prove P = NP", you've given it the insight it needs to link the concepts together and devise a proof.
1 reply
0 recast
3 reactions