necopinus
@necopinus.eth
It just occurred to me that @vgr's formulation of AI as "a camera, not an engine" finally helps fill in the "unknown knowns" quadrant of Donald Rumsfeld's (in)famous 2x2 in a satisfying way. https://studio.ribbonfarm.com/p/a-camera-not-an-engine

"... [A]s we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know." https://en.wikipedia.org/wiki/There_are_unknown_unknowns

That was from a 2002 press conference in the run-up to the Iraq War. Rumsfeld is setting up a rather unusual 2x2 here, where *both* axes are "things we know". But 2x2s have four quadrants, and Rumsfeld only describes three of them. What are "unknown knowns"?

Slavoj Žižek later tried to fill in the "unknown knowns" quadrant by postulating that it was knowledge that is widely understood but only at a sort of "subconscious" level, as truly surfacing that knowledge would be too destructive to the ego / status quo. https://www.lacan.com/zizekrumsfeld.htm

I've always found Žižek's formulation unsatisfying. What Žižek is talking about is a sort of willful ignorance, while Rumsfeld's other cases revolve around *actually not knowing.*

So where does @vgr's work fit into this? Well, in "A Camera, Not an Engine", Rao reformulates AI not as a tool for expanding the frontier of knowledge, but rather as a way to fill in voids that exist in the space of things we already know. https://studio.ribbonfarm.com/p/a-camera-not-an-engine

What AI is doing is "discovering" thoughts / ideas / knowledge that is implied by existing thoughts / ideas / knowledge. "Filling in the blanks" in our existing knowledge space. It's *not* actually expanding the *frontier* of knowledge.

(How could an existing AI expand those frontiers? Even with access to the entire Internet, it's still "a ship in a bottle", capable only of sailing the seas of thoughts humans have *already* recorded in some way.)

(Also, as an aside, this means that a "discovery" made by an AI is more like a "discovery" in mathematics than a discovery in the physical sciences. Self-contained logical systems are not like the human world, which is a porous, unclear, and — for any one person — finite realm.)

Anyways, I think this says something interesting about science in the time of AI. There's been a lot of excitement about using AI to discover new materials or as a tool for hypothesis generation. But really what's happening is that AI is finding new angles on known data sets.

Now, finding new angles on old data is important! One of the things that we're rapidly learning is that there's *a lot* of latent space in human knowledge. And filling in that space is going to bring *a lot* of benefits. But it's also, in a very real way, all low-hanging fruit.

I suspect that what we're going to find is that AI is going to ultimately make weird science moonshots — things that can't really involve AI, because they're looking for truly *new* things — more important.

Because weird moonshots are how we expand the frontier. And AI is going to get *very* good at filling in the space *behind* that frontier.

This is going to make science *much* more exciting (amazing, horrifying, wonderful new things!), but also *a lot* less efficient (most moonshots fail, and most weird ideas come from the minds of crackpots).

Over the course of my lifetime (and stretching back before it), we've come to focus our research more and more on *application*. But "applicable" research is by definition research behind the frontier. This is what AI's going to first accelerate, and then cannibalize.

We're going to need to start investing more in "fundamental science" and "basic R&D" in the AI era, and we're going to have to do so in a way that encourages more basic *physical* exploration and "far out" ideas.

This is going to be hard, because in the short term AI's going to give us *a lot* of returns by filling in the gaps in the explored space of human knowledge — turning our "unknown knowns" into "known knowns". But the *explored* realm of human ideas and knowledge is finite.

The process of turning "known unknowns" into "known knowns" is going to be important. But even more important will be the process of turning "unknown unknowns" into "known unknowns". Because only those two processes actually *expand* the frontiers of human ideas and knowledge.