necopinus
@necopinus.eth
Currently reading the MS GPT-4 paper. At the moment I remain… unimpressed… by the model’s “human like” behaviors. That said, I’m starting to think that there may be something profound here. Just not what the authors think it is.
1 reply
0 recast
0 reaction

necopinus
@necopinus.eth
IMHO, the versatility of GPT-4 strongly suggests that ALL human knowledge domains may use the SAME underlying symbolic system and rules. If true, this means that all human domains involving symbolic manipulation may, in fact, be isomorphic to each other.
2 replies
0 recast
0 reaction

necopinus
@necopinus.eth
If true, this would be a profound epistemological leap forward: Humans have a single “map” that we have (more-or-less) successfully applied to EVERY territory that we’ve encountered.
2 replies
0 recast
0 reaction

wake 🎩
@wake
Every model works if you use enough epicycles 🫣
1 reply
0 recast
0 reaction

necopinus
@necopinus.eth
To an extent. But I think the implication I’m groping for here is “why should a single model work this well at all?” TBF, this is kind of the same observation as “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”, but with its generalization kicked up a couple of (significant) notches.
0 reply
0 recast
0 reaction