Tarun Chitra
@pinged
Part II: AI Doom 🐰 🕳️
~~~
You may need to see a clinician for AI Doomer-itis if you have any of the following symptoms:
- Hate querying (opposite of vibe coding): finding queries where the LLM is wrong to make yourself feel better
- Thoughts of future generations unable to do an integral without the internet
3 replies
7 recasts
55 reactions

Tarun Chitra
@pinged
When OpenAI O1 came out, my friend Adam (who works on that team) was freaking out about how the world is over because no one will do contest math/IOI stuff anymore.

I initially was skeptical — after all, ChatGPT could barely answer my "2^29" interview question without puking — but once it came out I was shocked: I could basically one-shot prompt the model into giving me full proofs for papers I was writing.

Suddenly, hours of work took 30s and a bit of a salty attitude (I always got better performance when I told the model that the proof better be good enough to get into JAMS/ICML/etc.)
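(Aside, in case the "2^29" question is unfamiliar: presumably the ask is just to compute the value in your head, and the usual mental route, as a quick sketch, goes through 2^10 = 1024:)

\[
2^{29} = \frac{2^{30}}{2} = \frac{(2^{10})^{3}}{2} = \frac{1024^{3}}{2} = \frac{1{,}073{,}741{,}824}{2} = 536{,}870{,}912.
\]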
1 reply
0 recast
2 reactions

Tarun Chitra
@pinged
This initial realization in October 2024, that language models could do things I thought they never could (i.e. prove complex properties from very little text), was a real eye-opener for me.

Yes, getting it to make TikZ or beamer diagrams for me and/or to make sure I get Python/Rust syntax right was *cool*, but it didn't feel like higher cognitive power. Yet being able to do full zero-shot learning [theorists have called "one-shotting" zero-shot learning for the last 20+ years], 7 years after people wrote abstracts like the following (tl;dr: "zero-shot learning is impossible"), blew my mind.

Suddenly, epistemic security seemed to be failing — the model could explain itself without requiring me to understand how it could explain itself. That is, I can figure out when it's wrong (like in crypto or trading) from how it tells me it reasoned (much like invalidating a ZKP if an aggregation step or the Fiat-Shamir check fails).
1 reply
0 recast
1 reaction

Tarun Chitra
@pinged
This shook me to my core. I spent years going from the frequentist philosophy of "anything that you can learn must have an asymptotic theory, otherwise you don't know if it exists," to begrudgingly becoming a Bayesian ("who cares about infinite samples?"), to accepting reasoning models — where you can accept the "proof of knowledge" generated by the object while being unable to know any possible algorithm for _why_ the proof is correct.

There is a sense in which you can check the proof (e.g. it generates a frequentist proof), but the model itself lacks even Bayesian properties (there is no "prior" for predicting the next token, only simulation of a context).

This felt paradoxical in a manner akin to Russell's Paradox ("the set of explanations generated by a reasoning model cannot itself be described by a reasoning model"), and it violated every mathematical totem, axiom, and belief I had held near and dear.
1 reply
0 recast
2 reactions

Tarun Chitra
@pinged
When OpenAI O3-mini came out, I suddenly found myself going from using a language model for (maybe) an hour a week when writing data science code or on-chain data queries to interrogating my prior papers to figure out if I had made mistakes in proofs or missed some obvious "lay-up" corollaries.

This is the type of stuff that takes months of labor, oftentimes with multiple research collaborators who you have to get in sync with. Here, I could "wake and query" — wake up with a shower thought ("why does the non-linear Feynman-Kac equation work?") and ask my "collaborator" O3, and get a 90-95% answer from a very low-entropy string.
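(For context on that shower thought, a rough sketch, stated from memory, of the classical linear Feynman-Kac statement that the non-linear versions generalize; the non-linear variants roughly swap the expectation below for a BSDE-style representation. If $X$ follows the diffusion $dX_s = \mu(X_s, s)\,ds + \sigma(X_s, s)\,dW_s$ and $u$ solves the terminal-value PDE, then $u$ is an expectation over paths of $X$:)

\[
\partial_t u + \mu\,\partial_x u + \tfrac{1}{2}\sigma^2\,\partial_{xx} u - V\,u = 0, \qquad u(x, T) = \psi(x),
\]
\[
u(x, t) = \mathbb{E}\!\left[\, e^{-\int_t^T V(X_\tau, \tau)\,d\tau}\,\psi(X_T) \,\middle|\, X_t = x \right].
\]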
1 reply
0 recast
1 reaction

Tarun Chitra
@pinged
While this was amazing (I could basically write papers on my own that previously needed a lot more collaboration), it led to a sense of dread — what, if anything, am I contributing? A few low-entropy strings and some imagination? Is that even defensible against these creatures?

And so here I was, in my own AI doomer-itis cave, where suddenly this "reasoning paradox" was really an existential question: why do people need researchers anymore? Will people even bother reading papers again? My new salvo became: "you either get cited by the model, or you die in the training data."
1 reply
0 recast
5 reactions

Tarun Chitra
@pinged
Another macabre vision of reality that hit me during this time (~ Feb 2025) was the idea that people wouldn't learn to think anymore. One of my favorite, three-time IMO gold-winning coworkers once told me he only did his PhD to "learn how to think, because you'll never be able to do that twice."

But suddenly, by outsourcing the thinking, I'm able to learn to think in far more ways than my limited knowledge base could have envisioned (at least, without spending a lot more time digging, researching, diving, and searching through the vast academic literature).

At the same time, I was only able to do that because I "learned how to think" once — will anyone in the future bother to do that? "It took us 10+ years, blood, sweat, and tears, Johnny. And you, you youngins just use Neuralink to query without even thinking — you artificially think."
2 replies
0 recast
2 reactions

Tarun Chitra
@pinged
The idea of artificial thinking, without you having to know how to think and come up with ideas on your own (and, more importantly, fail at executing an idea), felt like a dark reality for humans. Would organic molecules, with their limited memory, hysteresis (e.g. the one-way arrow of time), and energy consumption, ever really be able to compete with something that guzzles uranium the way I drink a sugar-free Red Bull in order to answer arbitrarily hard questions? And I could get it to do that *without* having to know how it works? Could I ever have (un)certainty about when it was wrong?
1 reply
0 recast
3 reactions

Tarun Chitra
@pinged
These kinds of (classical, I suppose) AI doomer threats were more about the fragility of the value of intellectual skills — it was possible to replace them without the consumer knowing how they work. That is, it violates and destroys the notion of epistemic safety completely!

So how do we get out of this rather bleak hole? One way, I convinced myself, was to try to figure out (and solve) a problem that would convince me that I knew _something_ about how such a system works. That way, the last pen stroke in the fight against the rule of the GPU could be a scarlet letter: yes, you can do all of this amazing stuff — but at least I know why you're able to do it.
1 reply
0 recast
4 reactions