Tarun Chitra
@pinged
Wow, thanks for the resounding welcome back! As promised, I have a little to tell you about something that had me sort of offline-ish for the last couple of months: I had my first bout of AI Doomer-itis. Luckily it was cured by trying to write this paper with AI as my assistant and understanding its promises and flaws.
7 replies
16 recasts
136 reactions

Tarun Chitra
@pinged
Philosophy
~~~~~~~~
From ~2015 to late 2024, I was generally an AI skeptic / 'anti-doomer', from the perspective that it would never really get that close to replacing most tasks; almost everyone from DESRES [someone asked for lore] ended up in HFT or AI, and it came down to a philosophical difference.
1 reply
0 recasts
13 reactions

Tarun Chitra
@pinged
One reason math/theoretical peeps gravitate more to HFT than AI is that you get to take "comfort" in the fact that most of the math you are using is well justified: you made some model that has a convergence guarantee and the data differed? OK, I know *why* the model didn't work.
1 reply
0 recasts
5 reactions

Tarun Chitra
@pinged
This idea that you need epistemological security from the thing you're working on is something that I'd say divides the theoretical and applied sciences: in applied sciences, you're often willing to accept something that you can't prove works or exists from first principles, in the hope that it will be explained later.
1 reply
1 recast
8 reactions

Tarun Chitra
@pinged
Modern AI (probably from GANs onwards) is a bit of an epistemological quandary: it delivers increasingly superhuman performance, yet even the most basic understanding of why the self-attention unit is so much more efficient for text than anything humanity has ever made (with lots of effort!) is non-existent.
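For reference, a minimal sketch of the self-attention unit in question (assuming the standard single-head scaled dot-product formulation; the names and shapes here are purely illustrative):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over tokens X of shape (n, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project each token to query, key, value
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise token similarity, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys: attention distribution per token
    return weights @ V                               # each output token is a weighted mix of all values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                          # 5 tokens, model dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                  # shape (5, 8): every token attends to every token
```

The whole unit is a handful of matrix multiplies plus a softmax, which is part of what makes its effectiveness so hard to explain from first principles.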
1 reply
0 recasts
6 reactions

Tarun Chitra
@pinged
From this POV, trading & crypto stand on better epistemological ground: you have an adversarial model (market, consensus protocol, DeFi, ZK), a metric to measure that it holds (price, time to finality, proofs), and, most importantly, a mathematical guarantee that if the metric deviates, something is wrong.
1 reply
0 recasts
4 reactions

Tarun Chitra
@pinged
If you've ever found an AI hallucination, or reached a point where you couldn't convince it out of something that was false, you have experienced AI *not* having that property. And this is why, when ChatGPT came out in 2022, I was generally not totally convinced: you still had the epistemic frailty of an undergraduate learning about Gödel's incompleteness theorems for the first time and suddenly thinking that all of math is useless [yes, that was me].
1 reply
0 recasts
4 reactions

Tarun Chitra
@pinged
However, I knew something was dramatically changing when my friends who had worked on superhuman performance at StarCraft and Diplomacy left FAIR to go to OpenAI in early 2023. This led me to start reading a lot more AI papers (plus crypto research post-FTX was... slow) and to try to formulate some epistemic thoughts.
2 replies
0 recasts
3 reactions

cqb
@cqb
Any recommendations for the best papers you read during this time?
0 replies
0 recast
1 reaction