July
@july
I find that p(doom) has nothing to do with how likely the world is to end -- (obviously no one can predict this with any remotely accurate take) I think it's a metric of how much existential angst an AI researcher is feeling on that particular day
7 replies
4 recasts
102 reactions
Vitalik Buterin
@vitalik.eth
What's a hypothesis we could test for, that would be true if your model of AI researchers is correct, and false if their p(doom) values actually are well-considered? eg. one that comes to mind is: in your model, an individual researcher's stated p(doom) should be very volatile week by week. Is that true in practice?
2 replies
0 recast
11 reactions
July
@july
A good point. I have spoken to some researchers about how their p(doom) varies on a good day vs. a bad day; some simply provide ranges for p(doom). A test that comes to mind: create a self-reporting p(doom) website where you report today's p(doom), and a separate one where you report your anxiety level, then compare them
1 reply
0 recast
4 reactions
Eddie Wharton
@eddie
I think it's also influenced by researcher temperament. That hypothesis predicts systematic variance in researchers' forecasts for unrelated negative things, which is testable
0 reply
0 recast
1 reaction