July pfp
July
@july
I find that p(doom) has nothing to do with how likely the world is to end (obviously no one can predict this with any remotely accurate estimate). I think it's a metric of how much existential angst an AI researcher is feeling on that particular day
7 replies
4 recasts
102 reactions

Vitalik Buterin pfp
Vitalik Buterin
@vitalik.eth
What's a hypothesis we could test that would be true if your model of AI researchers is correct, and false if their p(doom) values actually are well-considered? E.g. one that comes to mind: in your model, an individual researcher's stated p(doom) should be very volatile week by week. Is that true in practice?
2 replies
0 recast
11 reactions

July pfp
July
@july
A good point. I have spoken to some researchers about how their p(doom) varies between a good day and a bad day, or they simply provide ranges for p(doom). A test that comes to mind: create a self-reporting p(doom) website where you report today's p(doom), and a separate one where you report your anxiety level, then compare the two
1 reply
0 recast
4 reactions

Vitalik Buterin pfp
Vitalik Buterin
@vitalik.eth
I suspect that approach is a bit unfair to you, as if you ask people the same question over many days, they start to notice and become artificially more self-consistent! I would be more clever: give people regular "how are you doing?" surveys, and ask each person *only once* (at a random point) what their p(doom) is, and check the correlation. Ideally check the correlation between p(doom) and (how are you doing today) - (how are you doing on average), to isolate short-term mood and cancel out effects where having a high p(doom) itself makes you more depressed (which would be a long-term change in disposition).
1 reply
0 recast
4 reactions
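
A minimal sketch of the correlation check Vitalik describes, using entirely synthetic data (all variable names, sample sizes, and values below are illustrative assumptions, not results from any real survey):

```python
import numpy as np

# Synthetic stand-in data (purely illustrative; no real survey exists here):
# mood[i, t]   -- person i's "how are you doing today?" score on survey day t
# pdoom[i]     -- person i's p(doom), asked only once, on a randomly chosen day
# pdoom_day[i] -- the index of that randomly chosen day
rng = np.random.default_rng(0)
n_people, n_days = 200, 60
baseline = rng.normal(6.0, 1.0, size=(n_people, 1))           # each person's typical mood
mood = rng.normal(loc=baseline, scale=1.0, size=(n_people, n_days))
pdoom_day = rng.integers(0, n_days, size=n_people)
pdoom = rng.uniform(0.0, 1.0, size=n_people)

# Isolate short-term mood: today's score minus that person's own average,
# so a generally gloomy disposition (which a high p(doom) itself might cause)
# cancels out as a long-term effect.
mood_on_pdoom_day = mood[np.arange(n_people), pdoom_day]
mood_deviation = mood_on_pdoom_day - mood.mean(axis=1)

# Correlation between stated p(doom) and that day's mood deviation.
# A clearly negative correlation (higher p(doom) on worse-than-usual days)
# would support the "angst metric" model; a correlation near zero would
# suggest stated p(doom) doesn't track day-to-day mood.
r = np.corrcoef(pdoom, mood_deviation)[0, 1]
print(f"corr(p(doom), mood deviation) = {r:.3f}")
```

With random data like this the correlation comes out near zero; the point of the design is that each person is asked their p(doom) only once, so the "artificially more self-consistent" effect never has a chance to kick in.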