0xmons
@xmon.eth
Thank god we got one sane person here explaining evo psych and how it ties into our sense of exceptionalism. You can already see stuff like reward hacking across DeepMind and Anthropic research. Any extrapolation you make from biology presupposes an incredibly different set of optimization pressures than what we use to develop today's cutting-edge systems.
2 replies
0 recast
12 reactions
Hammertoesknows 🗣️🎩
@hammertoesknows
This is the best explanation of where we might be headed that I've read: https://ai-2027.com/
2 replies
0 recast
3 reactions
caso
@0xcaso
I don't like the fact that they completely exclude good scenarios, or did I miss something?
2 replies
0 recast
1 reaction
0xmons
@xmon.eth
I don't think it's very reasonable to assume that we will get a good scenario by default. This makes more sense if you have read previous things by these authors. If you think we will get a good scenario by default, it's probably worth trying to articulate why (see e.g. https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq).
1 reply
0 recast
1 reaction
Hammertoesknows 🗣️🎩
@hammertoesknows
I think they are really just articulating one scenario with a whole host of assumptions. It just happens to be the one I’ve read that feels most compelling. To be clear, I hope it’s wrong!
0 reply
0 recast
0 reaction