0xmons
@xmon.eth
Thank god we got one sane person here explaining evo psych and how it ties into our sense of exceptionalism. You can already see stuff like reward hacking across DeepMind and Anthropic research. Any extrapolation you make from biology presupposes an incredibly different set of optimization pressures than what we use to develop today's cutting-edge systems.
2 replies
0 recast
11 reactions
Hammertoesknows
@hammertoesknows
This is the best explanation of where we might be headed that I've read: https://ai-2027.com/
2 replies
0 recast
3 reactions
caso
@0xcaso
I don't like that they completely exclude good scenarios, or did I miss something?
2 replies
0 recast
1 reaction
0xmons
@xmon.eth
I don't think it's very reasonable to assume that we will get a good scenario by default. This makes more sense if you have read previous things by these authors. If you think we will get a good scenario by default, it's probably worth trying to articulate why (see e.g. https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq)
1 reply
0 recast
1 reaction
0xmons
@xmon.eth
See also the entire thread that prompted this: https://warpcast.com/xmon.eth/0xf37c7258
1 reply
0 recast
1 reaction
caso
@0xcaso
thanks man, I'm not an expert but I wanted to start digging a bit deeper, I'll start with the stuff you shared. still, yeah, I agree that it's not reasonable to include the good case by default… but is it ok to consider it at least? or is it so dangerous that it doesn't make sense to consider it at all?
1 reply
0 recast
0 reaction