Connor McCormick ☀️
@nor
After listening to the Sam Harris + pmarca AI risk chat, I was left a little mystified by some of the arguments pmarca made. Screenshot of what seemed like important yet weak arguments. I think I agree with this thread's main criticisms: https://twitter.com/liron/status/1679890530485014529?s=20
2 replies
0 recast
3 reactions
𒂭_𒂭
@m-j-r.eth
will give this a listen, but intuitively I do have to agree that humans being outcompeted in a microcosm does not generalize to a world run by AI singularly designed to outcompete humans in a qualifiable way.
1 reply
0 recast
0 reaction
𒂭_𒂭
@m-j-r.eth
admittedly, this conjecture should be debated and explored to a much greater degree; I'm not sold on any of the AI alignment discourse tbqh.
1 reply
0 recast
0 reaction
Connor McCormick ☀️
@nor
can you say more? what are you primarily reading / following re alignment?
1 reply
0 recast
0 reaction
𒂭_𒂭
@m-j-r.eth
I would say it's partly informed by the research coming out. any technique has to work its way through scarce inputs (e.g. API cost in FrugalGPT), plus the theory that symbolic regression / weaker models (the LATM paper) are more ideal after using the most powerful intelligence exclusively for discovery (rough sketch of that pattern after this post). alignment is an issue,...
1 reply
0 recast
0 reaction
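
A minimal sketch of the pattern referenced above: the strongest model is used once for "discovery" (making a reusable tool, as in LATM), while routine queries go through a FrugalGPT-style cheap-first cascade. The model names, costs, and call signatures below are placeholders for illustration, not real APIs from either paper.

```python
# Hypothetical sketch: "strong model for discovery, cheap models for volume."
# All names, costs, and solve() behaviors are stand-ins, not real services.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Model:
    name: str
    cost_per_call: float            # assumed relative cost, not real pricing
    solve: Callable[[str], str]     # placeholder for an actual API call


def discover_tool(strong: Model, task_description: str) -> Callable[[str], str]:
    """One expensive call to the strongest model to 'discover' a reusable tool.
    In the LATM setting this would be generated code that cheaper models
    (or no model at all) can then run repeatedly at low cost."""
    _ = strong.solve(task_description)       # pay the discovery cost once
    return lambda x: f"tool({x})"            # placeholder reusable tool


def cascade(models: List[Model], query: str,
            good_enough: Callable[[str], bool]) -> str:
    """FrugalGPT-style cascade: try cheap models first, escalate only on failure."""
    answer = ""
    for m in sorted(models, key=lambda m: m.cost_per_call):
        answer = m.solve(query)
        if good_enough(answer):
            return answer
    return answer  # fall back to the most expensive model's attempt


if __name__ == "__main__":
    cheap = Model("small-model", 0.01, solve=lambda q: f"cheap answer to {q}")
    strong = Model("frontier-model", 1.00, solve=lambda q: f"strong answer to {q}")

    tool = discover_tool(strong, "write a parser for format X")  # discovery phase
    print(tool("example input"))                                 # cheap repeated use
    print(cascade([cheap, strong], "routine query",
                  good_enough=lambda a: "answer" in a))
```

The point of the sketch is only the division of labor: scarce, expensive intelligence is spent on one-off discovery, and everything downstream is handled by cheaper components.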