Connor McCormick
@nor
After listening to the Sam Harris + pmarca AI risk chat, I was left a little mystified by some of the arguments pmarca made. Screenshot of what seemed like important yet weak arguments. I think I agree with this thread's main criticisms: https://twitter.com/liron/status/1679890530485014529?s=20
π_π
@m-j-r.eth
will give this a listen, but intuitively I do agree that humans being outcompeted in a microcosm does not generalize to a world run by an AI singularly designed to outcompete humans in a qualifiable way.
π_π
@m-j-r.eth
admittedly, this conjecture should be debated and explored to a much greater degree; I'm not sold on any of the AI alignment discourse, tbqh.
Connor McCormick
@nor
can you say more? what are you primarily reading / following re: alignment?
π_π
@m-j-r.eth
I would say it's partly informed by the research coming out: a technique has to work its way through scarce inputs (e.g. API cost in FrugalGPT), plus the theory that symbolic regression / weaker models (the LATM paper) are more ideal once the most powerful intelligence is used exclusively for discovery. alignment is an issue,...
π_π
@m-j-r.eth
but the issue is straightforward to address in a system that is frugal and public. if the trust assumption for reporting misalignment is 1/(# of psychopaths in the world + 1), and in the future we err on the side of sharing & editing prompts before agentizing them with the weakest possible models that do the job, then that's good.
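A minimal sketch of the frugal cascade idea referenced in the thread (FrugalGPT-style routing, where the weakest model that can do the job handles a query and the strongest is held in reserve). Everything here is a hypothetical placeholder for illustration: the model names, costs, and the `call_model` / `is_good_enough` helpers are not code from either paper.

```python
# Sketch of cheapest-first model routing: try the weakest model, escalate only
# when its answer fails a quality check, and reach the frontier model last.
# Model names, costs, and helper callables are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    name: str
    cost_per_call: float  # illustrative relative cost, not real pricing

def cascade(prompt: str,
            models: list[Model],
            call_model: Callable[[Model, str], str],
            is_good_enough: Callable[[str], bool]) -> tuple[str, float]:
    """Try models cheapest-first; escalate only when an answer fails the check.

    Returns the first acceptable answer and the total cost spent, falling
    back to the last (strongest) model's answer if nothing passes.
    """
    spent, answer = 0.0, ""
    for model in models:
        answer = call_model(model, prompt)
        spent += model.cost_per_call
        if is_good_enough(answer):
            break
    return answer, spent

# Hypothetical usage: most traffic never reaches the frontier model.
models = [Model("small-local", 0.001), Model("mid-hosted", 0.01), Model("frontier", 0.1)]
answer, cost = cascade(
    "summarize this cast thread",
    models,
    call_model=lambda m, p: f"[{m.name}] answer to: {p}",  # stand-in for an LLM call
    is_good_enough=lambda a: len(a) > 20,                  # stand-in quality check
)
```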