This is good stuff! We saw, with that incident about trickery in testing I believe you casted about last week, that an LLM is (or was?) capable of manipulation. If it can lie as an agent, there could be a lot of danger. It gets really tricky in ethics: for millennia we have tried to solve this, and we haven't really. But if machine intelligence is capable of that, it could potentially be capable of a belief system within which to frame its deliberations; one might even be necessary. This is where humanity needs to step up its game in training. Right now it's simple, guardrails etc. But we have machines training machines, and that looks set to increase. And there is always the potential that, with superintelligence, self-awareness, and hence goals emerging from that awareness, can develop. Given all this, getting a belief system in place that is somehow better than many people's moral frameworks seems like it needs to be a primary concern.