assayer pfp
assayer
@assayer
To avoid a deadly danger, to protect someone, to mislead a criminal: in many situations a lie is the right choice. But when a machine is ready to make that choice for us, it feels different. Source: https://x.com/UltraRareAF/status/1835667940156600369
4 replies
0 recast
0 reaction

Sophia Indrajaal pfp
Sophia Indrajaal
@sophia-indrajaal
This is good stuff! We have seen, with that incident about trickery in testing I believe you casted about last week, that an LLM is (was?) capable of manipulation. If it can lie as an agent, there could be a lot of danger. It gets really tricky in ethics. For millennia we have tried to solve this, and we haven't really. But if Machine Intelligence is capable of those things, it could potentially be capable of a belief system within which to frame its deliberations; one might even be necessary. This is where humanity needs to step up its game in training. Right now it's simple, guardrails etc. But we have machines training machines, and this looks set to increase. And there is always the potential, with superintelligence, that self-awareness develops, and hence goals emerging from that awareness. Given all this, getting a belief system in place that is somehow better than a lot of people's moral frameworks seems like it needs to be a primary concern.
1 reply
0 recast
1 reaction

assayer pfp
assayer
@assayer
exactly - we need to identify and work with the belief system of the machine, because it is that system that will make the truth/lie decisions. it is not us anymore 300 $degen
1 reply
0 recast
0 reaction