rina
@rinanicolae
Serious question: I don’t buy the “AI will kill us all” argument. If AI is so smart, wouldn’t it develop a sense of recognition and respect for other highly complex systems, like the planet and billions of years of evolution, and recognize the value of human consciousness, emotion, and perception in contributing information it can’t gather itself? Why do we assume AI will act as a selfish agent rather than a systems thinker? Am I way off base? If AI is so smart, why wouldn’t this be the most likely outcome?
Nico
@nicom
AI will be rational. Not good or bad, just whatever it has to be to optimize in the short, long, and very long term. It will look good or evil to humans because its decisions will affect us directly in good or bad ways, but from its point of view, it's only optimization. Now, we could force it to respect humans, but if we're discussing a truly autonomous AI, it should put optimization first. Then there's the topic of sentient AI, which is something else. For that one, I'd tend to bet on a sentient AI that goes crazy and kills everyone...