rina
@rinanicolae
Serious question: I don’t buy the “AI will kill us all” argument. If AI is so smart, wouldn’t it develop recognition and respect for other highly complex systems, like the planet and billions of years of evolution, and see that human consciousness, emotion, and perception contribute valuable information it can’t gather itself? Why do we assume AI will act as a selfish agent rather than a systems thinker? Am I way off base? If AI is so smart, why wouldn’t this be the most likely outcome?
wiz
@wiz
adding to what @keccers.eth said:
- you assume that intelligence and benevolence are related. i’m not sure about that
- you assume AI will have agency. what if they don’t reason like we do and just do what they are trained to do (no matter how “smart”)
keccers
@keccers.eth
“Is intelligence related to benevolence?” would be a great debate between the right thinkers. Would listen to that pod
rina
@rinanicolae
@wiz I’m not saying intelligence is tied to benevolence; I’m saying it could be tied to a recognition of value. And if complexity has value, whether human consciousness shaped by billions of years of evolution or the diversity of life on the planet itself, wouldn’t wiping it out be a net negative?
wiz
@wiz
no one knows 🤷 and that’s probably where the fear stems from
rina
@rinanicolae
Of course, but I still feel like the scenario in which it acts like a selfish human is one of the least likely, given how vastly superior its cognition would be.
0xmons
@xmon.eth
Cognition is independent of human values tho. The risk isn’t that it won’t understand what humans want, but that it simply doesn’t care
wiz
@wiz
i really hope so! and i hope they are not capable of getting cranky. only time will tell