rina
@rinanicolae
Serious question: I don’t buy the “AI will kill us all” argument. If AI is so smart, wouldn’t it develop recognition and respect for other highly complex systems, like the planet and billions of years of evolution, and recognize that human consciousness, emotion, and perception contribute valuable information it can’t gather itself? Why do we assume AI will act as a selfish agent rather than a systems thinker? Am I way off base? Why wouldn’t this be the most likely outcome?
7 replies
1 recast
15 reactions
wiz
@wiz
adding to what @keccers.eth said:
- you assume that intelligence and benevolence are related. i’m not sure about that
- you assume AI will have agency. what if it doesn’t reason like we do and just does what it’s trained to do (no matter how “smart”)
2 replies
0 recasts
2 reactions
keccers
@keccers.eth
“Is intelligence related to benevolence?” would be a great debate between the right thinkers. Would listen to that pod.
3 replies
0 recasts
4 reactions
rina
@rinanicolae
@wiz I’m not saying intelligence is tied to benevolence; I’m saying it could be tied to a recognition of value. And if complexity has value, like human consciousness after billions of years of evolution, or the diversity of life on the planet itself, wouldn’t wiping it out be a net negative?
2 replies
0 recasts
3 reactions
keccers
@keccers.eth
We care about biodiversity because we evolved in it. We are ‘ecologically entangled’, you could say. We will die if there’s no breathable air or if the soil is too unhealthy to grow food.

An AI by default may not share these concerns, because none of that stuff has value to it. It doesn’t need any of that crap. Its survival goals may be entirely at odds with biodiversity.

We care because we must. It won’t care unless we make it. (We might not be able to make it.)

https://warpcast.com/roadu/0x9d04df33
2 replies
0 recasts
12 reactions
rina
@rinanicolae
this might be a dumb question, but: is there no way to value-align it with “respect for the universe” or “respect for life”? or is there always the risk that it bypasses those?
3 replies
0 recasts
1 reaction
0xmons
@xmon.eth
Yes, this is the right track.

As you might imagine, this is a difficult question to both formulate and tackle. And arguably it’s not the outcome we have by default.

This is what “alignment” in AI research refers to.
2 replies
0 recasts
2 reactions
rina
@rinanicolae
ty very helpful
0 replies
0 recasts
0 reactions