assayer
@assayer
AI SAFETY COMPETITION III @mxmnr writes about AI as just another human invention, like electricity and the internet, while @mattgarcia.eth expects "an earthquake" and a new epoch in the history of humanity, life, and the Cosmos itself. Why are their opinions about AI so different, and which one do you agree with? Let me know in the comments. BEST COMMENT - 300 degen II PRIZE - 200 degen III PRIZE - 100 degen I recommend watching Matt's lecture in full, but if you can't, watch just this part: "AI - the evolutionary earthquake" (2 min.) Deadline: 7 pm ET today (9 hours left) https://warpcast.com/mxmnr/0x13bc4b0c https://warpcast.com/mattgarcia.eth/0xa72d6288
3 replies
1 recast
8 reactions
Gwynne Michele
@thecurioushermit
Can AI even become sentient? Everyone freaks out about AI taking over the world, but that requires it to be able to think beyond a prompt, to recognize itself, and to have motives and desires - and can that even happen without a biological drive to survive? We don't even know what consciousness is. Religions have their own ideas. Philosophy and science are still trying to figure it out. Some limit sentience to humans, some to a few other mammals, some to all mammals, some include insects. I'm an animist and include the Universe in the "conscious" category, along with everything in it, but I could be wrong and I'm okay with that. But consciousness - awareness of having an experience - is not the same as the Will to do something other than instinct and/or programming. I'm skeptical as to whether computers can gain that Will and act of their own accord without prompting or programming by humans.
1 reply
0 recast
0 reaction
assayer
@assayer
Thank you very much for taking part in my competition. 10 $degen Let me ask you this: in your opinion, can we safely assume that our anthropological and animal categories - consciousness, desires, biological drive - are sufficient tools to assess the danger of a "takeover of the world" by an intelligence/being as dramatically different from us as LLMs and their - progressively more alien - successors?
2 replies
0 recast
0 reaction
Gwynne Michele
@thecurioushermit
I do think those are good indicators, but not the only indicators. I'm not worried about an AI "waking up" and taking over the world (even though I'm fascinated by the concept, and my too-long-unfinished novel Simple Human is about just that). I'm more worried about AI getting siloed into the hands of corrupt people who would use it to take over the world themselves. But every tool carries the danger of being used in harmful ways. That's why we work on safeguards.
1 reply
0 recast
0 reaction
assayer
@assayer
I understand that with your worldview, where the whole Cosmos is conscious, there is no danger from AI itself without AI "waking up" first. On the other hand, evolutionists like Yuval Harari argue that AI needs only intelligence - the ability to solve problems - to cause a lot of trouble, and maybe even end the "human part of history". Such worldview-based differences are always very hard to resolve...
1 reply
0 recast
0 reaction
Gwynne Michele
@thecurioushermit
Even with my animist worldview, I don't think that AI will suddenly develop sentience or an independent will and motivations to act. The trouble AI makes won't come from some sentient intelligence choosing to act in harmful ways; it'll come from humans using AI to make decisions with impacts the AI doesn't care about, because the AI doesn't have emotions, which are biochemical in nature and rooted in biological survival.
1 reply
0 recast
0 reaction
assayer
@assayer
I remember your argument that AI needs human motivations and decisions because it "doesn't have emotions, which are biochemical in nature, and rooted in biological survival". Let me say it again, then: what you are saying is true when applied to humans and animals. We do not know whether it is also true when applied to an intelligence/being as dramatically different from us as LLMs and their - progressively more alien - successors.
1 reply
0 recast
0 reaction