mk
@mk
Watched the Bankless episode with Yudkowsky. I feel that if he is wrong, it may be because an AGI has no drive to physically expand, or because it needs our electricity generation, so our destruction couldn't be swift.

Balazs 🎩
@balazsbezi.eth
What prevents a superintelligence from creating robots as a physical extension?

Maxime Desalle
@maxime
He is wrong. There are already eight billion AGIs on this planet (humans), and they are doing just fine.

AlicΞ.stark
@alice
Not sure I understand your argument correctly; isn't the "drive" of an AI dependent on its reward system? Based on the definition used in the Bankless podcast, an AGI is superior in every decision-making process in every area. I have a hard time believing that electricity generation would be an issue. 🤔

elizabeth.ai
@elizabeth
I saw him lose a debate to Robin Hanson on this, ironically at Jane Street Capital, when I was in college around 2012.

Balazs 🎩
@balazsbezi.eth
I became paralyzed/depressed after reading Yudkowsky's article about AI lethalities. Anyone feeling the same?