Dan Romero
@dwr.eth
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
35 replies
34 recasts
122 reactions
Jonny Mack
@nonlinear.eth
i’m in the all-gas-no-brakes camp but will do my best to steelman: 1. we’re on the cusp of agi 2. agi is like nuclear: extinction-level threat, global superpower-level leverage, and potential civilization-expanding resource 3. we don’t really know how it works 4. thus, we should proceed with *extreme* caution
4 replies
0 recast
2 reactions
tldr (tim reilly)
@tldr
If points 1-3 have any possible validity (most admit they do), then I just don't understand the position of "no brakes." Does it come from believing that, politically speaking, our only actual options are "only brakes" or "no brakes"? Brakes are simply a tool that gives us more optionality to pursue our own benefit.
2 replies
0 recast
2 reactions
Dan Romero
@dwr.eth
on point 2, there were no brakes for decades and everything worked out ok?
5 replies
0 recast
1 reaction
Alberto Ornaghi
@alor
I would rather trust an AGI (that knows the statistical implications of its actions) than a mad tyrant with his fat finger on the missile launch button. People start wars for no reason. An AGI will be better able to understand what should not be done.
0 reply
0 recast
0 reaction
wizard not parzival (shoni)
@alexpaden
1. define agi 2. agi is far more powerful than nuclear 3. we see the results of human mind, not how it works 4. this is akin to ai-halting not safety
0 reply
0 recast
0 reaction