Dan Romero pfp
Dan Romero
@dwr.eth
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
42 replies
11 recasts
59 reactions

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
i’m in the all-gas-no-brakes camp but will do my best to steelman: 1. we’re on the cusp of agi 2. agi is like nuclear: extinction-level threat, global superpower-level leverage, and potential civilization expanding resource 3. we dont really know how it works 4. thus, we should proceed with *extreme* caution
4 replies
0 recast
3 reactions

Dan Romero pfp
Dan Romero
@dwr.eth
on point 2, there were no brakes for decades and everything worked out ok?
5 replies
0 recast
1 reaction

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
only if by “no brakes” you mean complete u.s. government control. by that definition i’m pretty sure the ai safety folks would agree with you
1 reply
0 recast
1 reaction

Dan Romero pfp
Dan Romero
@dwr.eth
So the USG control will stop people in other countries from making progress?
3 replies
0 recast
0 reaction

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
what would this list look like if the usg didn’t become the world police post-ww2? https://en.m.wikipedia.org/wiki/List_of_states_with_nuclear_weapons
1 reply
0 recast
1 reaction

Thomas pfp
Thomas
@aviationdoctor.eth
As someone in another country who would prefer that it’s not just left to the control of the USG, I would prefer a UN resolution on what constitutes safe AGI research, with as many relevant states as possible signing the convention (chiefly those that are likely to make breakthroughs).
0 reply
0 recast
1 reaction

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
yes? it more-or-less did with nuclear?
0 reply
0 recast
1 reaction