Dan Romero pfp
Dan Romero
@dwr.eth
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
41 replies
34 recasts
118 reactions

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
i’m in the all-gas-no-brakes camp but will do my best to steelman: 1. we’re on the cusp of agi 2. agi is like nuclear: extinction-level threat, global superpower-level leverage, and potential civilization expanding resource 3. we dont really know how it works 4. thus, we should proceed with *extreme* caution
4 replies
0 recast
2 reactions

tldr (tim reilly) pfp
tldr (tim reilly)
@tldr
If points 1-3 have any possible validity (most admit they do), then I just don't understand the position of "no brakes." Does it come from believing that, politically speaking, our only actual options are "only brakes" or "no brakes"? Brakes are simply a tool that gives us more optionality to pursue our own benefit
2 replies
0 recast
2 reactions

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
1. there are many “brakes” (constraints) already in place https://warpcast.com/nonlinear.eth/0xc4528a04 2. pessimistic of state capacity and top-down, command and control approaches 3. optimistic of markets and bottom-up emergence of order 4. potential upside worth the downside risk
1 reply
0 recast
1 reaction

notdevin pfp
notdevin
@notdevin.eth
Team no brakes, because I've seen no credible arguments for it being around the corner. I got there from listening to and reading the researchers for the past 13 years and staying current in the space. It seems that only a minority of credible researchers hold this view, and their arguments are full of logical fallacies
1 reply
0 recast
1 reaction