Dan Romero pfp
Dan Romero
@dwr.eth
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
41 replies
34 recasts
118 reactions

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
i’m in the all-gas-no-brakes camp but will do my best to steelman: 1. we’re on the cusp of agi 2. agi is like nuclear: extinction-level threat, global superpower-level leverage, and potential civilization-expanding resource 3. we don’t really know how it works 4. thus, we should proceed with *extreme* caution
4 replies
0 recast
2 reactions

Dan Romero pfp
Dan Romero
@dwr.eth
on point 2, there were no brakes for decades and everything worked out ok?
4 replies
0 recast
1 reaction

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
only if by “no brakes” you mean complete u.s. government control. by that definition i’m pretty sure the ai safety folks would agree with you
1 reply
0 recast
0 reaction

Vitalik Buterin pfp
Vitalik Buterin
@vitalik.eth
For decades we were not anywhere close to important thresholds for how much actual compute is available relative to how much is in our brains. Now we are.
1 reply
2 recasts
52 reactions

christopher pfp
christopher
@christopher
except for a few incidents where we feared the extinction of humanity and all biological life, yeah
0 reply
0 recast
0 reaction

Mike pfp
Mike
@mikejuz
Enriching enough uranium for a bomb is a lot harder than gathering GPUs. Also, people understood that big bomb = bad. People don’t seem to care about/understand the risks of non-aligned AGI
0 reply
0 recast
0 reaction