Dan Romero pfp
Dan Romero
@dwr.eth
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
41 replies
34 recasts
118 reactions

tldr (tim reilly) pfp
tldr (tim reilly)
@tldr
The technology is obviously epochal, but its path is still largely unknown. It’s appropriate to have at least *some* humility toward “known unknowns” of a historical magnitude.
1 reply
0 recast
7 reactions

Dan Romero pfp
Dan Romero
@dwr.eth
When has that ever happened in history?
4 replies
0 recast
4 reactions

tldr (tim reilly) pfp
tldr (tim reilly)
@tldr
A high level of care was present in the development of nuclear bombs. E.g., intense secrecy and multiple layers of safeguarding before possible deployments. This isn't "apples to apples", of course, because nukes were a "known" danger, whereas AI is a "possible" danger. But some amount of this humility in makers feels right here, too.
1 reply
0 recast
2 reactions

Dan Romero pfp
Dan Romero
@dwr.eth
But nukes moved forward and nothing bad happened between nuclear powers? Instead we got a moral panic about nuclear energy and stagnated?
3 replies
0 recast
1 reaction

tldr (tim reilly) pfp
tldr (tim reilly)
@tldr
But nukes were not "radically accelerated". We tightly controlled who could have them, implemented safeguards, etc., and this – along with MAD – has led to a very beneficial equilibrium. Isn't it possible to believe *some* safety can be good while also believing safety can go too far? (e.g., banning nuclear energy)
2 replies
0 recast
2 reactions

danny iskandar pfp
danny iskandar
@daniskandar
Who will control what counts as 'safety'? Although I kind of agree that there should be a level of 'safety' from whoever is developing the LLM, that power over safety has to be decentralized, meaning LLMs have to be open source. https://x.com/JosephJacks_/status/1728510229133119644?s=20
0 reply
0 recast
1 reaction

danny iskandar pfp
danny iskandar
@daniskandar
Define safety
1 reply
0 recast
1 reaction