Dan Romero pfp
Dan Romero
@dwr.eth
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
33 replies
11 recasts
61 reactions

tldr (tim reilly) pfp
tldr (tim reilly)
@tldr
The technology is obviously epochal, but its path is still largely unknown. It’s appropriate to have at least *some* humility toward “known unknowns” of a historical magnitude.
1 reply
0 recast
8 reactions

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
i’m in the all-gas-no-brakes camp but will do my best to steelman: 1. we’re on the cusp of agi 2. agi is like nuclear: extinction-level threat, global superpower-level leverage, and potential civilization-expanding resource 3. we don’t really know how it works 4. thus, we should proceed with *extreme* caution
4 replies
0 recast
3 reactions

Ben 🟪 pfp
Ben 🟪
@benersing
At first, auto manufacturers made similar arguments against requiring seatbelts in cars. Imagine what we’d be saying about our grandparents and great-grandparents if that line of thinking had prevailed.
2 replies
0 recast
1 reaction

six pfp
six
@six
if toddlers were able to make an adult human from scratch would they be able to control it?
1 reply
0 recast
1 reaction

petar.xyz pfp
petar.xyz
@petar
I think all new versions of AI should be tested in some sort of a focus group before releasing them to the public.
2 replies
0 recast
0 reaction

wartime art hoe pfp
wartime art hoe
@ivy
a machine looking at the right set of KPIs for humans would reach unfavourable conclusions on what should be done with us
1 reply
0 recast
0 reaction

Choong Ng pfp
Choong Ng
@choong
"AI safety" is mostly a distraction from the real dangers of human bad actors. People and people organized as corporations are more than capable of doing harm at scale with whatever technology is available.
1 reply
0 recast
13 reactions

0xen 🎩 pfp
0xen 🎩
@0xen
if those creepy boston dynamics hell-dogs become sentient it's over
2 replies
2 recasts
14 reactions

GIGAMΞSH pfp
GIGAMΞSH
@gigamesh
I'm in the Sam Harris camp. Intelligence is the critical ingredient for all technology so we should treat AGI like summoning a god. But I'm less concerned than I used to be because I think the safety path is becoming clear: everyone on earth participating in continuous RLHF of the biggest models.
1 reply
0 recast
4 reactions

grin↑ pfp
grin↑
@grin
AI safety is a nuanced discussion that’s very hard to have in cast-length format but if you must: the appropriate balance of novelty and safety is generally good, and denying the need for either is ridiculous
0 reply
0 recast
3 reactions

Joe Blau 🎩 pfp
Joe Blau 🎩
@joeblau
Ilya put it best. Humans love animals, but when we want to build a road between two cities, we don’t ask the animals for permission. AGI will treat us the same way.
1 reply
0 recast
2 reactions

not parzival pfp
not parzival
@shoni.eth
safety != halt // should the government have privileged information, or should we release everything worldwide tonight?
1 reply
0 recast
2 reactions

BBB 👊 pfp
BBB 👊
@jianbing.eth
Naming your next iteration "Q" anything in 2023 shows a worrisome lack of foresight and intelligence.
0 reply
0 recast
1 reaction

Connor McCormick pfp
Connor McCormick
@nor
Ah shoot I missed this! Do you want the most compelling argument about the needs for AI Safety or do you want the most compelling argument about the policies to achieve it?
0 reply
0 recast
1 reaction

fredrik pfp
fredrik
@fredrik
mitigation via giving us more time to understand the existential-risk possibility for humanity as a whole. a small number of tech bros shouldn't be allowed to create existential risk for human life as we know it on earth
0 reply
0 recast
1 reaction

Gregor pfp
Gregor
@gregor
Incentivize transparency & accountability before gov steps in with club foot. E.g.: a user passes a short exam to be able to use advanced models. If a prompt raises a red flag, the user relinquishes privacy in order to continue their task. Open-source safety tools. Subsidies for startups building AI safety tools.
0 reply
0 recast
1 reaction

Sam Iglesias pfp
Sam Iglesias
@sam
We don’t know what perverse path it might take to maximize its objective function, nor do we have a handle on how to craft its objective function to balance beneficence, human autonomy, and non-maleficence. We sort of see this with recommendation algos already.
1 reply
0 recast
1 reaction

Q🎩 pfp
Q🎩
@qsteak.eth
Once someone can give me the exact process to 100% GUARANTEE a child is raised to NOT be a murderer, then I’ll start to think we might have ANY idea how to control another, possibly superior, intelligence.
0 reply
0 recast
1 reaction

Britt Kim pfp
Britt Kim
@brittkim.eth
i believe governments will use AI for subjugation. for the safety of a free citizenry, we need restrictions on government usage.
0 reply
0 recast
1 reaction