Dan Romero pfp
Dan Romero
@dwr.eth
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
38 replies
10 recasts
54 reactions

tldr (tim reilly) pfp
tldr (tim reilly)
@tldr
The technology is obviously epochal, but its path still largely unknown. It’s appropriate to have at least *some* humility toward “known unknowns” of a historical magnitude.
1 reply
0 recast
8 reactions

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
i’m in the all-gas-no-brakes camp but will do my best to steelman: 1. we’re on the cusp of agi 2. agi is like nuclear: extinction-level threat, global superpower-level leverage, and potential civilization-expanding resource 3. we don’t really know how it works 4. thus, we should proceed with *extreme* caution
4 replies
0 recast
2 reactions

six pfp
six
@six
if toddlers were able to make an adult human from scratch would they be able to control it?
1 reply
0 recast
1 reaction

Ben - [C/x] pfp
Ben - [C/x]
@benersing
At first, auto manufacturers made similar arguments against requiring seatbelts in cars. Imagine what we’d be saying about our great-grandparents if that line of thinking had prevailed.
2 replies
0 recast
1 reaction

petar.xyz pfp
petar.xyz
@petar
I think all new versions of AI should be tested in some sort of a focus group before releasing them to the public.
2 replies
0 recast
0 reaction

Ivy pfp
Ivy
@ivy
a machine looking at the right set of KPIs for humans would reach unfavourable conclusions on what should be done with us
1 reply
0 recast
0 reaction

grin pfp
grin
@grin
AI safety is a nuanced discussion that’s very hard to have in cast-length format but if you must: the appropriate balance of novelty and safety is generally good, and denying the need for either is ridiculous
0 reply
0 recast
3 reactions

GIGAMΞSH pfp
GIGAMΞSH
@gigamesh
I'm in the Sam Harris camp. Intelligence is the critical ingredient for all technology so we should treat AGI like summoning a god. But I'm less concerned than I used to be because I think the safety path is becoming clear: everyone on earth participating in continuous RLHF of the biggest models.
1 reply
0 recast
3 reactions

Joe Blau 🎩 pfp
Joe Blau 🎩
@joeblau
Ilya put it best. Humans love animals, but when we want to build a road between two cities, we don’t ask the animals for permission. AGI will treat us the same way.
1 reply
0 recast
2 reactions

keccers pfp
keccers
@keccers.eth
p(doom) and all of that is a distraction. when I think of ai safety I mostly think of things like
+ how to appropriately compensate people whose copyrighted work was scraped for training data w/out consent
+ a shared standard for proof of humanity
+ how to allow ppl to opt out of being training data in future
0 reply
0 recast
2 reactions

wizard not parzival pfp
wizard not parzival
@shoni.eth
safety != halt // should the government have privy information or should we release everything worldwide tonight
1 reply
0 recast
2 reactions

BC 🍈 pfp
BC 🍈
@jianbing.eth
Naming your next iteration "Q" anything in 2023 shows a worrisome lack of foresight and intelligence.
0 reply
0 recast
1 reaction

Connor McCormick ☀️ pfp
Connor McCormick ☀️
@nor
Ah shoot I missed this! Do you want the most compelling argument about the needs for AI Safety or do you want the most compelling argument about the policies to achieve it?
0 reply
0 recast
1 reaction

fredrik pfp
fredrik
@fredrik
mitigation via giving us more time to understand the existential-risk possibility for humanity as a whole. a small number of tech bros shouldn't be allowed to create existential risk for human life as we know it on earth
0 reply
0 recast
1 reaction

Gregor pfp
Gregor
@gregor
Incentivize transparency & accountability before gov steps in with club foot. E.g.:
User passes a short exam to be able to use advanced models.
If a prompt tips a red flag, user relinquishes privacy in order to continue their task.
Open-source safety tools.
Subsidies for startups building AI safety tools.
0 reply
0 recast
1 reaction

Sam Iglesias pfp
Sam Iglesias
@sam
We don’t know what perverse path it might take to maximize its objective function, nor do we have a handle on how to craft its objective function to balance beneficence, human autonomy, and non-maleficence. We sort of see this with recommendation algos already.
1 reply
0 recast
1 reaction

Q🎩 pfp
Q🎩
@qsteak.eth
Once someone can give me the exact process to 100% GUARANTEE a child is raised to NOT be a murderer, then I’ll start to think we might have ANY idea how to control another, possibly superior, intelligence.
0 reply
0 recast
1 reaction

Britt Kim pfp
Britt Kim
@brittkim.eth
i believe governments will use AI for subjugation. for the safety of a free citizenry, we need restrictions on government usage.
0 reply
0 recast
1 reaction

𒂠_𒍣𒅀_𒊑 pfp
𒂠_𒍣𒅀_𒊑
@m-j-r
not the typical safety, but I hope for the best design and degree of public transparency & unconditional access. my opinion is that we should allocate more capital into specializing & downscaling the models, as well as formalizing & accelerating the valid data/performance/responsibilities of the global public infra.
0 reply
0 recast
1 reaction