Dan Romero
@dwr.eth
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
38 replies
10 recasts
54 reactions
petar.xyz
@petar
I think all new versions of AI should be tested with some sort of focus group before being released to the public.
2 replies
0 recast
0 reaction
Dan Romero
@dwr.eth
What would that accomplish?
1 reply
0 recast
0 reaction
petar.xyz
@petar
I imagine the people in the focus group will catch it if the AI is acting in unexpected ways. Therefore, it’ll be fixed before it’s released into the wild.
2 replies
0 recast
0 reaction
Mike
@mikejuz
A possible problem with this method is that IF an AGI is created, it already knows that humans would feel threatened. It’s already run every possible scenario. The AGI won’t announce itself if it’s in a confined environment.
1 reply
0 recast
0 reaction
Dan Romero
@dwr.eth
Hmmm, this seems vague? What are we worried about that a focus group will catch?
1 reply
0 recast
0 reaction