Vitalik Buterin
@vitalik.eth
AI discourse on twitter be like... It feels like there's a large divergence in perceptions at the moment. Some people saying "OpenAI abandoned AI safety, that's proof AI safety is NGMI", some other people saying "OpenAI abandoned AI safety, that's proof OpenAI is NGMI". And I'm sure OpenAI itself has a different view
19 replies
484 recasts
1869 reactions

MingMing13 (>^_^)>☕️<(^_^<)
@mingming13.eth
What do you think, vitalik? AI safety is a lot like nuclear safety imo at this point. Pandora's box has already been opened, and now all that is left is for hardware to improve and imagination to put it together. And thus, eventually, we'll all evolve into computers.
1 reply
0 recast
9 reactions

behkod.eth 🎩📚🧑‍💻
@behkod.eth
Some say it's the *Safety* that has neither OpenAI nor AI
0 reply
0 recast
5 reactions

Supertaster.degen.eth 🎩
@supertaster.eth
What’s your view on AI safety? Should it be regulated more? I’m usually against restrictions, but this could get out of hand.
0 reply
0 recast
4 reactions


notdevin
@notdevin.eth
Engineer: never optimize too early
Same engineer: ai safety is everything
0 reply
0 recast
2 reactions

helladj🇺🇸
@helladj.eth
ai is diluting social networks - that much is true
0 reply
0 recast
2 reactions

Greg Lang
@designheretic
More and more I think an open source horse race is the best shot we have at getting a benevolent ASI. In a hard-takeoff scenario, having many takeoffs happen all at once means our singleton has the greatest chance of being non-malicious compared to other competing (realistic) approaches.
0 reply
1 recast
0 reaction

Connor McCormick
@nor
Do you think Ariele is sincerely worried about Sky resulting in repressive AI regulation?
0 reply
0 recast
1 reaction

jenny.degen 🎩
@cryptojenny
True
0 reply
0 recast
1 reaction

WetSocks💦🧦🎩🍖🧾
@eddweather
Vitalik, thank you for being HIMOTHY!
0 reply
0 recast
1 reaction

oliver
@oliverk120.eth
well… glad you donated all that money to the AI doomers, since no one else seems to be taking safety seriously anymore…
0 reply
0 recast
1 reaction

Ashish📄
@iashish.eth
@mk talk!
0 reply
0 recast
1 reaction

Christian Montoya 🦊
@m0nt0y4
Discourse for any hot topic is like this. Massive divergence in how anything is interpreted.
0 reply
0 recast
0 reaction

Greg Lang
@designheretic
The least-viable approach to getting benevolent ASI, on the other hand, is to leave it in the hands of entities like the USG to gatekeep who is allowed to try and what they’re allowed or not allowed to do in the effort. In this scenario they’ll commission the MIC to build Skynet and then defend their moat with regs.
0 reply
0 recast
0 reaction


Never Famous Artists🎩🫂
@nfamousartists
That happens to all discourse. Each party has its side of the story.
0 reply
0 recast
0 reaction

D (Keeper of Cr00d) 🍖🦖🎩
@xdtox
The horse is out of the barn. No matter what rules are put in place for good actors, that will not stop the proliferation of bad actors. The only option is to push good actors ahead of bad actors at all possible junctures and in every possible way. Folks need to wake up.
0 reply
0 recast
0 reaction

aaron.degen.eth 🤡🍌🔲
@aaronv.eth
the first problem is we went to twitter
0 reply
0 recast
0 reaction

cav4lier
@cav4lier
lambo soon
0 reply
0 recast
0 reaction