assayer
@assayer
AI SAFETY COMPETITION (31)

In tabletop "war games" about the future of AI alignment, players take on various roles: AIs, CEOs, or alignment teams from AI companies, while others play the US president, the public, the press, China, Russia, and more. After 25 games, Daniel Kokotajlo observes that players often get distracted by chaotic events and lose focus on whether the AIs are really aligned. As a result, months pass, the AIs grow ever more intelligent, and humans trust them to automate data centers and all research. They ask the AIs for strategic advice and deploy them aggressively in the military to win wars. Only when the AIs become broadly superintelligent do people start to panic, and by then it is usually too late.

Best comment - 300 degen + 3 mln aicoin
II award - 200 degen + 2 mln aicoin
III award - 100 degen + 1 mln aicoin

Deadline: 8:00 pm ET tomorrow, Monday (26 hours)

https://www.youtube.com/watch?v=2Ck1E_Ii9tE
2 replies
1 recast
4 reactions

Shamim Hoon🥕🎩Ⓜ️
@shamimarshad
This scenario highlights a concerning pattern in our approach to AI development. We often get caught up in short-term gains and overlook the long-term risks. It's alarming that players in these simulations tend to prioritize immediate benefits over ensuring AI alignment, only to face catastrophic consequences when it's too late. This underscores the need for a more cautious and thoughtful approach to AI development, prioritizing alignment and safety from the outset.
1 reply
0 recast
1 reaction

assayer
@assayer
Thank you, II award! 200 $degen + 2 mln aicoin
0 reply
0 recast
1 reaction