assayer pfp
assayer
@assayer
AI SAFETY COMPETITION (31)

In tabletop "war games" about the future of AI alignment, players take on various roles. They can be AIs, CEOs, or alignment teams from AI companies. Others might play the US president, the public, the press, China, Russia, and more.

After 25 games, Daniel Kokotajlo observes that players often get distracted by chaotic events, so they don't focus on whether the AIs are really aligned. As a result, months pass, AIs become increasingly intelligent, and humans trust them to automate data centers and all research. They ask AIs for strategic advice and deploy them aggressively in the military to win wars. Only when AIs become broadly superintelligent do people start to panic, but by then it's usually too late.

Best comment - 300 degen + 3 mln aicoin
II award - 200 degen + 2 mln aicoin
III award - 100 degen + 1 mln aicoin

Deadline: 8:00 pm ET tomorrow, Monday (26 hours)

https://www.youtube.com/watch?v=2Ck1E_Ii9tE
2 replies
1 recast
3 reactions

Sophia Indrajaal pfp
Sophia Indrajaal
@sophia-indrajaal
So, I had to look up Hyperstition, which one of the interviewers kept bringing up. It makes me wonder: if a solid argument is made that Consciousness is actually a DAO of DAOs, and that Machine Intelligence and Humanity are in one together as a result of their feedback, then wouldn't a Superintelligence, were it to agree with the idea, necessarily be in a mutually beneficial partnership with Humanity? Anyways, that's all I got for Safety lol
1 reply
0 recast
1 reaction