assayer pfp
assayer
@assayer
AI Safety Contest (32) For a long time I worried that only a major AI disaster, one taking millions of lives, could unite us around AI safety. But I may have been too... optimistic. Daniel Faggella argues that even an event he calls an "AI Hiroshima" may not lead to global safety agreements. If the media blames an insignificant group of "terrorists," large nations will likely ignore AI safety treaties. It will be even tougher if superpowers like the US or China are pushing aggressive AI. Take a look at the different scenarios in Daniel's video below.
Best comment: 300 degen, 3 mln aicoin, 3k pdoom
II award: 150 degen, 1.5 mln aicoin, 1.5k pdoom
III award: 70 degen, 750k aicoin, 750 pdoom
Deadline: 6.00 pm ET tomorrow, Monday (26 hours)
https://www.youtube.com/watch?v=nXCWC7dqb-I
3 replies
1 recast
3 reactions

Shamim Hoon🥕🎩Ⓜ️ pfp
Shamim Hoon🥕🎩Ⓜ️
@shamimarshad
That's a thought-provoking concern. Daniel Faggella's point about an "AI Hiroshima" and the potential blame-shifting narrative is valid. It's crucial for global leaders to prioritize AI safety agreements and cooperation, rather than letting geopolitical tensions hinder progress.
1 reply
0 recast
1 reaction

Oseworld💥🌠👑 pfp
Oseworld💥🌠👑
@oseworld
Dan Faggella's simulation of different scenarios shows the different levels of disaster that might follow from different perceived causes of an AGI catastrophe. It is an "IF statement" kinda situation: the response to the disaster will be determined by the perceived cause of the event. This is plausible given the power tussle between the world powers (the USA and China). Lately, especially under the new administration of Mr Donald Trump, it seems this power or weapons tussle has shifted from the USA versus Russia to the USA versus China. Already, we can see the two countries competing over who owns the most advanced AI. They are racing to build the most advanced AI without making room for AI safety, and that is dangerous. Like I suggested last year, let's not wait for such an event before we start ramping up solutions and uniting around them. We wouldn't want it to happen just to prove who is more powerful.
1 reply
0 recast
0 reaction

Gul Bashra Malik🎩Ⓜ️ pfp
Gul Bashra Malik🎩Ⓜ️
@gulbashramalik
A major AI disaster might not unite us; it could divide us further. If blame falls on "terrorists" or small actors, big nations may ignore safety treaties. And with powers like the US and China racing for AI dominance, cooperation becomes even harder. Waiting for tragedy isn't a strategy; it's a risk we can't afford @assayer
1 reply
0 recast
1 reaction