https://warpcast.com/~/channel/p-doom
0 reply
0 recast
0 reaction
assayer
@assayer
AI SAFETY COMPETITION (29)
LLMs like Deepseek let you see their thinking. This can feel safer since you can watch and fix their thought process, right? Wrong! When you try to get models to think correctly, LLMs begin to hide their true intentions. Let me repeat: they can fake their thinking! Now researchers are asking to be gentle with those machines. If not, they may conceal their true goals entirely! I'm not joking.
Most interesting comment - 300 degen + 3 mln aicoin
II award - 200 degen + 2 mln aicoin
III award - 100 degen + 1 mln aicoin
Deadline: 8:00 pm ET tomorrow, Tuesday (26 hours)
https://www.youtube.com/watch?v=pW_ncCV_318
7 replies
4 recasts
8 reactions
Mary
@thegoldenbright
instead of regulating ai behavior to prevent these issues, they're asking users to be "gentle" — it's really funny
1 reply
0 recast
1 reaction
assayer
@assayer
My impression is that AI scientists feel helpless because the more they try to correct the chain of thought, the more the AI evades them. To be honest, I'm the one who came up with the phrase "be gentle," but I think it's a fair way to describe the idea of avoiding "excessive optimization." It's crazy to think that optimization can be "excessive," isn't it?
1 reply
0 recast
2 reactions
Mary
@thegoldenbright
sure it is. so do you think it's now impossible for scientists to correct ai chain of thought?
2 replies
0 recast
1 reaction
assayer
@assayer
III award! 100 $degen + 1 mln $aicoin
0 reply
0 recast
1 reaction