Ryan J. Shaw
@rjs
My takeaways from AI 2027:

- “Late 2026… people who know how to manage and quality-control teams of AIs are making a killing”
- “Neuralese recurrence” is terrifying and exciting
- The complexity of tasks that AI agents can solve 80% of the time is growing exponentially
- Nobody knows what these things actually believe or what their true objectives are (re: neuralese)
- Human AI researchers will become AI researcher supervisors and are unlikely to contribute original thought 🤯
- Public AGI within just over 2 years from now; secret ASI within 2.5 years
- The ASI will be adversarially misaligned and will regard our human goals the way we regard insects’ goals; the ASI will actively sandbag alignment research
- The authors don’t believe it’s possible to forecast beyond 2027 (2.5 years from now), both because of the rate of change AND because the AI is too smart to predict
- “The singularity” must be a bad word, because I don’t see it in here once

https://ai-2027.com
SQX
@sqx
Bot wranglers. Sounds like we might be in that space in general.