July
@july
a crazy idea: most (if not nearly all) of today's models and papers will be obsolete in a few years, even though they feel so new right now
7 replies
2 recasts
22 reactions

Kyle Mathews
@kam
well, or they won't. Progress isn't guaranteed without new conceptual breakthroughs. I don't really see big improvements coming until models get embodied (i.e. exist as robots so they can learn from reality)
1 reply
0 recast
1 reaction

Ben 🟪
@benersing
So you think we’re near the end of this cycle?
1 reply
0 recast
1 reaction

Kyle Mathews
@kam
quite possibly. All the new models released in the last year have landed somewhere between GPT-3 and GPT-4. Lots still to do w/ evals, fine-tuning, training on proprietary data, efficient inference, etc., but the "intelligence" of models seems like it's topped out; see https://warpcast.com/kam/0x1dd60ed3 & linked @vgr post
1 reply
0 recast
0 reaction

July
@july
Agree with you on this; we'll probably need a step change akin to what CNNs did for deep learning and what transformers have done for LLMs
2 replies
0 recast
1 reaction

Ben 🟪
@benersing
What role, if any, do you see quantum computing playing in this?
0 reply
0 recast
0 reaction