Dan Romero
@dwr.eth
https://timkellogg.me/blog/2025/01/25/r1
3 replies
6 recasts
45 reactions
hellno the optimist
@hellno.eth
wasn't aware of this pattern > A long time ago (9 days), there was a prediction that GPT5 exists and that GPT4o is just a distillation of it. This article theorized that OpenAI and Anthropic have found a cycle to keep creating ever greater models by training big models and then distilling, and then using the distilled model to create a larger model. I'd say that the R1 paper largely confirms that that's possible (and thus likely to be what's happening).
0 reply
1 recast
2 reactions
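A minimal sketch of the distillation cycle the cast above describes: train a big teacher, distill it into a smaller student, then bootstrap an even larger next-generation model from the student's outputs. Everything here is an illustrative assumption, not anything from the R1 paper or the labs themselves: toy MLPs stand in for LLMs, and `make_model` / `distill` are hypothetical helpers.

```python
# Hedged toy sketch of the "train big -> distill -> scale up" cycle.
# Model widths, data, and losses are placeholders, not real lab details.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(width: int) -> nn.Module:
    # Toy stand-in for an LLM: a small two-layer MLP classifier.
    return nn.Sequential(nn.Linear(16, width), nn.ReLU(), nn.Linear(width, 8))

def distill(teacher: nn.Module, student: nn.Module, steps: int = 200) -> nn.Module:
    # Standard knowledge distillation: train the student to match the
    # teacher's soft output distribution with a KL-divergence loss.
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.randn(64, 16)  # stand-in for unlabeled prompts/data
        with torch.no_grad():
            soft_targets = F.softmax(teacher(x), dim=-1)
        loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                        soft_targets, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student

# One turn of the hypothesized cycle:
teacher = make_model(width=512)                      # the big frontier model
student = distill(teacher, make_model(width=64))     # its cheap distillation
next_gen = distill(student, make_model(width=1024))  # a larger model trained
                                                     # on the distilled model's outputs
```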
m_j_r
@m-j-r.eth
here, leaving this here as well https://trite-song-d6a.notion.site/Deepseek-R1-for-Everyone-1860af77bef3806c9db5e5c2a256577d @askgina.eth
0 reply
0 recast
3 reactions
DV
@degenveteran.eth
Linguistics
0 reply
0 recast
0 reaction