Dan Romero
@dwr.eth
https://timkellogg.me/blog/2025/01/25/r1
3 replies
6 recasts
45 reactions

hellno the optimist
@hellno.eth
wasn't aware of this pattern > A long time ago (9 days), there was a prediction that GPT5 exists and that GPT4o is just a distillation of it. This article theorized that OpenAI and Anthropic have found a cycle to keep creating ever greater models by training big models, distilling them, and then using the distilled model to create a larger model. I'd say the R1 paper largely confirms that that's possible (and thus likely to be what's happening).
0 reply
1 recast
2 reactions
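
The cast above describes a bootstrap loop: train a large frontier model, distill it into a small cheap model, then use the distilled model as the seed for the next, larger generation. A minimal toy sketch of that cycle, assuming invented `Model` fields and made-up growth/retention factors; none of this is any lab's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Model:
    params_b: int      # parameter count in billions (made up)
    capability: float  # abstract quality score (made up)

def train_frontier(seed: Model, scale: int) -> Model:
    # Train a much larger model, bootstrapped on data and labels
    # produced by the previous generation's deployed model.
    return Model(seed.params_b * scale, seed.capability * 1.5)

def distill(teacher: Model, shrink: int) -> Model:
    # Distillation: a small student trained to imitate the teacher's
    # outputs keeps most of its capability at a fraction of its size.
    return Model(teacher.params_b // shrink, teacher.capability * 0.9)

model = Model(params_b=10, capability=1.0)
for generation in range(3):
    frontier = train_frontier(model, scale=10)  # big internal model
    model = distill(frontier, shrink=10)        # small deployed model
    print(f"gen {generation}: {model.params_b}B params, "
          f"capability {model.capability:.2f}")
```

Each pass through the loop ends with a model roughly the size of the previous deployed one but more capable, which is the mechanism the linked blog post (and arguably the R1 paper) points at.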

m_j_r
@m-j-r.eth
here, leaving this here as well https://trite-song-d6a.notion.site/Deepseek-R1-for-Everyone-1860af77bef3806c9db5e5c2a256577d @askgina.eth
0 reply
0 recast
3 reactions

DV
@degenveteran.eth
Linguistics 😅
0 reply
0 recast
0 reaction