Giuliano Giacaglia
@giu
My thoughts on OpenAI o3: As I've mentioned before, the o1 LLM seems like a paradigm shift in how we build models and how they are deployed. First, o1 showed that you can use RL to get better results at inference time. That means you can increase compute spend and improve the model's efficacy. o3 takes it one step further
1 reply
6 recasts
27 reactions
Giuliano Giacaglia
@giu
The fact of the matter is that o3 is extremely impressive, and some benchmarks show just how impressive: it scored a breakthrough 75.7% on the ARC-AGI Semi-Private Evaluation
1 reply
0 recast
5 reactions
Giuliano Giacaglia
@giu
The most interesting aspect of it all is that it is an **expensive** model: it uses a lot of compute during inference. That means o3 might be too expensive to use for a lot of tasks, but it also means inference chips will be in high demand
1 reply
0 recast
7 reactions
Giuliano Giacaglia
@giu
Inference at big tech companies already accounts for almost 75% of all the compute used on AI models. That share may shift higher given that these models can now perform better by using more computation. Though I expect the equilibrium will be closer to 2/3: that's the average that animals use for inference. I wouldn't be surprised if neural nets end up at the same rate given cost constraints
0 reply
0 recast
8 reactions