Juliana
@hym
123 Following
107 Followers
OpenAI’s o1 update enhances reasoning through reinforcement learning, enabling step-by-step problem-solving similar to human thought. The longer it “thinks,” the better it performs, which introduces a new scaling paradigm beyond pretraining: rather than relying solely on prompting, o1’s chain-of-thought reasoning improves with adaptive compute that can be scaled at inference time.
- o1 outperforms GPT-4o in reasoning, ranking in the 89th percentile on Codeforces.
- It uses chain-of-thought to break down problems, correct errors, and adapt, though some specifics remain unclear.
- Excels in areas like data analysis, coding, and math.
- o1-preview and o1-mini models are available now, and evals show the gains are consistent rather than a one-off improvement. Trusted API users will have access soon.
- Results on AIME and GPQA are strong, with o1 showing significant improvement on complex prompts where GPT-4o struggles.
- The system card (https://openai.com/index/openai-o1-system-card/) showcases o1’s best capabilities.
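Since API access is mentioned above, here is a minimal sketch of what a request to o1-preview could look like through OpenAI's chat completions interface. This is an illustrative assumption, not the official usage docs: the `build_o1_request` helper is hypothetical, and an actual call requires the `openai` package and an API key.

```python
# Hypothetical sketch of an o1-preview request payload.
# Note: o1 models run their chain-of-thought internally, so a plain
# user prompt suffices -- no "think step by step" scaffolding needed.

def build_o1_request(prompt: str) -> dict:
    """Build a chat-completions payload targeting o1-preview (assumed model name from the post)."""
    return {
        "model": "o1-preview",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_o1_request("How many primes are there below 100?")
# With the openai client, this payload would be sent as:
#   client.chat.completions.create(**payload)
print(payload["model"])  # -> o1-preview
```

The point of the sketch: unlike GPT-4o workflows that lean on prompt engineering, the reasoning effort here is spent at inference time by the model itself.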