OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models
paper: arxiv.org/abs/2411.04905
project page: opencoder-llm.github.io
OpenCoder is a family of open and reproducible code language models, available in 1.5B and 8B parameter sizes, supporting both English and Chinese. Pretrained on 2.5 trillion tokens (90% raw code, 10% code-related web data) and fine-tuned with 4.5M high-quality examples, it achieves performance comparable to top-tier models. OpenCoder provides complete model weights, inference code, training data, data processing pipeline, experimental results, and detailed training logs.
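The stated pretraining mix works out as simple arithmetic; a quick sketch of the token split (numbers taken directly from the summary above):

```python
# Back-of-envelope breakdown of the OpenCoder pretraining mix:
# 2.5 trillion tokens, 90% raw code, 10% code-related web data.
total_tokens = 2.5e12

code_tokens = total_tokens * 0.90  # raw code portion
web_tokens = total_tokens * 0.10   # code-related web data portion

print(f"raw code: {code_tokens:.2e} tokens")  # ~2.25 trillion
print(f"web data: {web_tokens:.2e} tokens")   # ~0.25 trillion
```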
> OpenCoder releases model weights, inference code, data-cleaning scripts, synthetic data, checkpoints, and over 4.5 million fine-tuning entries, making it one of the most transparent models available.
> High-Quality Synthetic Data: Offers a synthetic data generation process and 4.5 million fine-tuning data entries, providing a strong foundation for training.