๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models
paper: arxiv.org/abs/2411.04905
project page: opencoder-llm.github.io

OpenCoder is a family of open and reproducible code language models, available in 1.5B and 8B parameter sizes, supporting both English and Chinese. Pretrained on 2.5 trillion tokens (90% raw code, 10% code-related web data) and fine-tuned with 4.5M high-quality examples, it achieves performance comparable to top-tier models. OpenCoder provides complete model weights, inference code, training data, data processing pipeline, experimental results, and detailed training logs.

> OpenCoder releases model weights, inference code, data-cleaning scripts, synthetic data, checkpoints, and over 4.5 million fine-tuning entries, making it one of the most transparent models available.
> High-Quality Synthetic Data: offers a synthetic data generation process and 4.5 million fine-tuning data entries, providing a strong foundation for training.
1 reply
0 recast
6 reactions

wizard not parzival pfp
wizard not parzival
@alexpaden
welcome back 👑
1 reply
0 recast
2 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
ty. what did I miss?
2 replies
0 recast
0 reaction

wizard not parzival pfp
wizard not parzival
@alexpaden
also https://warpcast.com/~/channel/papers-please
1 reply
0 recast
1 reaction

wizard not parzival pfp
wizard not parzival
@alexpaden
probably just the clanker tokens and shared anon accounts; vitalik joined one yesterday. also everyone is back to wanting to train a small model on the corpus of farcaster data or to copy grok. I was thinking about building an action that puts a twitter thread/context dump into a selected model and outputs a new cast from it
0 reply
0 recast
1 reaction