𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp

𝚐π”ͺ𝟾𝚑𝚑𝟾

@gm8xx8

162 Following
131558 Followers


𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
LONG LIVE OPEN SOURCE
0 replies
7 recasts
23 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
1 reply
1 recast
3 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
anyone remember poaster?
0 replies
0 recasts
4 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
0 replies
1 recast
7 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
the alpha has been onchain https://zora.co/collect/base:0xe449a4939d51751a62e6e083591def4874cd2f66/11?referrer=0x3ccd522d59a9f273ae026a9254aee589af067a79
0 replies
0 recasts
4 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
Qwen2.5-Coder-32B-Instruct, plus a whole family of coder models!
> Includes models at 0.5B, 1.5B, 3B, 7B, and 14B. Both base and instruct models are available, along w/ quantized versions in GPTQ, AWQ, and GGUF formats.
> Qwen2.5-Coder-32B achieves near parity with Claude in coding abilities at 32B parameters.
> Licensed under Apache 2.0 and available on HF.
cracked!
technical report: https://arxiv.org/abs/2409.12186
blog: http://qwenlm.github.io/blog/qwen2.5-coder-family/
models: https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f
0 replies
2 recasts
19 reactions
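The cast above spans sizes from 0.5B to 32B plus 4-bit-style quants (GPTQ, AWQ, GGUF). As a rough aid for picking a size that fits your hardware, here is a minimal back-of-the-envelope sketch of the weight-memory math; the bytes-per-parameter figures (fp16 = 2 bytes, 4-bit ≈ 0.5 bytes) are my assumptions, not from the cast, and ignore KV-cache and runtime overhead:

```python
# Rough weight-memory estimates for the Qwen2.5-Coder family sizes
# listed in the cast. Assumed precisions: fp16 = 2 bytes/param,
# 4-bit quantization (as in GPTQ/AWQ/GGUF Q4) ~= 0.5 bytes/param.

SIZES_B = [0.5, 1.5, 3, 7, 14, 32]  # model sizes in billions of parameters

def approx_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory for the weights alone, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 2**30

for n in SIZES_B:
    fp16 = approx_gib(n, 2.0)  # full half-precision weights
    q4 = approx_gib(n, 0.5)    # ~4-bit quantized weights
    print(f"{n:>5}B  fp16 ~ {fp16:6.1f} GiB   4-bit ~ {q4:5.1f} GiB")
```

By this estimate the 32B model wants roughly 60 GiB of weights in fp16 but around 15 GiB at 4-bit, which is why the quantized GGUF releases matter for single-GPU use.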

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
☺︎ https://github.com/google-deepmind/alphafold3
0 replies
0 recasts
3 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
Qwen2.5-Coder-32B-Instructβ€”+ a whole family of coder models! > Includes models of 0.5B, 1.5B, 3B, 7B, and 14B. Both base and instruct models are available, along w/ quantized versions in GPTQ, AWQ, and GGUF formats. > Qwen2.5-Coder-32B achieves near parity with Claude in coding abilities at 32B parameters. > Licensed under Apache 2.0 and available on HF. cracked! blog: http://qwenlm.github.io/blog/qwen2.5-coder-family/ models: https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f technical report: https://arxiv.org/abs/2409.12186
0 reply
0 recast
4 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
Meta robotics ☺︎

Meta FAIR has introduced advancements in robotics, supporting progress toward Advanced Machine Intelligence. New releases include Meta Sparsh for touch perception, Digit 360 for detailed tactile sensing, and Meta Digit Plexus, a platform to unify tactile sensors across robotic hands. Meta is also collaborating with GelSight Inc. and Wonik Robotics. Additionally, the PARTNR benchmark, developed on Habitat 3.0, is designed to foster research into more capable and adaptable robots for human-robot collaboration.

Simply put, Meta is advancing embodied AI by developing robots with the ability to touch and feel.

↓ https://ai.meta.com/blog/fair-robotics-open-source/
1 reply
2 recasts
19 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
(sips ☕️)
0 replies
1 recast
3 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
nice lol, but I'll continue to post papers here or use /gm8xx8 (when I do). They're not ready for that 💨 I have too much I go over daily, and it's not open. Maybe I'll start another channel where I can just pop off a bit.
0 replies
0 recasts
1 reaction

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
MistralAI Pixtral & Mistral Large

Pixtral Large: https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411
- Mistral Commercial License
- 124B open-weights multimodal model built on top of Mistral Large 2
- Combines a 123B multimodal decoder w/ a 1B-parameter vision encoder
- 128K context window supports at least 30 high-resolution images
- SOTA on MathVista, DocVQA, & VQAv2 benchmarks

Mistral Large: https://huggingface.co/mistralai/Mistral-Large-Instruct-2411
- Mistral Research License
- Supports dozens of languages, including English, French, German, Chinese, Spanish, and more
- Trained on 80+ coding languages like Python, Java, C++, and Bash
- Excels in agent-based and mathematical tasks with native function calling and JSON output
- 128K context window with improved system prompt reliability
0 replies
1 recast
4 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
ty. what did I miss?
2 replies
0 recasts
0 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
OpenCoder 🤗 collection: https://huggingface.co/collections/infly/opencoder-672cec44bbb86c39910fb55e
0 replies
0 recasts
1 reaction

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models
paper: https://arxiv.org/abs/2411.04905
project page: https://opencoder-llm.github.io

OpenCoder is a family of open and reproducible code language models, available in 1.5B and 8B parameter sizes and supporting both English and Chinese. Pretrained on 2.5 trillion tokens (90% raw code, 10% code-related web data) and fine-tuned on 4.5M high-quality examples, it achieves performance comparable to top-tier models.
> Transparency: releases model weights, inference code, training data, the data-processing pipeline, data-cleaning scripts, checkpoints, experimental results, detailed training logs, and over 4.5 million fine-tuning entries, making it one of the most transparent models available.
> High-quality synthetic data: offers a synthetic data generation process alongside the 4.5 million fine-tuning entries, providing a strong foundation for training.
1 reply
0 recasts
6 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
I'll say it again: don't sleep on .
2 replies
0 recasts
8 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
1 reply
2 recasts
21 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
0 replies
1 recast
3 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
lol langchain is 🗑️
2 replies
0 recasts
5 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
1 reply
0 recasts
1 reaction