๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ

@gm8xx8

140 Following
68921 Followers


๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
LONG LIVE OPEN SOURCE
0 reply
6 recasts
23 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
You ask me who to follow in AI, but don't even follow me, after I've spent two years sharing insights and building a presence on Farcaster. Then there's another person who literally copied my following list, recommended others follow me, but when I tried to return the favor? Realized I'm blocked. I guess they just wanted to give the impression they are in the know. I see why so many AI folks have churned ☹️

I'll still check in and post occasionally; this isn't me quitting Farcaster or anything like that, just sharing some 💭

Attached below is one example, but I've received plenty of similar messages, some harsher toward Farcaster, all eye-opening, all from AI folks. I also made sure they knew I'm still here, just not posting as often.

My experience hasn't been all bad: many from Farcaster have reached out to say they only know what they do about AI because of me and have shared their gratitude. That support is why I still return.
11 replies
1 recast
16 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
https://warpcast.com/gm8xx8/0xb8d6774c
1 reply
0 recast
8 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Physical Intelligence's π₀ is now open source.
> π₀ and π₀-FAST model weights
> source code for the model architecture, on-robot inference, and fine-tuning
> pre-trained and fine-tuned checkpoints
Robotics got another boost ✔️
🔗: https://github.com/Physical-Intelligence/openpi
https://warpcast.com/gm8xx8/0xce446768
2 replies
1 recast
19 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
FAST is a robot action tokenizer that simplifies and speeds up robot training. It enables:
> 5x faster training compared to diffusion models.
> Compatibility with all tested robot datasets.
> Zero-shot performance in new environments, including the DROID dataset, controlling robots across varied settings.
> Simple autoregressive VLAs that match diffusion VLA performance.
> Mixed-data VLA training, allowing integration of non-robot data like web data, subgoals, and video prediction.
FAST compresses actions using a discrete cosine transform, reducing redundancy and enabling efficient VLA training on high-frequency tasks. It scales to complex robot tasks with simple next-token prediction, converging in days instead of weeks. A pre-trained FAST tokenizer trained on 1M robot action sequences is available on Hugging Face; it works across various robots and supports mixed-data VLA training. (A rough sketch of the DCT idea follows this cast.)
https://huggingface.co/physical-intelligence/fast
0 reply
4 recasts
19 reactions
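
A minimal sketch of the DCT idea described in the cast above, not the FAST implementation itself: take a chunk of continuous actions, DCT each action dimension over time, and quantize coarsely so most high-frequency coefficients round to zero. The real tokenizer also runs byte-pair encoding over the quantized coefficients, which is omitted here; the function names and toy action chunk are illustrative.

```python
# Illustrative DCT-based action-chunk compression (the core idea behind FAST).
# Not the real tokenizer: FAST adds per-dimension scaling and a learned BPE
# vocabulary over the quantized coefficients.
import numpy as np
from scipy.fft import dct, idct

def compress_action_chunk(actions: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """DCT each action dimension over time, then round to integers.

    actions: (T, D) chunk of continuous actions (T timesteps, D dimensions).
    Smooth control signals concentrate energy in a few low-frequency
    coefficients, so most entries round to zero.
    """
    coeffs = dct(actions, axis=0, norm="ortho")        # decorrelate along time
    return np.round(coeffs * scale).astype(np.int32)   # coarse quantization

def decompress_action_chunk(q_coeffs: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Invert the quantized DCT back to a (T, D) action chunk."""
    return idct(q_coeffs.astype(np.float64) / scale, axis=0, norm="ortho")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 50)
    # Toy 50-step, 7-DoF "smooth" action chunk (e.g., joint targets at 50 Hz).
    chunk = np.stack([np.sin(2 * np.pi * (i + 1) * t / 4) for i in range(7)], axis=1)
    chunk += 0.01 * rng.standard_normal(chunk.shape)

    q = compress_action_chunk(chunk)
    recon = decompress_action_chunk(q)
    print(f"zero coefficients: {(q == 0).mean():.0%}, "
          f"max reconstruction error: {np.abs(recon - chunk).max():.3f}")
```

The DCT helps for the same reason it works in image codecs: smooth trajectories put most of their energy in a handful of low-frequency coefficients, so tokenizing the compressed spectrum is far less redundant than per-timestep binning.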

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
DeepSeek releases Janus-Pro-7B and Janus-Pro-1B 🐋
1 reply
3 recasts
21 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
|￣￣￣￣￣￣￣￣￣￣￣￣￣|
| everytime I open this app it |
| feels like a step back in    |
| time.                        |
|＿＿＿＿＿＿＿＿＿＿＿＿＿|
        \ (•◡•) /
         \     /
          ——
          |  |
          |_ |_
4 replies
12 recasts
42 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Open research and open source accelerate progress by fostering collaboration, transparency, and accessibility. They enable people to build on existing work, reduce redundancy, and solve problems more efficiently. Openness has always been a key driver of innovation and growth. Open is the path forward!
2 replies
5 recasts
30 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Looks like they're finally paying attention. I still remember when I posted this. It wasn't a direct response, but I shared it because someone mentioned Chinese models and how far behind they were after referencing a LinkedIn post. Zero engagement: people were too busy copying and pasting everything they saw on X or elsewhere to look like they were in the know, rather than learning from those actually in the trenches.
4 replies
18 recasts
47 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
AI evolution at its finest. Never said, "Don't sleep on Meta," but I've been shouting "Don't sleep on DeepSeek" for well over a year! It's incredibly rewarding to see my research and work validated with their latest release.
2 replies
0 recast
13 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
OpenAI released Operator. ngl, I've been quite impressed with it so far, but I find the guardrails a bit excessive.
https://m.youtube.com/watch?v=CSE77wAdDLg
https://openai.com/index/introducing-operator/
0 reply
1 recast
19 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Every time I open Farcaster, it feels like a step back in time.
1 reply
0 recast
6 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Ilya Sutskever's talk at NeurIPS 2024: Seq2Seq w/ Neural Networks
https://m.youtube.com/watch?v=1yvBqasHLZs
1 reply
3 recasts
11 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Would you consider merging with AI? Would you become part AI?
2 replies
0 recast
4 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
AgiBot World is an open-source dataset for robotic learning with over 1M trajectories from 100+ real-world scenarios, covering tasks like manipulation, tool use, and multi-robot collaboration. https://agibot-world.com
0 reply
1 recast
8 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
Moondream 2025-01-09 Release: Structured Text, Enhanced OCR, Gaze Detection https://moondream.ai/blog/introducing-a-new-moondream-1-9b-and-gpu-support
1 reply
0 recast
8 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
This will all make sense 🔜
0 reply
0 recast
5 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
rStar-Math shows SLMs can rival or surpass OpenAI o1 in math reasoning w/out distillation from larger models, using MCTS and three key factors:
1. Code-Augmented CoT Synthesis: MCTS generates verified reasoning data to train policy SLMs.
2. Enhanced PRM: a novel training approach avoids naïve annotations, yielding a stronger process preference model (PPM).
3. Self-Evolution Framework: four rounds of self-evolution refine reasoning with millions of synthesized solutions for 747k problems.
Performance highlights:
> Achieves 90.0% on MATH, improving Qwen2.5-Math-7B by +31.2% and surpassing OpenAI o1-preview by +4.5%.
> Boosts Phi3-mini-3.8B from 41.4% to 86.4%.
> Solves 53.3% of AIME problems, ranking in the top 20% of high school competitors.
don't sleep on small models. (A toy sketch of process-reward-guided step search follows this cast.)
https://arxiv.org/abs/2501.04519
1 reply
0 recast
12 reactions
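
A toy sketch of the pattern the cast above describes, not rStar-Math's actual code: a policy proposes candidate next reasoning steps, a process reward model (the PPM in the paper) scores each partial trajectory, search keeps the highest-scoring partials, and only rollouts that pass verification are kept as synthetic training data. The generator, scorer, and verifier below are stand-in stubs, and beam search stands in for the paper's MCTS to keep the sketch short.

```python
# Toy step-level search guided by a process reward model, in the spirit of
# rStar-Math. propose_steps / ppm_score / is_verified are hypothetical stubs;
# the real system uses an SLM policy, a trained PPM, and code-augmented CoT
# rollouts verified by execution.
import random
from dataclasses import dataclass, field

@dataclass
class Partial:
    steps: list[str] = field(default_factory=list)
    score: float = 0.0  # process-reward score of the partial trajectory

def propose_steps(problem: str, partial: Partial, k: int = 4) -> list[str]:
    """Stub policy: in reality an SLM samples k candidate next reasoning steps."""
    return [f"step {len(partial.steps) + 1} (candidate {i})" for i in range(k)]

def ppm_score(problem: str, steps: list[str]) -> float:
    """Stub process preference model: scores how promising a partial solution is."""
    return random.random()

def is_verified(problem: str, steps: list[str]) -> bool:
    """Stub verifier: the paper filters rollouts by executing code / checking answers."""
    return random.random() > 0.5

def step_search(problem: str, beam: int = 3, max_depth: int = 4) -> list[Partial]:
    """Expand step by step, keeping the `beam` best partials by PPM score."""
    frontier = [Partial()]
    for _ in range(max_depth):
        candidates = []
        for p in frontier:
            for step in propose_steps(problem, p):
                new = Partial(steps=p.steps + [step])
                new.score = ppm_score(problem, new.steps)
                candidates.append(new)
        frontier = sorted(candidates, key=lambda c: c.score, reverse=True)[:beam]
    # Only verified trajectories become synthetic training data for the next round.
    return [p for p in frontier if is_verified(problem, p.steps)]

if __name__ == "__main__":
    random.seed(0)
    kept = step_search("toy math problem")
    print(f"kept {len(kept)} verified trajectories")
    for p in kept:
        print(round(p.score, 3), p.steps)
```

The selection-by-process-score loop and the keep-only-verified-rollouts filter are the parts that carry over to the paper's self-evolution rounds; everything else here is deliberately simplified.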

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
This model didn't quite pass my vibe check back in December, so I held off on sharing. That said, there's still something to learn from this release, even if it's not my top pick among SLMs right now.
0 reply
0 recast
6 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
@gm8xx8
A while back, while many of my peers were at NeurIPS, I attended the Humanoid Summit. Being involved in cutting-edge robotics was exactly the reset I needed to stay focused on the ultimate goal. It's always inspiring, and a privilege, to support friends pushing the field forward. Perfect motivation heading into the new year.
1 reply
1 recast
16 reactions