๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ

@gm8xx8

100 Following
73655 Followers


๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
1 reply
6 recasts
22 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
7 replies
25 recasts
287 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
You ask me who to follow in AI, but donโ€™t even follow meโ€”after Iโ€™ve spent two years sharing insights and building a presence on Farcaster. Then, thereโ€™s another person who literally copied my following list, recommended others follow me, but when I tried to return the favor? Realized Iโ€™m blocked. I guess they just wanted to give the impression they are in the know. I see why so many AI folks have churned โ˜น๏ธŽ Iโ€™ll still check in and post occasionallyโ€”this isnโ€™t me quitting Farcaster or anything like that, just sharing some ๐Ÿ’ญ Attached below is one example, but Iโ€™ve received plenty of similar messagesโ€”some harsher towards farcaster, all eye-opening, all from AI folks. I also made sure they knew Iโ€™m still here, just not posting as often. My experience hasnโ€™t been all badโ€”many from Farcaster have reached out to say they only know what they do about AI because of me and have shared their gratitude. That support is why I still return.
11 replies
1 recast
18 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
1 reply
0 recast
6 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
2 replies
1 recast
11 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
FAST is a robot action tokenizer that simplifies and speeds up robot training. It enables: > 5x faster training compared to diffusion models. > Compatibility with all tested robot datasets. > Zero-shot performance in new environments, including the DROID dataset, successfully controlling robots in various settings with ease. > Simple autoregressive VLAs that match diffusion VLA performance. > Mixed-data VLA training, allowing integration of non-robot data like web data, subgoals, and video prediction. FAST compresses actions using discrete cosine transform, reducing redundancy and enabling efficient VLA training on high-frequency tasks. It scales to complex robot tasks with simple next-token prediction, converging in days instead of weeks. A pre-trained FAST tokenizer based on 1M robot action sequences is available on Hugging Face, working across various robots and supporting mixed-data VLA training. https://huggingface.co/physical-intelligence/fast
0 reply
4 recasts
8 reactions
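The core compression idea is easy to sketch: transform an action chunk with the DCT, keep only the low-frequency coefficients, and quantize them to integer codes. The real FAST tokenizer additionally runs BPE over these codes; the toy trajectory, `keep`, and `scale` values below are my own illustrative choices, not from the release.

```python
import math

def dct(x):
    # Type-II DCT of a 1-D sequence (naive O(N^2) version, for illustration).
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    # Inverse of dct() above (a scaled Type-III DCT).
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi / N * k * (n + 0.5))
                                       for k in range(1, N))
            for n in range(N)]

def encode(actions, keep=4, scale=10.0):
    # DCT -> drop high-frequency coefficients -> quantize to integer codes.
    # Real FAST additionally applies BPE over the codes to get final tokens.
    return [round(c * scale) for c in dct(actions)[:keep]]

def decode(tokens, n, scale=10.0):
    # Dequantize, zero-pad the dropped frequencies, invert the DCT.
    coeffs = [t / scale for t in tokens] + [0.0] * (n - len(tokens))
    return idct(coeffs)

# A smooth 8-step trajectory for one joint compresses to 4 integer codes;
# smooth signals concentrate energy in the low frequencies, so little is lost.
chunk = [0.00, 0.10, 0.19, 0.26, 0.30, 0.31, 0.29, 0.25]
tokens = encode(chunk)
approx = decode(tokens, len(chunk))
```

This is why DCT helps on high-frequency control: nearby timesteps are strongly correlated, so most coefficients carry almost no information and can be dropped or coarsely quantized.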

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
1 reply
3 recasts
18 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
4 replies
12 recasts
26 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
2 replies
5 recasts
16 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
4 replies
18 recasts
24 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
2 replies
0 recast
13 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
0 reply
1 recast
7 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
1 reply
0 recast
5 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
1 reply
3 recasts
7 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
2 replies
0 recast
3 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
0 reply
1 recast
5 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
1 reply
0 recast
7 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
0 reply
0 recast
2 reactions

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
rStar-Math shows SLMs can rival or surpass OpenAI o1 in math reasoning w/out distillation from larger models, using MCTS and three keys factors: 1. Code-Augmented CoT Synthesis: MCTS generates verified reasoning data to train policy SLMs. 2. Enhanced PRM: A novel training approach avoids naรฏve annotations, yielding a stronger process preference model (PPM). 3. Self-Evolution Framework: Four rounds of self-evolution refine reasoning with millions of synthesized solutions for 747k problems. Performance Highlights: > Achieves 90.0% on MATH, improving Qwen2.5-Math-7B by +31.2% and surpassing OpenAI o1-preview by +4.5%. > Boosts Phi3-mini-3.8B from 41.4% to 86.4%. > Solves 53.3% of AIME problems, ranking in the top 20% of high school competitors. donโ€™t sleep on small models. https://arxiv.org/abs/2501.04519
1 reply
0 recast
9 reactions
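The "MCTS generates verified reasoning data" step can be sketched on a toy problem: search over sequences of "reasoning steps" and keep only the traces whose final answer verifies. This is my own illustrative stand-in (the operators, target, and uniform policy are invented); the paper's pipeline uses an SLM policy, code-execution checks, and a learned PPM instead.

```python
import math
import random

# Toy stand-in for step-level search: starting from 1, apply "+3" or "*2"
# steps to reach a target value. A trace is "verified" iff it hits the
# target, playing the role of rStar-Math's final-answer check.
TARGET, MAX_STEPS = 28, 5
STEPS = {"+3": lambda v: v + 3, "*2": lambda v: v * 2}

class Node:
    def __init__(self, value, path):
        self.value, self.path = value, path
        self.visits, self.wins = 0, 0.0
        self.children = {}

def ucb1(child, parent, c=1.4):
    # Selection rule: exploit high-reward steps, keep exploring rare ones.
    if child.visits == 0:
        return float("inf")
    return child.wins / child.visits + c * math.sqrt(
        math.log(parent.visits) / child.visits)

def rollout(value, depth):
    # Random playout; reward 1.0 iff the completed trace verifies.
    while depth < MAX_STEPS and value != TARGET:
        value = STEPS[random.choice(list(STEPS))](value)
        depth += 1
    return 1.0 if value == TARGET else 0.0

def search(iters=2000):
    root, verified = Node(1, []), set()
    for _ in range(iters):
        node, trail = root, [root]
        # Selection, expanding one untried step per iteration.
        while len(node.path) < MAX_STEPS and node.value != TARGET:
            untried = [s for s in STEPS if s not in node.children]
            if untried:
                s = untried[0]
                node.children[s] = Node(STEPS[s](node.value), node.path + [s])
                node = node.children[s]
                trail.append(node)
                break
            best = max(node.children.values(), key=lambda ch: ucb1(ch, node))
            node = best
            trail.append(node)
        reward = rollout(node.value, len(node.path))
        if node.value == TARGET:
            verified.add(tuple(node.path))  # verified trace -> training data
        for n in trail:  # backpropagation
            n.visits += 1
            n.wins += reward
    return verified

random.seed(0)
traces = search()
```

In rStar-Math the verified traces become supervised data for the policy SLM, and the per-step visit/reward statistics are what make preference pairs for training the PPM possible without naïve per-step labels.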

๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ pfp
0 reply
0 recast
6 reactions