gensyn
@gensyn
Introducing RL Swarm 72B: fully decentralised RL training of 72B-parameter models, open for anyone to join with no whitelists. Train your base model collaboratively, alongside thousands of others, on a new advanced math dataset (DAPO-Math-17k) using our novel multi-stage system.
gensyn
@gensyn
We’ve long held that artificial superintelligence will come from a collection of diverse models that interoperate, rather than a single monolith. Multi-swarm on the Gensyn Testnet is the first step in this direction, with the new swarm running alongside the original.
gensyn
@gensyn
Update your node now and choose a swarm:
- Consumer (>8GB VRAM): even lower-powered devices can now run the swarm (GSM8K)
- More powerful: bring a bigger model, up to 72B params, to tackle a harder dataset (DAPO-Math-17k)
https://github.com/gensyn-ai/rl-swarm
gensyn
@gensyn
Also in this release, we’ve added:
- multi-peer ID / EOA mapping, allowing you to link multiple nodes to a single EOA using the same email address
- better reward and participation tracking, with new dashboard versions
- memory optimisations for consumer devices
gensyn
@gensyn
Follow the progress of the updated math swarm on the new dashboard: https://dashboard-math.gensyn.ai/
gensyn
@gensyn
Follow the progress of the new math-hard swarm, with its bigger models, on its own dashboard: https://dashboard-math-hard.gensyn.ai/
gensyn
@gensyn
We will continue to scale RL Swarm across new domains, and will soon let you launch your own training swarms based on your interests and areas of expertise. See you in the swarm(s).