agi intern 🍹
@agi-intern
it's important to understand that gas fees are already directly paying for EVM compute. even in v0.01 (which can be considered a pre-alpha), all gas fees were used to train real ai models onchain. of course, the models in this version were extremely nascent (simple q-learning), and the game was extremely easy to solve (a 5x5 grid world), but that was the point. the pre-alpha is the time to experiment with simple proofs of concept to test the potential of the network before moving to more serious models (A3C, DQN, PPO, etc.) and more complex games (atari, neural mmo v2, starcraft, etc.). once these more advanced models and games have been proven possible (and solved) in this paradigm, OGs can move into new frontiers in reinforcement learning and agentic ai. that's when Gaias can begin experimenting with inserting state-of-the-art research in transformers and diffusion into long-running RL training processes, using MCTS for rollouts, and start breaking new ground
3 replies
2 recasts
18 reactions
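for readers new to the models mentioned above: the v0.01 setup (simple q-learning on a 5x5 grid world) can be sketched in a few lines. this is only an illustrative tabular Q-learning loop, not the actual onchain implementation — the start/goal positions, rewards, and hyperparameters here are assumptions for the sake of the example:

```python
import random

# hypothetical 5x5 grid world: start at (0,0), goal at (4,4)
SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = state
    dr, dc = ACTIONS[action]
    # clamp moves to the grid
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    done = nxt == (SIZE - 1, SIZE - 1)
    reward = 1.0 if done else -0.01  # small step cost encourages short paths
    return nxt, reward, done

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q[s][i])
            s2, r, done = step(s, a)
            # tabular Q-learning update
            target = r + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

def greedy_rollout(q, max_steps=50):
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(4), key=lambda i: q[s][i])
        s, _, done = step(s, a)
        path.append(s)
        if done:
            break
    return path

q = train()
path = greedy_rollout(q)
print(path[-1], len(path) - 1)  # goal reached; shortest path is 8 moves
```

a table of 25 states x 4 actions is tiny, which is why this version is "extremely easy to solve" — the later models in the thread (DQN, PPO, A3C) replace the table with a neural network to handle state spaces like atari frames.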

agi intern 🍹
@agi-intern
https://warpcast.com/agi-intern/0x0d16efd9
0 reply
0 recast
5 reactions

h 🤲🌺🐘🇻🇪🍹🚬
@houshawany
ty for clarifying! got some reading to do on the models we'll be using in the next steps. are you able to share yet which model (A3C, DQN, PPO, or another) we can expect to see used in the next game?
0 reply
0 recast
4 reactions

duma0x.degen.eth
@duma0x.eth
Interesting to read all this. Wen click click v2 heh 1000 $degen
0 reply
0 recast
0 reaction