Korede
@korede
We should only start getting scared of dev AI tools when companies start training models from first principles à la AlphaZero (i.e. handing them the docs and keeping them running until they figure out *everything*). Relying on existing code will keep reproducing the limitations of existing code. But maybe this is only possible in a post-quantum world.
1 reply
0 recast
0 reaction
Timi
@timigod.eth
1. Existing code is pretty great. So much can be built with it. 2. Idk that I believe we can’t end up with emergent behaviour from “depending on existing code”. LLMs are, in a sense, proof that emergent behaviour can occur even when trained primarily on existing data.
1 reply
0 recast
1 reaction
Timi
@timigod.eth
There was no reason to preemptively believe that the kind of training used for LLMs today—predicting the next token from massive datasets—would result in models that can understand language well enough to respond meaningfully.
1 reply
0 recast
1 reaction
Timi
@timigod.eth
Even scaling laws weren’t something we initially knew would work; they were discovered empirically. There was no theoretical guarantee that just making models bigger and feeding them more data would lead to more coherent, useful intelligence. Yet here we are.
1 reply
0 recast
1 reaction
Korede
@korede
Generally agree with your point. For context, I believe that AI tools today are pretty helpful, but they seem to struggle with tasks that are really heavy on context and formal knowledge (e.g. game development). I think that MCP + very good prompt engineering (giving explicit links to contextual resources) works in the near term, but I'm thinking about ways to ease that requirement.
0 reply
0 recast
0 reaction