downshift
@downshift.eth
my hot take on AI-assisted coding:

it's now easier than ever to tangle yourself into a big ball of unmaintainable spaghetti if you don't know what you're doing

but competent teams can also achieve very scalable + well-engineered systems much faster than ever before

the human in the loop is still absolutely crucial
12 replies
5 recasts
35 reactions

Matthew Fox 🌐
@matthewfox
spot on, but more and more I am getting convinced this is mainly a reasoning limitation that can be fixed with enough agents in the pipeline. it's the one-shot that makes it so inconsistent; the extra context helps, but it breaks character limits and muddies the problem solving. Half the time you can see yourself that the AI is wrong while it's writing a solution, but it has no way to course correct at the minute
2 replies
1 recast
3 reactions
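
Matthew's "enough agents in the pipeline" idea amounts to a generate → review → revise loop: a second model pass critiques the one-shot draft and feeds corrections back, which is exactly the course correction he says is missing. A minimal sketch in TypeScript, with `callModel` as a hypothetical stand-in for whatever LLM client is in use (an illustration of the idea, not Matthew's actual setup):

```typescript
// Sketch of the "agents in the pipeline" idea: a reviewer agent critiques a
// one-shot draft and feeds corrections back, so the drafter can course correct.
// `callModel` is a hypothetical stand-in for a real LLM client.

type Verdict = { ok: boolean; feedback: string };

async function callModel(prompt: string): Promise<string> {
  throw new Error("replace with a real LLM client call");
}

async function draftWithReview(task: string, maxRounds = 3): Promise<string> {
  let draft = await callModel(`Solve this task:\n${task}`);
  for (let round = 0; round < maxRounds; round++) {
    // second agent: judge the draft instead of trusting the one-shot output
    const verdict: Verdict = JSON.parse(
      await callModel(
        `Review this solution.\nTask: ${task}\nSolution:\n${draft}\n` +
          `Reply as JSON: {"ok": boolean, "feedback": string}`
      )
    );
    if (verdict.ok) return draft; // reviewer satisfied: stop early
    // revise using the critique, i.e. the course correction Matthew wants
    draft = await callModel(
      `Revise the solution using this feedback.\nTask: ${task}\n` +
        `Feedback: ${verdict.feedback}\nPrevious solution:\n${draft}`
    );
  }
  return draft; // best effort after maxRounds review cycles
}
```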

Royal
@royalaid.eth
I agree with @downshift.eth completely. Right now, human in the loop + composer + good parsable docs + .cursorrules leads to an incredibly pleasant dev experience. I was able to ship @giftbot in about 8 hours of actual work, and most of that was dealing with the fact that /frames-v2 is bleeding edge and I don't use NextJS as an API server often. Once those issues were out of the way, everything fell into place super fast and I was able to iterate on UX issues quickly. The comparison of composer/agent mode to a junior dev is probably the most apt: you have to course correct it, but it is capable!

Re: stacking agents, it will paper over the problem, but the fundamental limit is still context, because all multiple agents do is recompress the context, akin to adding more machines to scale. Some work will be fundamentally limited to one agent, i.e. codebase-wide refactors. The larger the context window, the more effective the agent, assuming needle-in-a-haystack can be beaten as we scale windows.
1 reply
0 recast
2 reactions
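
For context on the workflow Royal describes: a .cursorrules file is a plain-text file at the repo root that Cursor reads as standing instructions for its composer/agent. A minimal sketch of what one might contain; the conventions below are hypothetical, not @royalaid.eth's actual rules:

```
# .cursorrules (hypothetical example, not @royalaid.eth's actual file)
# Plain-text standing instructions that Cursor applies to every request.

This is a NextJS (App Router) project that also serves API routes.

- Use TypeScript with strict mode; avoid `any` unless justified in a comment.
- API handlers live in app/api/<route>/route.ts; validate all input first.
- Frames v2 is bleeding edge: if unsure about its behavior, ask rather than guess.
- Keep diffs small and focused; do not add dependencies without asking.
```

The better this standing context (rules plus parsable docs), the less course correcting the human in the loop has to do, which is Royal's point.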

downshift
@downshift.eth
i think you are correct that we can improve a *lot* with current tech

but, counterpoint: great system design can exist at the edge of even humans' cognitive abilities, and requires experience and creativity in bespoke situations that are beyond what's available in an LLM's training set (however huge)

perhaps we need large models that are somehow trained on a higher dimension than serialized tokens, or on data that is otherwise modeled to better fit a specific domain like distributed systems? (cc @swabbie.eth)

my head hurts 😅
1 reply
0 recast
2 reactions