grin
@grin
just spent an hour helping a friend debug some lovable-generated code. he's on hour 6 with this bug. super bearish on vibecoding with llms. they're great for prototyping but cannot be trusted. we need a different model architecture that can actually understand and reason about the code
13 replies
3 recasts
38 reactions
Shira Stember
@shira
I do not have a technical background, but feel more tech savvy than most. I was PUMPED to experiment with these new tools that claim to "turn your ideas into apps with AI" so I could start building things on my own. But after spending time learning how tools like replit & cursor work, and trying to turn big and small ideas into something, I am convinced these tools are not yet built for me. They still require a foundational level of technical understanding in order for the code to be clean, functional, secure, and not overly complex. I'm optimistic that they are moving in the right direction and things will only improve in parallel with me learning more, but perhaps there needs to be another layer or application that would provide a trust or security score to give vibe coders more confidence in their applications?
1 reply
0 recasts
1 reaction
grin
@grin
agreed
0 replies
0 recast
1 reaction