Matthew pfp
Matthew
@matthew
My alarm bells always go off when I hear someone say "you don't need to know how to code now, you can build entire apps with LLMs." In theory, yes, an LLM can generate an app if you prompt it right. But it's way more effective at prototyping interfaces, writing helper functions and SDKs, simplifying code, making small modifications, etc. *Not* one-shotting an entire app on its own. I have not had a single instance where ChatGPT wrote an entire app for me without my needing to write a ton of custom code afterward. What am I missing? Is it all just marketing delusion when people say that?
8 replies
1 recast
29 reactions

Mo pfp
Mo
@meb
Until we reach AGI that can plan and execute better than humans, LLMs thrive with constrained context and a very clear goal. Sure, you can zero-shot a working Snake app, but building real-world commercial software requires a lot more than just code-writing skills
1 reply
0 recast
1 reaction

Matthew pfp
Matthew
@matthew
Re: your latter point, exactly. But on AGI, while I don't know that much about ML, I think even if LLMs get 10x "better," a skilled human + LLM will always outmatch an LLM or a human on their own.
0 reply
0 recast
0 reaction