eric siu 🐈
@randomishwalk
ngl code-llama has been better, even claude-2 sometimes; gpt-4 is not as good as it used to be
4 replies
0 recast
2 reactions
wizard not parzival
@alexpaden
when you say it is not as good as it used to be, what do you mean specifically? for example, the code doesn't compile now, the logic behind the code is wrong, or something else?
1 reply
0 recast
1 reaction
Ryan
@ryanconnor
IMO, sadly, ChatGPT is getting worse at coding across all dimensions. Code is wrong, code is incomplete, answers have gotten lazier (code omitted), etc.
1 reply
0 recast
1 reaction
wizard not parzival
@alexpaden
is there any number of lines of code that you reasonably trust GPT to generate still? i.e., before code is omitted or the answer/logic goes off the rails
2 replies
0 recast
1 reaction
Ryan
@ryanconnor
I used to trust the system to write complex logic; now, the more boilerplate the better
0 reply
0 recast
1 reaction