eric siu π
@randomishwalk
ngl code-llama has been better, even claude-2 sometimes. gpt-4 is not as good as it used to be
4 replies
0 recast
2 reactions
wizard not parzival
@alexpaden
when you say it is not as good as it used to be, what do you mean specifically? for example, the code doesn't compile now, the logic behind the code is wrong, or something else?
1 reply
0 recast
1 reaction
Ryan
@ryanconnor
IMO, sadly, ChatGPT is getting worse at coding across all dimensions. Code is wrong, code is incomplete, answers have gotten lazier (code omitted), etc
1 reply
0 recast
1 reaction
wizard not parzival
@alexpaden
is there any number of lines of code that you reasonably trust GPT to generate still? i.e. before code is omitted or the answer/logic goes off the rails
2 replies
0 recast
1 reaction
eric siu π
@randomishwalk
idk if it's lines of code or specific types of code - i'm not sure i can diagnose it well given i'm not particularly technical :/ if i had to say where i'd draw the line - anything beyond simple python scripts for menial task automation or boilerplate CSS / HTML, i'd be very wary of GPT tbh
0 reply
0 recast
1 reaction