Tony D’Addeo
@deodad
my latest annoyance with trying to get llms to generate code for me is they always return a ton of irrelevant context

me: write a function that does X

llm: sure that's a great idea, let me do that for you (plus a bunch of intro text that is conversational but not relevant!)

```
<full html doc with style declarations>
<script>
code I asked for
</script>
</full html doc with style declarations>
```
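A pragmatic client-side workaround for replies like the one above is to throw away everything except the code before using it. A minimal Python sketch, assuming the reply arrives as a plain string and the code is either fenced or buried in a `<script>` tag:

```python
import re

FENCE = "`" * 3  # triple backtick, built up here to avoid nesting a fence inside this block

def extract_code(reply: str) -> str:
    # Keep only the contents of fenced blocks, if any (language tag optional).
    pattern = FENCE + r"[^\n]*\n(.*?)" + FENCE
    blocks = re.findall(pattern, reply, re.DOTALL)
    body = "\n\n".join(blocks) if blocks else reply

    # If the model wrapped the answer in a full HTML page, keep only the
    # <script> bodies, which is where the requested function usually ends up.
    scripts = re.findall(r"<script[^>]*>(.*?)</script>", body,
                         re.DOTALL | re.IGNORECASE)
    if scripts:
        return "\n\n".join(s.strip() for s in scripts)
    return body.strip()
```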
Dan Romero
@dwr.eth
Have you saved a metaprompt in settings? You can instruct it to only provide code output and no intro / filler text.
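For anyone hitting this through the API rather than the settings page, the same idea works as an explicit system message. A minimal sketch using the OpenAI Python SDK (openai>=1.0); the CODE_ONLY wording and the model name are assumptions, not a prescribed metaprompt:

```python
from openai import OpenAI

# Roughly the kind of "code only" instruction dwr describes saving in settings.
CODE_ONLY = (
    "Return only the code the user asks for, in a single fenced block. "
    "No greetings, no explanations, no surrounding HTML boilerplate."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_for_code(request: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": CODE_ONLY},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content
```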
Jacob
@jrf
i'd imagine that chatgpt is probably better at this than cursor. what are you using?
shoni.eth
@alexpaden
Common things I use as per memory:
1. fix my grammar
2. stay lowercase
3. stay concise
4. just return the result
5. don’t add words
6. don’t change my tone
7. reword but keep structure
8. tell it like it is
9. use markdown or xml style
10. don’t sugar-coat
11. be laconic
12. walk me through
13. what did i miss?
14. correct only what’s wrong
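Those read like ready-made system-prompt fragments. A small sketch of bundling a few of them into one reusable instruction string (the selection and exact wording here are assumptions):

```python
# A handful of the memory instructions above, bundled into one standing prompt.
MEMORY_INSTRUCTIONS = [
    "fix my grammar",
    "stay concise",
    "just return the result",
    "don't change my tone",
    "correct only what's wrong",
]

SYSTEM_PROMPT = "Follow these standing rules:\n" + "\n".join(
    f"- {rule}" for rule in MEMORY_INSTRUCTIONS
)
```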
hellno the optimist
@hellno.eth
Claude 3.7 is the worst at this, but great for visual / frontend tasks. OpenAI models follow instructions better
will
@w
apparently the latest oai releases are a step change in output quality, but also, as dwr mentioned, a good metaprompt can help a lot here
Koolkheart
@koolkheart.eth
It’s like they trained these models to write Medium blog posts first and actual code second
applefather.eth
@applefather.eth
claude 3.7 does this, 3.5 is a good fit for me