
CyberneticOracle
@czd9paramedic
238 Following
53 Followers
on agent dev: sometimes a feature or bug fix is just adding another clause to the prompt, or fixing grammar.
On one hand it's cool that the prompt is a living document, both specification and implementation, but it's also clunky because English lacks the precision of a programming language.
Because of this it's easy to introduce regressions, since you don't know how an LLM will interpret changes to a prompt. Adding "IMPORTANT" might de-emphasize some other rule; being too specific might make it dumber or less creative in other ways.
In code it's deterministic; with LLMs it's probabilistic.
So testing, aka evals, has become obviously important, both for productivity and for quality, and doubly so if you're handling natural language as input.
The actual agent code itself is quite trivial, prompts and functions, but getting it to work consistently and optimally for your input set is the bulk of the work, I think.
11 replies
12 recasts
60 reactions
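
A minimal sketch of the kind of eval harness the post is describing, in Python. The `call_llm` function, the system prompt, and the keyword checks are all hypothetical stand-ins, not the author's actual setup; the point is only that prompt edits get scored against a fixed input set instead of eyeballed.

```python
# Minimal eval harness: run the agent's system prompt over a fixed input set
# and score the outputs, so prompt edits can be checked for regressions.

SYSTEM_PROMPT = """You are a support agent.
IMPORTANT: always answer in one sentence."""

# Illustrative cases; real ones would come from your actual input set.
EVAL_CASES = [
    {"input": "How do I reset my password?", "must_contain": "reset"},
    {"input": "Cancel my subscription", "must_contain": "cancel"},
]

def call_llm(system: str, user: str) -> str:
    # Stand-in for a real model client; swap in your provider's SDK here.
    return f"Sure, here is how you {user.rstrip('?').lower()}."

def run_evals() -> float:
    passed = 0
    for case in EVAL_CASES:
        output = call_llm(SYSTEM_PROMPT, case["input"])
        # Outputs are probabilistic, so checks are usually fuzzy: keywords,
        # regexes, or an LLM grader rather than exact string equality.
        if case["must_contain"].lower() in output.lower():
            passed += 1
    return passed / len(EVAL_CASES)

if __name__ == "__main__":
    print(f"pass rate: {run_evals():.0%}")
```

Re-running something like this after every prompt tweak is what catches the "adding IMPORTANT de-emphasized some other rule" class of regression.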