
Web3Ninja
@856
1208 Following
485 Followers

what does this mean?
okay, the more you prompt, the more you'll realize how much minor nuances affect the results. a simple example: in a huge-context prompt, if you put the goal at the end, it will likely be missed. now wrap the goal in <goal>better prompts</goal>, and suddenly the prompt has a new level of precision.
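the tag-wrapping idea above can be sketched in a few lines of python. the `build_prompt` helper and the `<goal>` tag name are illustrative assumptions, not part of any model's spec:

```python
def build_prompt(context: str, goal: str) -> str:
    # wrap the goal in explicit tags and append it after the long
    # context, so the model can locate it reliably instead of it
    # getting lost mid-prompt; the <goal> tag name is a convention
    # chosen here, not a model requirement
    return f"{context}\n\n<goal>{goal}</goal>"

prompt = build_prompt("...pages of background material...", "better prompts")
print(prompt)
```

the same pattern works for any section you want the model to treat as distinct: instructions, examples, constraints.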
throw away the concept of a single language; models were trained on all programming languages... that knowledge is not isolated.
that is why something like mixdown is the future: a hybrid language designed for precision prompting. for accurate prompting, we should focus on minimizing tokens without reducing the information required to achieve the goal. fewer words, not less data.
right now, this is mostly pseudoscience: the models keep improving, and the prompts are rarely evaluated on standard benchmarks for how performance changes based on these inclusions.
https://x.com/PalmerLuckey/status/1907493224668868677
https://github.com/galligan/mixdown