
Pany Fros
@xoxox
what does this mean?
okay, the more you prompt, the more you realize how much even minor nuances affect the results. a simple example: in a huge-context prompt, if you put the goal at the end, it will likely be missed. now wrap the goal in <goal>better prompts</goal>, and suddenly the prompt has a new level of precision.
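a rough sketch of what that looks like in practice (the build_prompt helper and the prompt layout here are just illustrative, not from any specific tool):

```python
# sketch: delimit the goal with explicit tags instead of leaving it
# buried at the end of a long context. build_prompt is a made-up helper.

def build_prompt(context: str, goal: str) -> str:
    # wrapping the goal in <goal>...</goal> makes it easy for the model
    # to locate, even when the surrounding context is huge
    return f"{context}\n\n<goal>{goal}</goal>"

prompt = build_prompt(
    context="...thousands of tokens of reference material...",
    goal="better prompts",
)
```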
throw away the concept of a single language; models were trained on all programming languages... that knowledge is not isolated.
that is why something like mixdown is the future: a hybrid language designed for precision prompting. for accurate prompting, we should focus on minimizing tokens without reducing the information required to achieve our goal: fewer words, not less data.
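one way to sanity-check the "fewer words, not less data" idea is to count tokens for two phrasings of the same instruction; this sketch assumes tiktoken's cl100k_base encoding, swap in whatever matches your model:

```python
# sketch: compare token counts for a verbose vs. a compact phrasing of
# the same instruction. the tokenizer choice is an assumption.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

verbose = "Please make sure that you always respond using the JSON format."
concise = "<format>JSON only</format>"

print(len(enc.encode(verbose)), "tokens")  # longer phrasing
print(len(enc.encode(concise)), "tokens")  # same instruction, fewer tokens
```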
right now, this is mostly pseudoscience, because the models keep improving and prompts are rarely evaluated on standard benchmarks for how performance changes when these tweaks are included.
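and a minimal sketch of what actually evaluating a prompt tweak could look like: run each variant over the same cases and compare scores. llm() and score() are placeholders here, not real APIs:

```python
# sketch: score two prompt variants on identical cases. llm() generates a
# completion and score() compares it to the expected output; both are
# hypothetical stand-ins for whatever model and metric you actually use.

def evaluate(prompt_template: str, cases: list[dict], llm, score) -> float:
    results = [
        score(llm(prompt_template.format(**case)), case["expected"])
        for case in cases
    ]
    return sum(results) / len(results)

# compare the plain prompt against the tag-wrapped one:
# baseline = evaluate("{context}\n\ngoal: {goal}", cases, llm, score)
# tagged   = evaluate("{context}\n\n<goal>{goal}</goal>", cases, llm, score)
```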
https://x.com/PalmerLuckey/status/1907493224668868677
https://github.com/galligan/mixdown