shoni.eth pfp
shoni.eth
@alexpaden
what does this mean? okay, the more you prompt, the more you'll realize how much minor nuances can affect the results. a simple example: in a huge-context prompt, if you put the goal at the end, the goal will likely be missed. now wrap the goal in <goal>better prompts</goal>, and suddenly the prompt has a new level of precision.

throw away the concept of a single language; models were trained on all programming languages, and that knowledge is not isolated. that is why something like mixdown is the future: a hybrid language designed for precision prompting.

for accurate prompting, we should focus on minimizing tokens without reducing the information required to achieve our goal. fewer words, not less data.

right now this is mostly pseudoscience, because the models keep improving and prompts are usually not evaluated (no standard benchmarks) for how performance changes based on these inclusions. https://x.com/PalmerLuckey/status/1907493224668868677 https://github.com/galligan/mixdown
7 replies
13 recasts
58 reactions
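The goal-tag idea in the cast above can be sketched in a few lines. This is a minimal illustration, not anything from mixdown itself; the `build_prompt` helper and tag names are hypothetical:

```python
def build_prompt(context: str, goal: str) -> str:
    """Assemble a prompt that wraps the goal in explicit tags.

    Wrapping the goal in <goal>...</goal> and placing it before the bulk
    of the context keeps it from getting buried, which is the failure
    mode described above for huge-context prompts.
    """
    # The goal leads the prompt and is delimited so the model can't
    # confuse it with surrounding context.
    return f"<goal>{goal}</goal>\n\n<context>\n{context}\n</context>"


# Example: a long context that would otherwise drown the instruction.
prompt = build_prompt("...thousands of tokens of background...", "better prompts")
```

The same structure extends to other delimiters (`<rules>`, `<examples>`, etc.); the point is that explicit, consistent tags cost few tokens while adding precision, which matches the "fewer words, not less data" framing.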

Crescenta pfp
Crescenta
@dandegreat
Thank you for sharing this, it's very helpful to me.
1 reply
0 recast
1 reaction

shoni.eth pfp
shoni.eth
@alexpaden
i’ll try to make similar content in the future 👍 thanks
0 reply
0 recast
0 reaction