shoni.eth pfp
shoni.eth
@alexpaden
what does this mean? okay, the more you prompt, the more you'll realize how much minor nuances can affect the results. a simple example: in a huge-context prompt, if you put the goal at the end, the goal will likely be missed. now wrap the goal in <goal>better prompts</goal>, and suddenly the prompt has a new level of precision. throw away the concept of a single language; models were trained on all programming languages... that knowledge is not isolated. that is why something like mixdown is the future: a hybrid language designed for precision prompting. for accurate prompting, we should focus on minimizing the tokens without reducing the information required to achieve our goal: fewer words, not less data. right now, this is mostly pseudoscience, because the models are improving and the prompts are usually not evaluated (non-standard benchmarks) for how performance changes based on these inclusions. https://x.com/PalmerLuckey/status/1907493224668868677 https://github.com/galligan/mixdown
7 replies
13 recasts
59 reactions
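The goal-wrapping idea in the cast above can be sketched in a few lines. This is a minimal illustration, not part of mixdown; the `build_prompt` helper and the `<context>`/`<goal>` tag names are hypothetical, chosen to mirror the `<goal>better prompts</goal>` example in the post.

```python
def build_prompt(context: str, goal: str) -> str:
    """Wrap the goal in explicit tags instead of burying it at the
    end of a large context, so the model can't easily miss it."""
    return f"<context>\n{context}\n</context>\n<goal>{goal}</goal>"

# the goal stays visually and structurally distinct from the context
prompt = build_prompt("...huge pasted context...", "better prompts")
print(prompt)
```

The same delimiting effect could be achieved with YAML keys or markdown headings; the XML tags are just one convention for making the goal unambiguous to the model.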

maurelian pfp
maurelian
@maurelian.eth
Cool. How is mixdown developed? Are LLMs (specifically) trained on it? What would make it better than my own made-up format?
2 replies
0 recast
1 reaction

shoni.eth pfp
shoni.eth
@alexpaden
Do you see how training doesn't matter now? It's already trained, so it doesn't matter anymore. mixdown is just the first formalized repo i've seen which covers a lot of the techniques i use. I prefer heavy XML; @mg prefers YAML inclusion, which is less of a token expense than XML. I'll have to create an eval framework so we can get verified results regarding precision. that's what i mean by pseudoscience: it's my gut feeling from writing tens of thousands of prompts.
2 replies
0 recast
3 reactions
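The "token expense" claim above (YAML framing costs fewer tokens than XML) is easy to sanity-check. A rough sketch, with the caveat that `naive_token_count` is a crude word-and-punctuation proxy invented here for illustration; real counts depend on the model's actual tokenizer (e.g. tiktoken), which is what a proper eval framework would use.

```python
import re

def naive_token_count(text: str) -> int:
    """Crude token proxy: count words and individual punctuation
    marks. Real tokenizers differ, but the XML/YAML gap survives."""
    return len(re.findall(r"\w+|[^\w\s]", text))

# the same two fields, framed as XML tags vs. YAML keys
xml_prompt = "<goal>better prompts</goal>\n<style>concise</style>"
yaml_prompt = "goal: better prompts\nstyle: concise"

print("xml:", naive_token_count(xml_prompt))   # angle brackets and
print("yaml:", naive_token_count(yaml_prompt)) # closing tags add up
```

Even this toy count shows why XML's paired open/close tags cost more than YAML's single `key:` prefix, which is the trade-off the cast describes.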

shoni.eth pfp
shoni.eth
@alexpaden
to extend this: training does matter in the sense that if GPT trained on a 5:1 ratio of XML to YAML, it would probably be better at using XML. so the models will have slight differences.
0 reply
0 recast
1 reaction