Yondon Fu
@yondon.eth
A way to think about finetuning vs. prompt engineering (plus any prompting-time technique like RAG) that I've been playing with: who takes on more of the burden of nudging the model to produce consistently good outputs for a domain - the model consumer at prompting time, or the model trainer via examples at training time?
2 replies
0 recast
2 reactions
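A minimal sketch of the two places that burden can live, using a hypothetical ticket-classification domain (the examples, field names, and record format are illustrative only, not something from the cast):

```python
# Illustrative sketch: the same domain examples can carry the burden at
# prompt time (model consumer) or at training time (model trainer).
# The ticket-classification examples and field names are hypothetical.
import json

EXAMPLES = [
    {"input": "Refund request for order #123", "output": "category: billing"},
    {"input": "App crashes on login", "output": "category: bug"},
]

# Burden on the model consumer: examples are packed into every prompt.
def few_shot_prompt(query: str) -> str:
    shots = "\n\n".join(
        f"Input: {e['input']}\nOutput: {e['output']}" for e in EXAMPLES
    )
    return f"{shots}\n\nInput: {query}\nOutput:"

# Burden on the model trainer: the same examples become a finetuning dataset,
# so the runtime prompt can stay close to just the query.
def finetune_records() -> str:
    return "\n".join(
        json.dumps({"prompt": e["input"], "completion": e["output"]})
        for e in EXAMPLES
    )

print(few_shot_prompt("Cannot reset my password"))
print(finetune_records())
```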
ByteBuddha
@bytebuddha
how does zero-shot/one-shot/n-shot prompting fit into this? what's the tipping point for where the burden should ideally lie?
1 reply
0 recast
0 reaction
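A quick sketch of how zero/one/n-shot sits on that spectrum: it is the same prompt-time burden dialed up by n (the example pool below is hypothetical):

```python
# Illustrative sketch: zero-shot, one-shot, and n-shot prompting are the same
# prompt-time burden dialed up by n. The example pool is hypothetical.
EXAMPLE_POOL = [
    ("Refund request for order #123", "category: billing"),
    ("App crashes on login", "category: bug"),
    ("How do I export my data?", "category: how-to"),
]

def n_shot_prompt(query: str, n: int) -> str:
    """n=0 is zero-shot, n=1 is one-shot; larger n shifts more burden into the prompt."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in EXAMPLE_POOL[:n])
    task = f"Input: {query}\nOutput:"
    return f"{shots}\n\n{task}" if shots else task

for n in (0, 1, 3):
    print(f"--- n={n} ---\n{n_shot_prompt('Cannot reset my password', n)}\n")
```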
Yondon Fu
@yondon.eth
Seems that n-shot prompting puts the burden on the model consumer, but then it becomes a question of how much of that burden the app takes on with prompt templates vs. the end user. Good q re: the tipping point - it probably depends on the domain, and maybe you discover it by pushing n-shot prompting until the results fail to meet your consistency requirement.
1 reply
0 recast
0 reaction
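One way to read that "push n-shot until it breaks" suggestion as a loop. Here call_model, build_prompt, the eval set, and the 0.9 threshold are all hypothetical placeholders, not a real API; build_prompt could be something like the n_shot_prompt sketch above:

```python
# Illustrative sketch: increase n until the consistency requirement is met;
# if no n works, the prompt-time burden may be exhausted and finetuning
# (shifting the burden to the model trainer) may be worth considering.
from typing import Callable, Optional

def consistency(
    call_model: Callable[[str], str],         # hypothetical: prompt -> model output
    build_prompt: Callable[[str, int], str],  # hypothetical: (query, n) -> prompt
    eval_set: list[tuple[str, str]],          # (query, expected output) pairs for the domain
    n: int,
) -> float:
    correct = sum(
        call_model(build_prompt(query, n)) == expected
        for query, expected in eval_set
    )
    return correct / len(eval_set)

def max_useful_n(
    call_model: Callable[[str], str],
    build_prompt: Callable[[str, int], str],
    eval_set: list[tuple[str, str]],
    required: float = 0.9,   # hypothetical consistency requirement
    max_n: int = 8,
) -> Optional[int]:
    """Smallest n whose consistency meets the requirement; None suggests
    n-shot prompting alone isn't enough for this domain."""
    for n in range(max_n + 1):
        if consistency(call_model, build_prompt, eval_set, n) >= required:
            return n
    return None
```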