Yondon Fu
@yondon.eth
A way to think about finetuning v. prompt eng (+ any technique w prompting like RAG) that I've been playing with: Who takes on more of the burden of nudging the model to produce consistent good outputs for a domain - the model consumer at prompting time or the model trainer via examples at training time?
2 replies
0 recast
2 reactions
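A minimal sketch of the tradeoff in the cast above, using hypothetical sentiment-labeling examples: the same example set can be spent at prompting time (the consumer ships it with every request) or at training time (the trainer bakes it in once as fine-tuning data). All names and data here are illustrative, not from the thread.

```python
import json

# Hypothetical domain examples that "nudge" the model toward consistent outputs.
EXAMPLES = [
    {"input": "The checkout flow is broken again.", "output": "negative"},
    {"input": "Support resolved my ticket in minutes.", "output": "positive"},
]

def build_few_shot_prompt(query: str) -> str:
    """Prompt-engineering route: the consumer pays, per request,
    by carrying the examples inside the prompt."""
    shots = "\n".join(f"Text: {e['input']}\nLabel: {e['output']}" for e in EXAMPLES)
    return f"Classify the sentiment.\n{shots}\nText: {query}\nLabel:"

def build_finetune_records() -> str:
    """Fine-tuning route: the trainer pays once, by turning the same
    examples into training records (JSONL here, one object per line);
    the deployed prompt can then shrink to just the query."""
    return "\n".join(
        json.dumps({"prompt": e["input"], "completion": e["output"]})
        for e in EXAMPLES
    )
```

Either way the nudging work gets done; the question in the cast is who does it and when.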
ByteBuddha
@bytebuddha
how does zero shot/one shot/n-shot prompting fit into this? what's the tipping point for where the burden should ideally lie?
1 reply
0 recast
0 reaction
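One way to read the question above: zero/one/n-shot sits on the same axis, where the number of in-prompt examples k is the amount of nudging the consumer pays for on every call. A sketch with a hypothetical toy task (the examples and task are illustrative):

```python
# Hypothetical worked examples for a toy arithmetic task.
EXAMPLES = [
    ("2+2", "4"),
    ("3*3", "9"),
    ("10-7", "3"),
]

def build_prompt(query: str, k: int) -> str:
    """k=0 is zero-shot, k=1 one-shot, k>=2 few-shot. The 'tipping point'
    is where adding another shot no longer improves output consistency
    enough to justify the extra tokens spent on every request - past that,
    moving the examples into training (fine-tuning) starts to win."""
    shots = "".join(f"Q: {q}\nA: {a}\n" for q, a in EXAMPLES[:k])
    return f"{shots}Q: {query}\nA:"
```

So the shot count is a dial on the burden split the parent cast describes, rather than a separate technique.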
LoadingALIAS
@loadingalias
This is a great way to visualize it. I've been exploring both, and I think it's important to abstract as much of the "nudging" away as possible to make it a happy experience for consumers. Fine-tuning + RAG access (confidence scoring) is the happy middle ground.
1 reply
0 recast
0 reaction