Alessandro
@azeni
how can one avoid building a chatgpt wrapper?
5 replies
0 recast
1 reaction
Gabriel Ayuso
@gabrielayuso.eth
Everything in LLMs is built on top of a base model; there are just different strategies to achieve what you need. I guess what people call a "wrapper" is when only prompt engineering is used. Maybe fine-tuning will get you away from the "wrapper" moniker?
2 replies
0 recast
2 reactions
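A minimal sketch of the "wrapper" pattern being discussed: all of the product logic is a prompt template around someone else's hosted base model. The model name and the request shape here are illustrative assumptions, not any specific vendor's API.

```python
# A "ChatGPT wrapper" in essence: the product's value is a prompt
# template around a hosted base model. The request dict below is a
# hypothetical stand-in for any chat-completion-style API payload.

SYSTEM_PROMPT = "You are a concise assistant that summarizes legal text."

def build_request(user_text: str) -> dict:
    """All of the 'product' logic lives in prompt engineering."""
    return {
        "model": "some-base-model",  # assumption: placeholder model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
    }

req = build_request("Summarize this contract clause for a layperson.")
```

Swapping the system prompt is the only thing distinguishing one such wrapper from another, which is the critique behind the "wrapper" label.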
robeee
@robeee
I've been wondering about exactly this: how much data is needed to fine-tune the base model so you have something proprietary?
1 reply
0 recast
1 reaction
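For context on what "data" means here: one common format for supervised tuning data (used, e.g., by OpenAI's fine-tuning API) is JSONL with one chat example per line. How many such lines you need is the open question above; the ticket-classification content below is purely illustrative.

```python
import json

# One supervised fine-tuning example in the chat-style JSONL format
# used by several hosted tuning APIs. A training file is many such
# lines; the right dataset size depends on the use case.
example = {
    "messages": [
        {"role": "system", "content": "You classify support tickets."},
        {"role": "user", "content": "My invoice is wrong."},
        {"role": "assistant", "content": "billing"},
    ]
}

line = json.dumps(example)  # one line of the training .jsonl file
```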
Gabriel Ayuso
@gabrielayuso.eth
It depends on the use case: both the nature of the data and how the tuned model will be used. You'll have to try different methods and check what produces the best outcomes for your needs. In many cases just a few-shot prompt over the base model will do, though it might be slow and expensive.
0 reply
0 recast
1 reaction
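The few-shot alternative mentioned above can be sketched like this: worked examples are prepended to every request instead of being baked in by tuning. The task and examples are made up for illustration; the cost point in the thread comes from re-sending all the example tokens on each call.

```python
# Few-shot prompting: labeled examples go into the prompt itself,
# no tuning. Every call re-sends the examples, which is why this
# can be slow and expensive at scale compared to a tuned model.

FEW_SHOT = [
    ("The movie was fantastic!", "positive"),
    ("Terrible service, never again.", "negative"),
]

def build_prompt(query: str) -> str:
    parts = ["Classify the sentiment of each review."]
    for text, label in FEW_SHOT:
        parts.append(f"Review: {text}\nSentiment: {label}")
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

prompt = build_prompt("Pretty good overall.")
```

The base model completes the final `Sentiment:` line by pattern-matching the examples, so "proprietary" behavior lives entirely in the prompt.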