Alessandro pfp
Alessandro
@azeni
how can one avoid building a chatgpt wrapper?
5 replies
0 recast
1 reaction

Gabriel Ayuso pfp
Gabriel Ayuso
@gabrielayuso.eth
Everything in LLMs is built on top of a base model. There are just different strategies to achieve what you need. I guess what people call a "wrapper" is when prompt engineering is used. Maybe tuning will get you away from the "wrapper" moniker?
2 replies
0 recast
2 reactions
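
A minimal sketch of the two strategies contrasted above, assuming the OpenAI Python SDK (v1.x): the "wrapper" route keeps all differentiation in the prompt, while tuning moves it into training data. The model names and the system prompt are illustrative, not prescriptive.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Strategy 1, the "wrapper": differentiation lives entirely in the prompt.
def wrapper_answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a domain-expert assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Strategy 2, tuning: differentiation lives in your training data.
def start_fine_tune(training_file_id: str):
    # training_file_id refers to a JSONL file previously uploaded via the Files API
    return client.fine_tuning.jobs.create(
        training_file=training_file_id,
        model="gpt-4o-mini-2024-07-18",  # illustrative base model
    )
```

Either way the base model does the heavy lifting; tuning just moves the differentiation somewhere harder to copy than a prompt.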

robeee pfp
robeee
@robeee
I've been wondering about exactly this: how much "data" is needed to tune the base model so you have something proprietary?
1 reply
0 recast
1 reaction
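
For concreteness: with chat-style fine-tuning, the "data" is typically a JSONL file of example conversations, so the proprietary value lives in those examples rather than in the format. A sketch of preparing and uploading such a file, again assuming the OpenAI Python SDK (v1.x); the example content and counts are hypothetical.

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical proprietary examples: for chat-style fine-tuning, the "data"
# is just example conversations demonstrating the behavior you want.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a domain-expert assistant."},
            {"role": "user", "content": "Can I terminate this lease early?"},
            {"role": "assistant", "content": "Under clause 7, yes, with 60 days' notice..."},
        ]
    },
    # ...tens to thousands of examples like the one above
]

# Write one JSON object per line (JSONL), the format fine-tuning expects.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload it; the returned file id is what gets passed to fine_tuning.jobs.create.
uploaded = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
print(uploaded.id)
```

How many such examples make the result genuinely proprietary is exactly the open question here; the format is trivial, the examples are the moat.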

Alessandro pfp
Alessandro
@azeni
i suspect tuning will play a big role in creating value that can't be captured by the owners of base models. but it's still a tough nut to crack. we might go through something similar to the dot-com crash, especially if openai keeps shipping as relentlessly as they do. they understand the market and adapt accordingly.
0 reply
0 recast
0 reaction