Alessandro
@azeni
how can one avoid building a chatgpt wrapper?
5 replies
0 recast
1 reaction

Alessandro
@azeni
a key point might be proprietary data.
1 reply
0 recast
0 reaction

Britt Kim
@brittkim.eth
For LLMs, grab a model from 🤗 and fine-tune the head. But I still think you should make a wrapper that's interoperable with ChatGPT, just in case they drop a v4.5 that blows everything away.
2 replies
0 recast
2 reactions
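
A minimal sketch of what Britt describes, assuming a Hugging Face transformers workflow: load a pretrained checkpoint, freeze the base model, and train only the classification head. The checkpoint name, label count, and task are illustrative assumptions, not details from the thread.

```python
# Sketch: "grab a model from HF and fine-tune the head" (assumed setup).
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Freeze the base encoder; only the freshly initialized head stays trainable.
for param in model.base_model.parameters():
    param.requires_grad = False

# Then train on your (proprietary) data with transformers.Trainer or a plain
# PyTorch loop; gradients flow only into the head's parameters.
```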

Gabriel Ayuso
@gabrielayuso.eth
Everything in LLMs is built on top of a base model. There are just different strategies to achieve what you need. I guess what people call a "wrapper" is when prompt engineering is used. Maybe tuning will get you away from the "wrapper" moniker?
2 replies
0 recast
2 reactions
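
For contrast, here is a minimal sketch of the prompt-engineering "wrapper" Gabriel mentions, assuming the openai Python client (v1+): the differentiation lives in the prompt and whatever proprietary context gets injected, while the base model does the rest. The model name, system prompt, and function shape are illustrative assumptions.

```python
# Sketch of a prompt-engineering "wrapper" over a hosted base model (assumed design).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str, proprietary_context: str) -> str:
    # Prompt engineering: inject your own data into the context window.
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{proprietary_context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```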

Michael Huang
@michaelhly
i'd also start with hugging face https://huggingface.co/models
0 reply
0 recast
1 reaction
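
As a starting point in the spirit of Michael's suggestion, a short sketch of pulling a hosted checkpoint from https://huggingface.co/models via the transformers pipeline API; the checkpoint and prompt here are illustrative assumptions.

```python
# Sketch: run a hub-hosted model locally with the pipeline API (assumed checkpoint).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
print(generator("Proprietary data matters because", max_new_tokens=30)[0]["generated_text"])
```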