Alessandro
@azeni
how can one avoid building a chatgpt wrapper?
5 replies
0 recast
1 reaction
Britt Kim
@brittkim.eth
For LLMs, grab a model from 🤗 and fine-tune the head. But I still think you should make a wrapper that is interoperable with chatgpt, just in case they drop a v4.5 that blows everything away.
2 replies
0 recast
2 reactions
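A minimal sketch of the "grab a model from 🤗 and fine-tune the head" step, assuming a text-classification task with the transformers library; the checkpoint name, label count, and printed parameter names are illustrative, not from the thread:

```python
# Load a pretrained checkpoint, freeze the body, and train only the head.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # illustrative; any 🤗 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # attaches a fresh, randomly initialized head
)

# Freeze the pretrained body so only the head receives gradients.
for param in model.base_model.parameters():
    param.requires_grad = False

print([n for n, p in model.named_parameters() if p.requires_grad])
# e.g. ['pre_classifier.weight', 'pre_classifier.bias',
#       'classifier.weight', 'classifier.bias']
```

Since only the head is trainable, this is far cheaper than full fine-tuning and can run on modest hardware.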
Britt Kim
@brittkim.eth
I just mean to abstract away the model in code, so you can plug and play with different models, including one that wraps the chatgpt api.
1 reply
0 recast
1 reaction
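A minimal sketch of that abstraction, assuming Python; `TextModel`, `LocalModel`, `ChatGPTModel`, and `answer` are hypothetical names, and the chatgpt call assumes the openai>=1.0 chat-completions client:

```python
# Application code depends only on a small interface; concrete backends
# (a local 🤗 pipeline, the chatgpt api, a future v4.5) plug in behind it.
from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class LocalModel:
    """Backend wrapping a local 🤗 pipeline."""
    def __init__(self, pipe):
        self.pipe = pipe  # e.g. transformers.pipeline("text-generation", ...)

    def complete(self, prompt: str) -> str:
        return self.pipe(prompt)[0]["generated_text"]


class ChatGPTModel:
    """Backend wrapping the OpenAI chat-completions endpoint."""
    def __init__(self, client, model: str = "gpt-4"):
        self.client = client  # e.g. an openai.OpenAI() instance
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


def answer(model: TextModel, question: str) -> str:
    # Swapping backends is a one-line change at the call site.
    return model.complete(question)
```

If a v4.5 drops, only the backend class changes; everything calling `answer` stays put.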
Alessandro
@azeni
i see. yes, agree. also the notion of the user 'picking the model' and having control over which to use seems like a nice track. although i wonder if in 10+ years it's gonna be one mega base model, or every human with their own local model. like a tinybox. https://tinygrad.org/
1 reply
0 recast
1 reaction
Britt Kim
@brittkim.eth
That’s the question! I’m hoping for Hotz’s bet to pay out. Even if it’s true that error rates only drop by increasing model parameters, we just need a point where error rates are low enough that further spending isn’t justified (usually better-than-human performance). Then we wait for Huang's law.
1 reply
0 recast
2 reactions