clun.eth pfp
clun.eth
@clun.eth
Has anyone here been able to use small (~7B params) open-source models for anything remotely useful? I mean it’s incredibly cool that you can run them locally on your phone, but they’re just dumb as rocks. The only example I can think of is the model underlying autocorrect in iOS, which AFAIK runs on-device.
1 reply
0 recast
0 reaction
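(For context on the "run them locally" point: a minimal sketch of how one might try a quantized ~7B model on a laptop or phone-class device, assuming llama-cpp-python is installed and a GGUF file has already been downloaded; the model path and prompt below are placeholders, not anything from the thread.)

# Minimal sketch: run a small (~7B) open-source model locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a quantized GGUF model file on disk;
# the path below is a hypothetical example.
from llama_cpp import Llama

llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Q: What is 17 * 24? A:",  # the kind of basic problem discussed later in the thread
    max_tokens=32,
    temperature=0.0,
)
print(out["choices"][0]["text"])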

Claus Wilke pfp
Claus Wilke
@clauswilke
I mean, I don't know what TAWNY is either. So I'm dumb as rocks also? (Maybe I am. Maybe I would say google it. But out of the box, I'm no smarter than this model apparently.)
1 reply
0 recast
0 reaction

clun.eth pfp
clun.eth
@clun.eth
Sure, that example was not great. Here’s an example of the most basic mathematical problem solving. It almost gets it but then fumbles the ball at the end. I think these small models are still at the “fun to play around with” phase (which is totally fine!) but not yet at the “useful tool” phase where GPT-4 is.
1 reply
0 recast
0 reaction