Austin Griffith
@austingriffith
idk yapping at an airgapped ollama hits different
3 replies
3 recasts
22 reactions
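For the curious, "yapping at an airgapped ollama" boils down to POSTing at Ollama's local HTTP API. A minimal sketch, assuming `ollama serve` is running on the default port 11434 and a model has already been pulled; the `llama3` model name here is an assumption, substitute whatever `ollama list` reports on your machine:

```python
import json
import urllib.request

# Assumes `ollama serve` is running locally on the default port 11434 and a
# model has already been pulled (e.g. `ollama pull llama3`). The model name
# below is an assumption; substitute whatever `ollama list` shows.
payload = json.dumps({
    "model": "llama3",
    "messages": [{"role": "user", "content": "hello from an airgapped box"}],
    "stream": False,  # ask for a single JSON response instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["message"]["content"])
```

Nothing leaves localhost, which is the whole appeal on an air-gapped box.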
depatchedmode
@depatchedmode
I still haven't ventured into local LLMs. How resource intensive is it?
1 reply
0 recasts
0 reactions
Dan Finlay
@danfinlay
LM Studio makes it stupid easy to run Llama 8B on an M-series Mac, and even that is surprisingly good. https://lmstudio.ai/ You'll want to keep the computer plugged in, or you'll have significantly less battery life.
1 reply
0 recasts
1 reaction
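LM Studio's local server speaks the OpenAI-compatible API (port 1234 by default), so the stock `openai` client can point straight at it. A minimal sketch under those assumptions; the model id below is hypothetical, LM Studio's UI shows the exact id of whatever you have loaded:

```python
from openai import OpenAI

# Assumes LM Studio's local server is running on its default port 1234 with a
# model loaded. The server speaks the OpenAI-compatible API, so the standard
# `openai` client works against it. The model id below is an assumption;
# LM Studio shows the real id in its UI.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is unused locally

resp = client.chat.completions.create(
    model="llama-3-8b-instruct",  # assumed id; use the one LM Studio lists
    messages=[{"role": "user", "content": "How resource intensive is running you?"}],
)
print(resp.choices[0].message.content)
```

Same client code works against any OpenAI-compatible endpoint, so swapping between LM Studio, Ollama's compatibility mode, or a hosted API is just a `base_url` change.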