Austin Griffith (@austingriffith)
idk yapping at an airgapped ollama hits different
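For anyone wondering what "yapping at an airgapped ollama" looks like in practice: Ollama serves a local REST API on port 11434, so a minimal offline chat loop is only a few lines. A rough sketch, assuming the server is running and a model has been pulled; the model name "llama3" is a placeholder for whatever you have locally:

```python
# Minimal sketch of chatting with a local Ollama server.
# Assumes `ollama serve` is running and a model has been pulled,
# e.g. `ollama pull llama3` ("llama3" is an assumed placeholder).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def ask(prompt: str) -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Simple REPL: type a prompt, get the model's reply. Everything
    # stays on the machine, which is the whole airgapped appeal.
    while True:
        print(ask(input("> ")))
```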

depatchedmode (@depatchedmode)
I still haven’t ventured into local LLMs. How resource intensive is it?

Dan Finlay 🦊 (@danfinlay)
LM Studio makes it stupid easy to run Llama 8B on an M-series Mac, and even that is surprisingly good. https://lmstudio.ai/ You'll want to have the computer plugged in or you'll have significantly less battery life.
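Once LM Studio's local server is running, it exposes an OpenAI-compatible API (by default on port 1234), so you can script against the model the same way you would against a hosted one. A rough sketch under those assumptions; the "local-model" identifier is a placeholder, since LM Studio answers with whichever model is loaded in the app:

```python
# Minimal sketch of querying LM Studio's local OpenAI-compatible server.
# Assumes the server has been started in LM Studio (default port 1234)
# and a model is loaded; "local-model" is a placeholder identifier.
import requests

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def chat(prompt: str) -> str:
    resp = requests.post(
        LMSTUDIO_URL,
        json={
            "model": "local-model",  # placeholder; the loaded model responds regardless
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(chat("How resource intensive are you, really?"))
```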