Austin Griffith
@austingriffith
idk yapping at an airgapped ollama hits different
3 replies
3 recasts
21 reactions
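What "yapping at an airgapped ollama" looks like in practice: ollama serves a REST API on localhost:11434 by default, so you can chat with it from a few lines of Python with no network access at all. A minimal sketch, assuming ollama is running and a model has been pulled (the name "llama3" below is an assumption; substitute whatever you pulled):

import json
import urllib.request

# Ask the local ollama server for a one-shot completion.
# "stream": False returns a single JSON object instead of a token stream.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",  # assumption: use your pulled model's name
        "prompt": "Why does an airgapped LLM hit different?",
        "stream": False,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])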
depatchedmode
@depatchedmode
I still haven't ventured into local LLMs. How resource intensive is it?
1 reply
0 recasts
0 reactions
Dan Finlay 🦊
@danfinlay
LM Studio makes it stupid easy to run Llama 8B on an M-series Mac, and even that is surprisingly good. https://lmstudio.ai/ You'll want to keep the computer plugged in, though, or your battery life will take a significant hit.
1 reply
0 recasts
1 reaction
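LM Studio can also serve the loaded model over an OpenAI-compatible local API once you enable its local server (localhost:1234 is the default port). A minimal sketch, assuming the server is enabled and a model is loaded:

import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions format.
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps({
        "model": "local-model",  # LM Studio answers with whichever model is loaded
        "messages": [{"role": "user", "content": "Hello from a fully offline laptop"}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])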
depatchedmode
@depatchedmode
Sweet! Gonna give it a spin. I guess we ain't running the 80B models on a laptop yet, eh.
0 replies
0 recasts
1 reaction
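The rough math behind "not on a laptop yet": a model's weights need about (parameters x bits per weight) / 8 bytes of memory, before KV cache and runtime overhead. A quick sketch:

# Weight memory in GB: billions of parameters * bits per weight / 8.
# Lower bound only; KV cache and overhead add more on top.
def model_gb(params_billion: float, bits: int = 4) -> float:
    return params_billion * bits / 8

print(model_gb(8))   # ~4 GB at 4-bit quantization: comfortable on a 16 GB M-series Mac
print(model_gb(80))  # ~40 GB at 4-bit: wants 64 GB+ of unified memory, hence "not yet"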