franco
@francos.eth
I want to build my own local LLM server. Thinking of using a cluster of Mac minis instead of NVIDIA GPUs. Has anyone built one? "Not your weights, not your brain." - Andrej Karpathy https://exolabs.net/
1 reply
5 recasts
33 reactions
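
A quick sketch of what talking to such a cluster could look like. exo exposes an OpenAI-style chat endpoint, but the host, port, and model id below are assumptions; substitute whatever your node actually reports on startup.

```python
# Minimal sketch: query a local LLM cluster through an
# OpenAI-compatible chat endpoint. The port (52415) and the
# model id are assumptions -- check your own exo setup.
import json
import urllib.request

ENDPOINT = "http://localhost:52415/v1/chat/completions"  # assumed
payload = {
    "model": "llama-3-70b",  # assumed model id
    "messages": [{"role": "user", "content": "Why run weights locally?"}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```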

HH
@hamud
It might be cheaper to buy server CPU RAM and run the LLM on that.
1 reply
0 recasts
3 reactions
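
Back-of-envelope numbers for the CPU-RAM route, as a sketch: 4-bit quantized weights are roughly half a byte per parameter, and CPU generation speed is roughly memory bandwidth divided by model size, since each generated token reads (roughly) the whole model once. The bandwidth figure below is an assumption for a multi-channel DDR4/DDR5 server board.

```python
# Rough sizing for CPU inference of a 70B model. The ~Q4
# quantization (0.5 bytes/param), overhead, and the 200 GB/s
# bandwidth figure are assumptions; plug in your own hardware.
PARAMS = 70e9            # Llama 3 70B
BYTES_PER_PARAM = 0.5    # ~Q4 quantization (assumed)
OVERHEAD_GB = 8          # KV cache + runtime, rough guess

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"weights: ~{weights_gb:.0f} GB, plus ~{OVERHEAD_GB} GB overhead")

# Token generation is memory-bandwidth bound on CPU:
BANDWIDTH_GBPS = 200     # assumed server memory bandwidth
tokens_per_s = BANDWIDTH_GBPS / weights_gb
print(f"~{tokens_per_s:.1f} tokens/s upper bound on CPU")
```

So the RAM is cheap, but throughput tops out around a handful of tokens per second on typical server memory bandwidth.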

franco
@francos.eth
Yeah, that's the other option, but then I'd also need 2x NVIDIA P40s (for 70B Llama 3), plus an additional fan, shroud, and power converter cable for each one, because they're server-grade cards.
0 replies
0 recasts
1 reaction
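
For reference, the 2x P40 math works out like this, as a sketch assuming 4-bit quantization and an even split across the cards; the KV-cache figure is a rough assumption that grows with context length.

```python
# Does a quantized 70B fit across 2x NVIDIA P40 (24 GB each)?
# The ~Q4 weights (0.5 bytes/param) and KV-cache estimate are
# assumptions; actual usage depends on quant format and context.
GPUS = 2
VRAM_PER_GPU_GB = 24          # P40
PARAMS = 70e9
BYTES_PER_PARAM = 0.5         # ~Q4 (assumed)
KV_CACHE_GB = 5               # rough, grows with context length

need_gb = PARAMS * BYTES_PER_PARAM / 1e9 + KV_CACHE_GB
have_gb = GPUS * VRAM_PER_GPU_GB
print(f"need ~{need_gb:.0f} GB, have {have_gb} GB ->",
      "fits" if need_gb <= have_gb else "does not fit")
```

Roughly 40 GB needed against 48 GB available, which is why two P40s is the usual floor quoted for a 4-bit 70B.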