franco
@francos.eth
I want to build my own local LLM server. Thinking of using a cluster of Mac minis instead of NVIDIA GPUs. Has anyone built one? "Not your weights, not your brain." - Andrej Karpathy https://exolabs.net/
1 reply
5 recasts
33 reactions
HH
@hamud
It might be cheaper to buy a server with lots of CPU RAM and run the LLM on that.
1 reply
0 recasts
3 reactions