nicholas 🧨
@nicholas
Top-end Mac Studio can run Llama 3.2 90B at full precision.
4 replies
0 recast
9 reactions
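A quick back-of-envelope check of that claim, assuming "full precision" means FP16/BF16 weights; the numbers are illustrative, not a benchmark:

```python
# Rough memory math for Llama 3.2 90B at FP16 (2 bytes per parameter).
params = 90e9
weights_gb = params * 2 / 1e9   # -> 180 GB of weights alone
print(f"{weights_gb:.0f} GB")
# A top-end Mac Studio (M2 Ultra, 192 GB unified memory) clears that,
# leaving ~12 GB of headroom for the KV cache and activations.
```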

not parzival
@shoni.eth
yes i've done it :)
1 reply
0 recast
1 reaction

nicholas 🧨
@nicholas
what’s the most beastly model you’ve run locally?
1 reply
0 recast
0 reaction

not parzival
@shoni.eth
the new llama is where i tapped out, and it was fast. im planning to deploy some trained 7B ones to production from the studio though. otherwise my next upgrade would prob be tinygrad, but no use case yet
1 reply
0 recast
1 reaction
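A minimal sketch of what serving one of those fine-tuned 7B models from the Studio could look like with mlx-lm (one option among several on Apple silicon); the checkpoint path is a placeholder for an MLX-converted model:

```python
# Sketch: load and query a fine-tuned 7B with mlx-lm on Apple silicon.
# "path/to/my-7b-mlx" is a placeholder for an MLX-converted checkpoint.
from mlx_lm import load, generate

model, tokenizer = load("path/to/my-7b-mlx")
reply = generate(model, tokenizer, prompt="ping", max_tokens=64)
print(reply)
```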

nicholas 🧨
@nicholas
have you tried the networked unified memory across macs? would let me combine my old 64GB m1 + 128GB m4 to reach llama 3.2 90B. but kludge abounds.
1 reply
0 recast
0 reaction
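On paper the pooling math works out, again assuming FP16 weights; tools that shard a model across machines (exo, MLX's distributed mode) add their own overhead, hence the kludge:

```python
# Does 64 GB (M1) + 128 GB (M4) reach Llama 3.2 90B at FP16?
pool_gb = 64 + 128               # combined unified memory across both Macs
weights_gb = 90e9 * 2 / 1e9      # ~180 GB of FP16 weights
print(pool_gb >= weights_gb)     # True -- ~12 GB of headroom, before
                                 # sharding/runtime overhead eats into it
```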