nicholas 🧨
@nicholas
Top end Mac Studio can run Llama 3.2 90B at full precision.
4 replies
0 recast
9 reactions
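A quick sanity check of the claim above: at "full precision" (taken here to mean FP16/BF16, 2 bytes per parameter — an assumption, since FP32 would not fit), the weights of a 90B-parameter model come to roughly 180 GB, which just squeezes into the 192 GB of unified memory on a top-end Mac Studio. A minimal sketch of that arithmetic:

```python
# Back-of-envelope memory estimate for running a dense LLM locally.
# Assumptions: "full precision" = FP16/BF16 (2 bytes/param), top-end
# Mac Studio with 192 GB unified memory; KV cache and runtime overhead
# are ignored, so this is a lower bound on real usage.

def weight_memory_gb(params_billion: float, bytes_per_param: int) -> float:
    """Approximate weight memory in GB for a dense model."""
    return params_billion * 1e9 * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(90, 2)
print(f"90B @ FP16: {fp16_gb:.0f} GB of weights")   # 180 GB
print(f"fits in 192 GB? {fp16_gb < 192}")           # True, barely
```

Note the headroom is only ~12 GB, which is why anything beyond the weights (KV cache, long contexts) gets tight fast at this size.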
not parzival
@shoni.eth
yes i've done it :)
1 reply
0 recast
1 reaction
nicholas 🧨
@nicholas
what’s the most beastly model you’ve run locally?
1 reply
0 recast
0 reaction
not parzival
@shoni.eth
the new llama is where i tapped out; it was fast. i'm planning to deploy some trained 7B ones to production from the studio though. otherwise my next upgrade would prob be tinygrad, but no use case yet
1 reply
0 recast
1 reaction