nicholas 🧨
@nicholas
Top-end Mac Studio can run Llama 3.2 90B at full precision.
4 replies
0 recast
9 reactions
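For scale, weight memory is roughly parameter count times bytes per parameter. A quick sketch of the arithmetic behind this claim (my numbers, not from the thread; it assumes "full precision" here means unquantized fp16 and ignores KV cache and activations):

```python
# Back-of-the-envelope weight footprint for a dense LLM:
# params * bytes_per_param, ignoring KV cache and activations.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# Llama 3.2 90B at fp16 ("full precision" in local-inference parlance):
print(weight_memory_gb(90, 2))  # ~180 GB -> fits a 192 GB Mac Studio
# At true fp32 it would not:
print(weight_memory_gb(90, 4))  # ~360 GB
```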
not parzival
@shoni.eth
yes i've done it :)
1 reply
0 recast
1 reaction
nicholas 🧨
@nicholas
what’s the most beastly model you’ve run locally?
1 reply
0 recast
0 reaction
not parzival
@shoni.eth
the new llama is where i tapped out, it was fast. i'm planning to deploy some trained 7B models to production from the studio. otherwise my next upgrade would prob be tinygrad, but no use case yet
1 reply
0 recast
1 reaction
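One plausible way to serve a 7B model from a Mac Studio is llama-cpp-python with Metal offload; the thread doesn't say which stack is actually used, and the model path below is a placeholder:

```python
# Sketch: local inference with llama-cpp-python on Apple Silicon.
from llama_cpp import Llama

llm = Llama(
    model_path="models/my-finetuned-7b.Q4_K_M.gguf",  # hypothetical GGUF path
    n_gpu_layers=-1,  # offload all layers to the Metal backend
    n_ctx=4096,       # context window
)

out = llm("Q: What is unified memory? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

For production, the same package ships an OpenAI-compatible server (`python -m llama_cpp.server --model <path>`), so clients can hit it like any hosted API.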
nicholas 🧨
@nicholas
have you tried the networked unified memory across macs? it would let me use my old 64GB m1 + a 128GB m4 to reach llama 3.2. but kludge abounds.
1 reply
0 recast
0 reaction
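The arithmetic behind pooling the two machines (my sketch, assuming fp16 weights at ~2 bytes/param and a simple proportional layer split; in practice tools like exo or llama.cpp's RPC backend handle the partitioning, and the 80-layer count is an assumption for a model of this class):

```python
# Why two networked Macs could "reach" Llama 3.2 90B.
machines_gb = {"m1_studio_64": 64, "m4_128": 128}
model_gb = 90 * 2  # ~180 GB of fp16 weights

total = sum(machines_gb.values())
print(total >= model_gb)  # True: 192 GB pooled vs ~180 GB needed

# Proportional layer assignment across the pool:
n_layers = 80  # assumed layer count
for name, mem in machines_gb.items():
    share = round(n_layers * mem / total)
    print(f"{name}: ~{share} layers")
```

The catch the thread gestures at: layers past the split point wait on the network link every token, so a slow interconnect, not memory, becomes the bottleneck.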
not parzival
@shoni.eth
no, but it seems solid on minis. i'm hoping to get a second studio later, so my only concern is keeping the different generations in sync (haven't tried before). training on cpu is a lot slower tho, hence tinygrad is like 6-8 gpus
0 reply
0 recast
0 reaction