Ben
@benersing
Who is locally running an AI model?
- Which model(s)?
- What’s your tech stack?
- Will you keep doing it?
3 replies
0 recast
10 reactions
matt 💭
@matthewmorek
- DeepSeek Coder v2 (8-bit, MLX)
- LM Studio on 2024 M3 Max MBP (36GB RAM)
- Only just started, but after a weekend of fun with it, definitely!
2 replies
0 recast
1 reaction
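[For context on this stack: LM Studio serves locally loaded models through an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming the server is running on LM Studio's default port (1234); the model identifier below is hypothetical and should match whatever LM Studio reports for the loaded DeepSeek Coder v2 quant.]

```python
# Minimal sketch: query a model served locally by LM Studio.
# Assumes LM Studio's OpenAI-compatible server on its default
# port (1234); the model ID below is hypothetical.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "deepseek-coder-v2-lite-instruct-mlx-8bit",  # hypothetical ID
        "messages": [
            {"role": "user", "content": "Write a Python hello world."}
        ],
        "temperature": 0.2,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```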
Ben
@benersing
I've heard there can be high latency. What's been your experience?
1 reply
0 recast
0 reaction
matt 💭
@matthewmorek
DeepSeek R1 might lag a bit because it's a reasoning model, but Coder v2 (instruction-tuned) is comparable to GPT-4 Turbo, so it's incredibly quick.
0 reply
0 recast
1 reaction
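[To put a number on "quick", here is a rough tokens-per-second check against the same local server; a sketch under the same assumptions as above (default port, hypothetical model ID).]

```python
# Rough throughput check for a locally served model (sketch;
# assumes LM Studio's OpenAI-compatible server on localhost:1234,
# hypothetical model ID).
import time

import requests

start = time.time()
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "deepseek-coder-v2-lite-instruct-mlx-8bit",  # hypothetical
        "messages": [
            {"role": "user", "content": "Explain binary search briefly."}
        ],
    },
    timeout=300,
)
elapsed = time.time() - start

# The server reports token usage in the OpenAI-style "usage" field.
tokens = resp.json().get("usage", {}).get("completion_tokens", 0)
print(f"{elapsed:.1f}s total, ~{tokens / elapsed:.1f} tokens/sec")
```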