gm8xx8
@gm8xx8
CodeLlama 70B is finally here. Base model: https://huggingface.co/codellama/CodeLlama-70b-hf
3 replies
1 recast
14 reactions
gm8xx8
@gm8xx8
Quantized CodeLlama 7B Python to 4-bit with MLX; the model is now optimized for fast inference on Apple Silicon. https://huggingface.co/mlx-community/CodeLlama-7b-Python-4bit-MLX
0 reply
0 recast
5 reactions
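For context on what a 4-bit conversion like the one above does, here is a minimal NumPy sketch of group-wise affine quantization, in the spirit of what MLX applies to model weights. The group size, rounding scheme, and function names are illustrative assumptions, not MLX's actual kernels.

```python
import numpy as np

def quantize_4bit(w: np.ndarray, group_size: int = 64):
    """Quantize a flat float array to 4-bit codes (0..15),
    storing one scale and offset per group of `group_size` values.
    (Group size 64 is an assumption for illustration.)"""
    w = w.reshape(-1, group_size)
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0          # 4 bits -> 16 quantization levels
    scale[scale == 0] = 1.0           # guard against flat groups
    codes = np.round((w - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize_4bit(codes, scale, lo):
    """Reconstruct approximate float weights from codes + per-group params."""
    return (codes.astype(np.float32) * scale + lo).reshape(-1)

rng = np.random.default_rng(0)
weights = rng.standard_normal(256).astype(np.float32)
codes, scale, lo = quantize_4bit(weights)
restored = dequantize_4bit(codes, scale, lo)
print("max code:", codes.max(), "max abs error:", np.abs(weights - restored).max())
```

Each weight is stored as a 4-bit index instead of a 16- or 32-bit float, roughly a 4–8x memory reduction, which is what makes the 7B model fast on Apple Silicon's unified memory.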