avi
@avichalp.eth
running llama locally has become really convenient with this project. tried a 7B model on an M1 MacBook Pro and it ran very smoothly. it also comes with a web UI: https://github.com/Mozilla-Ocho/llamafile
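for anyone curious, the basic flow looks roughly like this (a sketch based on the llamafile README's quickstart; the specific model file name and download URL here are illustrative examples, not something stated in this post):

```shell
# download an example llamafile (a single self-contained executable that
# bundles llama.cpp plus the model weights); file name/URL are illustrative
curl -LO https://huggingface.co/Mozilla/llava-v1.5-7b-llamafile/resolve/main/llava-v1.5-7b-q4.llamafile

# mark it executable
chmod +x llava-v1.5-7b-q4.llamafile

# running it starts a local server that serves the bundled chat web UI
./llava-v1.5-7b-q4.llamafile
# then open http://localhost:8080 in a browser
```

the same single file runs on macOS, Linux, and Windows thanks to the Cosmopolitan libc build, which is what makes the setup so low-friction.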