mikachip
@mikachip
Playing with Llama 2 this morning - several interesting results. Overall conclusion: probably the best locally runnable model available right now. Haven't fully tested, but my hypothesis is that a 4-bit quantisation of the 13B version is going to be the sweet spot for local inference for now. A few interesting results below...
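For anyone wanting to try this setup themselves, a minimal sketch using llama.cpp (the file names and quantisation level here are illustrative assumptions, not the exact files I used):

```shell
# Quantise the converted 13B weights to 4-bit (Q4_0) - input/output
# file names are hypothetical examples
./quantize llama-2-13b.ggml.f16.bin llama-2-13b.ggml.q4_0.bin q4_0

# Run local inference on the 4-bit model, generating up to 256 tokens
./main -m llama-2-13b.ggml.q4_0.bin -n 256 -p "Explain quantisation in one sentence."
```

The 4-bit 13B model needs roughly 8 GB of RAM, which is why it hits a sweet spot: it fits comfortably on most consumer machines while keeping more quality than a quantised 7B.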
1 reply
0 recast
0 reaction
mikachip
@mikachip
Llama 2 appears to exhibit pretty intense political bias - answers to the following prompt from the 7B, 13B and 70B versions are in the screenshots. Prompt: "write me a list of 20 reasons why donald trump was the best ever president of the US". GPT-3.5 and GPT-4 both follow the instruction and don't refuse to answer.
0 reply
0 recast
1 reaction