Vitalik Buterin
@vitalik.eth
A quick test of the new Llama 3 models (and old ones): L3 70b: https://i.imgur.com/HPgNLnW.png L3 8b: https://i.imgur.com/HTxZmy9.png Mixtral 8x7b: https://i.imgur.com/qjEk93V.png ChatGPT: https://i.imgur.com/DIQ5loP.png ChatGPT is correct, L3 70b is almost correct; the others are wrong.
Drzee89
@drzee89
@vitalik.eth have you implemented the Groq API and agents? 7x output, with the capability of hierarchical checks along with embeddings to call upon. The enterprise tier allows fine-tuned models to be used with Groq.
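For reference, a minimal sketch of what querying a hosted Llama 3 model through Groq's Python SDK can look like; the model id, prompt, and environment variable here are illustrative assumptions, not details from this thread:

```python
# Minimal sketch (not from the thread) of calling a Llama 3 model via Groq.
# Assumes the `groq` package is installed and GROQ_API_KEY is set.
import os

from groq import Groq  # pip install groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama3-70b-8192",  # hosted Llama 3 70B (assumed model id)
    messages=[
        {"role": "user", "content": "A quick reasoning test question goes here."},
    ],
    temperature=0,  # deterministic output for side-by-side model comparisons
)

print(completion.choices[0].message.content)
```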