gm8xx8
@gm8xx8
the llamas are coming. 2 words … local agents
BennyJ504
@bennyj504
Nice
eggman
@eggman.eth
>small versions
It upsets me that we seem to be locked into the ~7B world as far as retail/open-source LLMs go. Seeing that GPT-3 was built on 175B parameters (and GPT-4 allegedly running past 1.7T) was really a bit eye-opening for me. Granted, I'm sure data quality means a lot here, given that Mixtral isn't too far off GPT-3 with 8x7B.
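For a sense of the gap being described, here is a rough back-of-the-envelope sketch in Python. The Mixtral figures (~46.7B total parameters, ~12.9B active per token, since the eight experts share attention weights rather than being eight independent 7B models) are approximate publicly reported numbers, not something stated in the thread:

# Back-of-the-envelope parameter comparison, assuming commonly cited figures:
# dense GPT-3 at ~175B, Mixtral 8x7B as a mixture-of-experts where total != 8 * 7B.

GPT3_PARAMS_B = 175.0       # dense: every parameter is active for every token
MIXTRAL_TOTAL_B = 46.7      # approximate total parameters of the 8x7B MoE
MIXTRAL_ACTIVE_B = 12.9     # approximate parameters active per token (2 of 8 experts routed)

print(f"Mixtral total vs GPT-3:  {MIXTRAL_TOTAL_B / GPT3_PARAMS_B:.0%}")   # ~27%
print(f"Mixtral active vs GPT-3: {MIXTRAL_ACTIVE_B / GPT3_PARAMS_B:.0%}")  # ~7%

So roughly a quarter of GPT-3's total parameters, and well under a tenth active per token, which is why the "quality of data means a lot here" point carries so much weight.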