𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
the llamas are coming πŸ‘€ 2 words … local agents 😈
2 replies
2 recasts
33 reactions

BennyJ504 πŸŽ©πŸ”΅
@bennyj504
Nice
0 reply
0 recast
0 reaction

eggman πŸ”΅
@eggman.eth
>small versions
It upsets me that we seem to be locked into the ~7B world as far as retail/open-source LLMs go. Seeing that GPT-3 was built on 175B (and GPT-4 allegedly running past 1.7T) was really a bit eye-opening for me. Granted, I'm sure quality of data means a lot here, given Mixtral isn't too far off GPT-3 with 8x7B (rough numbers sketched below).
0 reply
0 recast
0 reaction
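A rough back-of-the-envelope sketch (Python) of the comparison in the cast above. The figures are approximate public numbers, not exact counts: Mixtral 8x7B totals roughly 47B parameters (the experts share attention layers, so it is less than 8 Γ— 7B) and activates about 13B parameters per token under top-2 routing, against GPT-3's dense 175B.

# Approximate figures, assumed for illustration only.
DENSE_GPT3_PARAMS     = 175e9  # GPT-3, dense: every weight is used for every token
MIXTRAL_TOTAL_PARAMS  = 47e9   # Mixtral 8x7B total (experts share attention, so < 8 * 7B)
MIXTRAL_ACTIVE_PARAMS = 13e9   # ~2 of 8 experts routed per token (top-2 gating)

print(f"weights: dense 175B is ~{DENSE_GPT3_PARAMS / MIXTRAL_TOTAL_PARAMS:.1f}x Mixtral's total")
print(f"compute: dense 175B is ~{DENSE_GPT3_PARAMS / MIXTRAL_ACTIVE_PARAMS:.1f}x Mixtral's active per token")

The per-token active count is the part that makes a model like this plausible on retail hardware, which is what the cast is getting at.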