
gm8xx8

@gm8xx8

309 Following
25612 Followers


gm8xx8
@gm8xx8
https://warpcast.com/gm8xx8/0xcfe86a7a
2 replies
1 recast
10 reactions

gm8xx8
@gm8xx8
some llm.c updates … i’m looking forward to this walkthrough ngl! https://x.com/karpathy/status/1781387674978533427?s=46
0 reply
0 recast
4 reactions

gm8xx8
@gm8xx8
i’ll leave this here: The Rise and Potential of Large Language Model Based Agents: A Survey https://arxiv.org/abs/2309.07864 (again)… and this: More Agents Is All You Need https://arxiv.org/abs/2402.05120
0 reply
0 recast
7 reactions

gm8xx8
@gm8xx8
i’ll leave this here.
0 reply
1 recast
7 reactions

gm8xx8
@gm8xx8
0 reply
2 recasts
22 reactions

gm8xx8
@gm8xx8
Reka paper is out… Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models ↓ https://arxiv.org/abs/2404.12387
0 reply
0 recast
7 reactions

gm8xx8
@gm8xx8
recently huggingface launched huggingchat on iOS and it went largely unnoticed. huggingchat gives easy access to open-source models… oh and yes, llama 3 instruct is available to try 😉
0 reply
1 recast
10 reactions

gm8xx8
@gm8xx8
having some fun with sub agents and listening to talks about llama 3… we are not the same lol https://github.com/Doriandarko/maestro
0 reply
0 recast
8 reactions

gm8xx8
@gm8xx8
local agents… i’ll let that sink in. Maestro x Ollama https://github.com/Doriandarko/maestro (minimal local-call sketch below)
0 reply
1 recast
9 reactions
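Not Maestro’s own code, just a minimal sketch of the kind of local call a Maestro-style orchestrator makes once it targets Ollama’s HTTP API; the model name, prompts, and hard-coded sub-tasks are placeholders.

# Minimal sketch: send sub-tasks to a locally served Ollama model.
# Assumes Ollama is running on its default port with a llama3 model pulled;
# the orchestration wrapper here is illustrative, not Maestro's actual code.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

def run_sub_agent(task: str, model: str = "llama3") -> str:
    """Send one sub-task to a local model and return its reply."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a focused sub agent. Complete the task concisely."},
            {"role": "user", "content": task},
        ],
        "stream": False,
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    # A real orchestrator would generate these sub-tasks itself; hard-coded here.
    for step in ["list the files a python CLI project needs", "draft a minimal pyproject.toml"]:
        print(run_sub_agent(step))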

gm8xx8
@gm8xx8
https://warpcast.com/gm8xx8/0x9bc51321
0 reply
0 recast
3 reactions

gm8xx8
@gm8xx8
https://x.com/karpathy/status/1781047292486914189?s=46
1 reply
0 recast
7 reactions

gm8xx8
@gm8xx8
(checks phone) 200+ notifications (throws phone) if i don’t get back to you, don’t take it personal. running evals ✍️
0 reply
0 recast
4 reactions

gm8xx8
@gm8xx8
great review of llama 3 from karpathy ☕️ https://x.com/karpathy/status/1781028605709234613?s=46
1 reply
2 recasts
10 reactions

gm8xx8
@gm8xx8
two words: structured extraction
1 reply
2 recasts
9 reactions
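A minimal sketch of what structured extraction usually looks like in practice: ask the model for JSON that matches a schema, then validate it. The Invoice fields are made up for illustration, and call_llm is a stand-in for whichever chat API you use.

# Minimal sketch of structured extraction: constrain the model to JSON,
# then validate against a schema. `call_llm` is a placeholder for your
# chat completion API; the Invoice schema is an invented example.
import json
from pydantic import BaseModel

class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str

PROMPT_TEMPLATE = (
    "Extract the invoice fields from the text below.\n"
    "Respond with ONLY a JSON object with keys: vendor, total, currency.\n\n"
    "{text}"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your chat completion API")

def extract_invoice(text: str) -> Invoice:
    raw = call_llm(PROMPT_TEMPLATE.format(text=text))
    data = json.loads(raw)   # fails loudly if the model strayed from JSON
    return Invoice(**data)   # fails loudly if fields are missing or mistyped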

gm8xx8
@gm8xx8
lol
0 reply
0 recast
5 reactions

gm8xx8
@gm8xx8
apparently Llama 3 is now running at up to 350–380 tokens per second for the 8B and up to 150 tokens per second for the 70B. groq-ish ✔️
4 replies
1 recast
8 reactions

gm8xx8
@gm8xx8
🦙🦙🦙
0 reply
0 recast
10 reactions

gm8xx8
@gm8xx8
this. 👏 https://x.com/drjimfan/status/1781006672452038756?s=46
0 reply
1 recast
11 reactions

gm8xx8
@gm8xx8
i’ll leave this here…
0 reply
2 recasts
10 reactions

gm8xx8
@gm8xx8
Llama 3 MMLU:
- 70B model: 82%, surpassing Gemini Pro 1.5 & Claude 3 Sonnet.
- 8B model: 68.4%, outperforming Gemma 7B & Mistral 7B.
(62.2% HumanEval, 68.4% MMLU on an 8B)
llama 3 on huggingface ☺︎ (loading sketch below)
https://huggingface.co/meta-llama/Meta-Llama-3-70B
https://huggingface.co/meta-llama/Meta-Llama-3-8B
0 reply
2 recasts
8 reactions
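For the curious, a minimal sketch of pulling the linked 8B checkpoint with transformers; it assumes you have been granted access to the gated meta-llama repo, are logged in to Hugging Face, and have accelerate installed for device_map. The prompt is just a placeholder.

# Minimal sketch: load the base Meta-Llama-3-8B checkpoint from Hugging Face
# and generate a short completion. Assumes access to the gated repo and a GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fits more comfortably in memory than fp32
    device_map="auto",           # requires the accelerate package
)

inputs = tokenizer("The key idea behind scaling laws is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))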