Dan Romero
@dwr.eth
For AI folks, how big of a deal is this? Scale of 1-10? cc @pushix @theodormarcu @scharf https://twitter.com/ylecun/status/1681336284453781505
19 replies
3 recasts
42 reactions

Nicholas Charriere
@pushix
Very big deal. My personal bet is that 2 years from now most people are running fine-tuned Llamas and OpenAI's market share takes a big hit.
2 replies
0 recast
8 reactions

Theodor Marcu
@theodormarcu
Insanely big deal. Costs are coming down very fast. That being said, it's not as big if you know that many companies were already using Llama 1 despite it being "non-commercial" 🤫
2 replies
0 recast
3 reactions

Ben Scharfstein
@scharf
I think it means GPT-3.5 won't get used as much; GPT-4 can still do things that Llama 2 can't. It's not *that* big a deal though, because I think everyone expected this to happen soon.
0 reply
0 recast
3 reactions

Max Miner
@mxmnr
Seems pretty huge with the combined 'commercial use' designation and models at 7B, 13B, and 70B parameters, all available via Hugging Face etc. They're providing a meaningful alternative to OpenAI (Microsoft-backed) and Anthropic (Google-backed).
0 reply
0 recast
5 reactions

PhiMarHal
@phimarhal
Solid 9. It's not GPT-3.5 tier yet, let alone 4. But it's a solid step up from previous open-source models. The potential here lies in open-source fine-tuning.
0 reply
0 recast
3 reactions

Sam (crazy candle person) ✦
@samantha
It’s yuge! We were literally talking about a better AI at our all-hands today!!
0 reply
0 recast
2 reactions

BBB 👊
@bc-zip.eth
for those interested in tinkering: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
0 reply
2 recasts
12 reactions
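
[Editor's note: for anyone following that link, a minimal sketch of loading a Llama 2 checkpoint with Hugging Face transformers. It assumes you have requested and been granted access to the gated meta-llama repo and logged in via `huggingface-cli login`; the 7B chat variant is used here since the 70B model linked above needs multiple GPUs.]

```python
# Minimal sketch: load a Llama 2 chat model from Hugging Face and generate.
# Requires: transformers, accelerate (for device_map="auto"), and gated-repo access.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # spread layers across available GPUs/CPU
    torch_dtype="auto",   # use the checkpoint's native dtype (fp16)
)

inputs = tokenizer("What is Llama 2?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```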

🎩 MxVoid 🎩
@mxvoid
It’s a BFD, about a 10. Open sourcing these tools (with commercial use!) lets people tinker without worrying about APIs, tokens, getting their access cut off due to unexpected downtime, etc. Allows fine-tuning for specific use cases, e.g., training it on your own codebase for a better, customized AI assistant.
1 reply
0 recast
7 reactions
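
[Editor's note: a hedged sketch of the fine-tuning idea above, using the `peft` library's LoRA adapters so only a small set of adapter weights is trained, e.g., on your own codebase. The dataset handling and training loop are omitted; the hyperparameters are illustrative, not a recipe.]

```python
# Sketch: wrap Llama 2 with a LoRA adapter for parameter-efficient fine-tuning.
# Requires: transformers, peft, and gated-repo access as above.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the 7B weights
# ...train with transformers.Trainer or a custom loop on your own corpus...
```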

gm8xx8
@gm8xx8
Scale of models, performance, cost, & open source… Yes, this is a big deal.
0 reply
0 recast
5 reactions

Venkatesh Rao ☀️
@vgr
Big deal for commercialization. Lots of teams were previously using Llama for research but not product, and switching to other weaker weight sets with clean rights. This should unleash a bunch of products from limbo.
0 reply
0 recast
4 reactions

Dwayne 'The Jock' Ronson
@dwayne
https://twitter.com/phildaian/status/1681505848198152192
0 reply
0 recast
3 reactions

m_j_r
@m-j-r.eth
🌶️: GPT-4 is allegedly an MoE that can't run openly on consumer devices; maybe it's possible on an architecture like Petals. Point being, pretty much all embodied-agent research depends on the emergent reasoning of that architecture vs. LLMs like Llama/Orca/etc. Chat apps will be more competitive, though.
0 reply
0 recast
2 reactions

aerique
@aerique.eth
Time to replace Llama v1 with v2 on my phone (if possible 😅). https://mirror.xyz/xanny.eth/TBgwcBOoP9LZC6Mf570fG8VvZWhEn_uWZPHy3axIpsI
0 reply
0 recast
1 reaction
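
[Editor's note: on the phone point, the usual route is a 4-bit quantized build run through llama.cpp. A minimal sketch using the llama-cpp-python bindings follows; the local model filename is hypothetical and stands in for whatever quantized Llama 2 file you have converted or downloaded.]

```python
# Sketch: run a quantized Llama 2 locally via llama-cpp-python (CPU-friendly).
# Requires: pip install llama-cpp-python, plus a quantized model file on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.q4_0.bin",  # hypothetical 4-bit quantized file
    n_ctx=2048,                               # context window
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=48)
print(out["choices"][0]["text"])
```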

Giuliano Giacaglia
@giu
This is pretty big news, given that OpenAI's edge is now reduced by a fair amount.
0 reply
0 recast
1 reaction

j4ck 🥶↑🎩 icebreaker
@j4ck.eth
@web3pm
1 reply
0 recast
1 reaction

Daniel Lombraña
@teleyinex.eth
The catch is the hardware you need to run this fast and properly. While open source is the way to go, that has always been the catch. Google has been doing it for years with TensorFlow.
0 reply
0 recast
0 reaction

jamesyoung.eth
@jamesyoung
It is more about MS posturing: OpenAI, Meta, Nvidia, GitHub. Why Azure? (The roots go back to Satya.) https://twitter.com/alex_valaitis/status/1681348531834044426?s=46
0 reply
0 recast
1 reaction

Eric Platon
@ic
Stepping back from a high mark: big potential, but the rumored secret architecture changes that led to GPT-4 may well make all this deprecated early (not obsolete, but…). Meaning that to reach the more appealing GPT-4 “level”, the v2 lineage may need an overhaul, and may not work easily on “reasonable” hardware.
1 reply
0 recast
0 reaction

BrightFutureGuy 🎩🔮↑
@bfg
Unfortunately it’s as big as it gets, so 12 🫨 ’cos Microsoft & Zuck just made themselves more bulletproof ☹️
0 reply
0 recast
0 reaction