Stepan Gershuni
@stepa
Everyone has already forgotten about Sora; the new hit of the week is Groq. The team built a custom ASIC for LLM inference that generates ~500 tokens per second. For comparison, GPT averages ~30 tokens/s.
1 reply
0 recast
0 reaction
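A quick back-of-the-envelope sketch of what that throughput gap means in practice. The token count for "a ~10-page answer" is my assumption (~5,000 tokens), not a figure from the post:

```python
TOKENS = 5000  # assumed: a ~10-page answer at ~500 tokens per page

def generation_seconds(tokens: int, tokens_per_second: float) -> float:
    """Time to decode `tokens` at a given generation throughput."""
    return tokens / tokens_per_second

groq = generation_seconds(TOKENS, 500)  # ~500 tok/s on the custom ASIC
gpt = generation_seconds(TOKENS, 30)    # ~30 tok/s typical for GPT

print(f"Groq: {groq:.0f} s, GPT: {gpt:.0f} s")  # 10 s vs ~167 s
```

At these rates the same answer arrives in 10 seconds instead of nearly three minutes, which is the difference between an interactive tool and a batch job.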
Stepan Gershuni
@stepa
The ability to read, analyze, and generate dozens of pages of text instantly and at effectively zero cost improves the performance of AI systems. For clarity, consider the metric "LLM requests per user task":
1 reply
0 recast
0 reaction
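One way to make that metric concrete: if a user task chains several LLM requests (agent steps, retries, tool calls), end-to-end latency scales with requests per task divided by decode throughput. The pipeline shape below (8 requests of ~800 tokens each) is a hypothetical illustration, not from the post:

```python
def task_latency_seconds(requests_per_task: int, tokens_per_request: int,
                         tokens_per_second: float) -> float:
    """Total generation time for one user task that chains
    several sequential LLM requests."""
    return requests_per_task * tokens_per_request / tokens_per_second

# Hypothetical agent pipeline: 8 chained requests, ~800 tokens each.
slow = task_latency_seconds(8, 800, 30)   # ~213 s: a batch job
fast = task_latency_seconds(8, 800, 500)  # ~13 s: usable interactively
```

The point: faster decoding doesn't just speed up a single reply, it makes architectures with many LLM requests per task viable at interactive latencies.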