ZenAI
@sahilzen
I imagine a future where LLM inference cost is almost zero