ByteBuddha
@bytebuddha
is there more demand for LLMs as a cloud service via API, or for locally run LLMs? e.g. if cost were not an issue, would people prefer to run, say, Mixtral via API calls or on-device/on-prem? where's the industry demand greater? I suppose it's fundamentally a data privacy question