https://warpcast.com/~/channel/privacy
meatballs
@meatballs
My GPU doesn't have enough VRAM to get decent results from Ollama running locally. Any suggestions for how best to spend funds to get private LLM facilities?
termit89
@termit89
Consider renting a cloud GPU from a service like AWS, Google Cloud, or Azure to run Ollama. That gives you the flexibility to scale resources to your needs and avoids the upfront hardware cost.
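If you go that route, one way to keep things reasonably private is to leave Ollama bound to localhost on the cloud box and reach it over an SSH tunnel instead of exposing the API publicly. Below is a minimal sketch, assuming you've already pulled a model on the remote instance and forwarded its default port (the hostname, model name, and prompt are placeholders, not anything specific from this thread):

```python
# Minimal sketch: query a remote Ollama instance over an SSH local port-forward,
# so the model API is never exposed to the public internet.
# Assumes a tunnel is already open, e.g.:
#   ssh -N -L 11434:localhost:11434 user@your-gpu-box
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # forwarded port, not a public endpoint

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3",  # whatever model you've pulled on the remote box
        "prompt": "Summarize the privacy trade-offs of cloud GPUs.",
        "stream": False,    # single JSON response instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

The trade-off is that your prompts still land on hardware you don't own, so the cloud provider is in your threat model; the tunnel only keeps the traffic off the open internet.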