https://warpcast.com/~/channel/ccarella
Chris Carella
@ccarella.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base but I want to augment it (RAG) a lot more. One custom GPT so far, around Product Design. Can access it via mobile when out of the home.
10 replies
9 recasts
77 reactions
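For anyone wanting to try a setup like this, here is a minimal sketch of RAG over an Obsidian vault using a local Ollama server. It assumes Ollama's default endpoint (http://localhost:11434) and that the llama2 model has already been pulled; the vault path, the three-note retrieval cutoff, and the prompt wording are illustrative choices, not details from the thread.

```python
# Minimal RAG sketch against a local Ollama server.
# Assumes Ollama is running at its default address and `ollama pull llama2`
# has been done. The vault path below is hypothetical.
import json
import math
import pathlib
import urllib.request

OLLAMA = "http://localhost:11434"
VAULT = pathlib.Path.home() / "Obsidian" / "Notes"  # hypothetical vault location


def ollama(endpoint: str, payload: dict) -> dict:
    """POST a JSON payload to the Ollama HTTP API and return the parsed reply."""
    req = urllib.request.Request(
        f"{OLLAMA}{endpoint}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint; llama2 doubles as the embedding model here.
    return ollama("/api/embeddings", {"model": "llama2", "prompt": text})["embedding"]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


# Embed every markdown note once (a real setup would cache these vectors).
index = [(p, embed(p.read_text())) for p in VAULT.glob("**/*.md")]

question = "What did I write about product design?"
q_vec = embed(question)

# Retrieve the three most similar notes and stuff them into the prompt.
top = sorted(index, key=lambda pair: cosine(q_vec, pair[1]), reverse=True)[:3]
context = "\n\n".join(p.read_text() for p, _ in top)

answer = ollama(
    "/api/generate",
    {
        "model": "llama2",
        "prompt": f"Context from my notes:\n{context}\n\nQuestion: {question}",
        "stream": False,
    },
)
print(answer["response"])
```

On the mobile-access point: one common approach (not confirmed by the thread) is to bind Ollama to the local network with the OLLAMA_HOST environment variable (e.g. OLLAMA_HOST=0.0.0.0 ollama serve) and reach it from outside the home over a VPN or tunnel such as Tailscale.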
Ben
@benersing
What's been the biggest learning in setting it up?
1 reply
0 recast
0 reaction
Chris Carella
@ccarella.eth
I'm bullish on whatever Apple decides to do. I kind of thought you couldn't compete with the cloud and that's just not true. You can totally run them locally and they are even faster that way.
1 reply
0 recast
0 reaction
Ben
@benersing
Interesting. Are you storing all your data locally as well?
1 reply
0 recast
0 reaction