https://warpcast.com/~/channel/ccarella

Chris Carella
@ccarella.eth
I am now running my own local LLM server at home via Ollama. Playing with models, but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base, but I want to augment it (RAG) a lot more. One custom GPT so far, around product design. Can access it via mobile when out of the home.
10 replies
9 recasts
77 reactions
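The setup described above (a local Ollama server answering prompts) can be sketched as a small Python client. This is a hedged example, not the poster's actual code: it assumes Ollama's default port 11434, its `/api/generate` REST endpoint, and a pulled `llama2` model.

```python
# Minimal client for a local Ollama server (default port 11434).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama2") -> urllib.request.Request:
    """Build a non-streaming generate request for the Ollama REST API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def ask(prompt: str) -> str:
    """Send the prompt to the local server and return the response text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Summarize my note on product design."))
```

With the standard-library `urllib`, nothing beyond a running Ollama instance is needed.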

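The RAG augmentation over an Obsidian vault that the cast mentions could look roughly like this: embed each Markdown note, rank notes by cosine similarity to the question, and prepend the top matches to the prompt. This is a sketch under assumptions, not the poster's implementation; it assumes Ollama's `/api/embeddings` endpoint and a local `llama2` model.

```python
# Hedged sketch of a minimal RAG loop over an Obsidian vault via Ollama.
import json
import math
import pathlib
import urllib.request

OLLAMA = "http://localhost:11434"

def _post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        f"{OLLAMA}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(text: str, model: str = "llama2") -> list[float]:
    """Get an embedding vector for a piece of text."""
    return _post("/api/embeddings", {"model": model, "prompt": text})["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def ask_with_notes(vault: str, question: str, top_k: int = 3) -> str:
    """Retrieve the most relevant notes and answer with them as context."""
    notes = [(p, p.read_text()) for p in pathlib.Path(vault).rglob("*.md")]
    q_vec = embed(question)
    ranked = sorted(notes, key=lambda n: cosine(q_vec, embed(n[1])), reverse=True)
    context = "\n\n".join(text for _, text in ranked[:top_k])
    prompt = f"Use these notes as context:\n{context}\n\nQuestion: {question}"
    return _post("/api/generate", {"model": "llama2", "prompt": prompt, "stream": False})["response"]
```

A real vault would want chunking and cached embeddings; this shows only the retrieve-then-generate shape.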
Jamie Dubs
@jamiew
how are you accessing it remotely? Love the idea of wiring it up to my Obsidian, especially if I can use it on the go
1 reply
0 recast
0 reaction

Chris Carella
@ccarella.eth
You can just host it on a public webserver, but I use NordVPN, which has a Meshnet feature where all your machines are on the same virtual network no matter where you are.
0 reply
0 recast
1 reaction
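The mesh-VPN approach described above might look like the following command fragment. It is a sketch under assumptions: `my-desktop.nord` is a hypothetical Meshnet device name, and `OLLAMA_HOST` is Ollama's environment variable for choosing its bind address (by default it listens only on localhost).

```shell
# On the home machine: bind Ollama to all interfaces so Meshnet peers can reach it.
OLLAMA_HOST=0.0.0.0 ollama serve

# On a laptop or phone on the same Meshnet: query the home server
# by its Meshnet hostname ("my-desktop.nord" is hypothetical).
curl http://my-desktop.nord:11434/api/generate \
  -d '{"model": "llama2", "prompt": "hello", "stream": false}'
```

Because the traffic stays inside the encrypted mesh, nothing has to be exposed on the public internet.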