ccarella pfp
ccarella
@ccarella.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base but I want to augment (RAG) it a lot more. One custom GPT so far, around Product Design. Can access via mobile when out of the home.
10 replies
3 recasts
93 reactions
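The RAG augmentation the cast mentions can be sketched minimally: retrieve the most relevant notes from an Obsidian-style vault of markdown files, then prepend them to the prompt before it reaches the model. The keyword-overlap scoring, function names, and vault layout below are illustrative assumptions, not ccarella's actual setup:

```python
# Naive keyword-overlap RAG sketch over a vault of markdown notes.
# Scoring scheme and all names here are illustrative, not a real pipeline.
from pathlib import Path

def load_notes(vault_dir):
    """Read every .md file in the vault into (name, text) pairs."""
    return [(p.stem, p.read_text(encoding="utf-8"))
            for p in Path(vault_dir).rglob("*.md")]

def score(query, text):
    """Crude relevance: count query words that appear in the note."""
    words = {w.lower() for w in query.split()}
    body = text.lower()
    return sum(1 for w in words if w in body)

def retrieve(query, notes, k=3):
    """Return up to k notes that share at least one word with the query."""
    ranked = sorted(notes, key=lambda n: score(query, n[1]), reverse=True)
    return [n for n in ranked[:k] if score(query, n[1]) > 0]

def build_prompt(query, notes):
    """Prepend retrieved notes as context before the user's question."""
    context = "\n\n".join(f"## {name}\n{text}"
                          for name, text in retrieve(query, notes))
    return f"Use these notes to answer.\n\n{context}\n\nQuestion: {query}"
```

The string `build_prompt` returns is what would actually be sent to the local model; a real setup would swap the keyword score for embedding similarity.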

chris 🤘🏻 pfp
chris 🤘🏻
@ckurdziel.eth
this is pretty awesome. is there a primer on how to set this up? and what kind of hardware do you need to do it?
1 reply
0 recast
1 reaction

chris 🤘🏻 pfp
chris 🤘🏻
@ckurdziel.eth
ok I have it up and running. pretty cool. but still curious whether you have it set up locally or via some sort of docker container and available on your network - the latter would be a cool way to do it but I don't have a lot of extra beefy hardware
2 replies
0 recast
0 reaction
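One way to get the "available on your network" setup chris asks about: Ollama exposes an HTTP API on port 11434, so once the server is bound to an address the LAN can reach (whether it runs bare or in a Docker container), any machine can POST to its `/api/generate` endpoint. A standard-library sketch, where the host IP and model name are assumptions:

```python
# Query an Ollama server elsewhere on the LAN via its /api/generate
# endpoint. The host address, model, and prompt are illustrative.
import json
import urllib.request

def build_payload(model, prompt):
    """Assemble the JSON body Ollama's generate endpoint expects."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask(host, model, prompt):
    """POST the prompt to the server and return the generated text."""
    req = urllib.request.Request(
        f"http://{host}:11434/api/generate",
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. ask("192.168.1.50", "llama2", "Summarize my product design notes.")
```

For phone access from outside the home, the same endpoint is typically reached over a VPN or tunnel rather than exposed directly to the internet.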