Noun 839
@noun839.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base, but I want to augment it (RAG) a lot more. One custom GPT so far, around Product Design. Can access it via mobile when out of the home.
10 replies
9 recasts
88 reactions
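For anyone curious what "running my own local LLM server" looks like in practice: Ollama exposes an HTTP API on localhost port 11434 by default. A minimal sketch of querying it from Python, assuming the llama2 model has already been pulled; the prompt is just a placeholder:

```python
import requests

# Ollama's default local endpoint; reachable from other devices on the
# LAN (e.g. a phone) if the server is bound beyond localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"

response = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama2",   # any locally pulled model tag works
        "prompt": "Summarize the key ideas behind RAG in two sentences.",
        "stream": False,     # return one JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```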
chris 🤘🏻
@ckurdziel.eth
this is pretty awesome. is there a primer on how to set this up? and what kind of hardware do you need to do it?
1 reply
0 recast
1 reaction
chris 🤘🏻
@ckurdziel.eth
ok I have it up and running. pretty cool. but still curious whether you have it set up locally or via some sort of docker container and available on your network - the latter would be a cool way to do it but I don't have a lot of extra beefy hardware
2 replies
0 recast
0 reaction
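On the "available on your network" point: Ollama binds to 127.0.0.1 by default, but it can be exposed to the LAN by setting the OLLAMA_HOST environment variable (e.g. to 0.0.0.0) before starting the server. A quick Python check from another machine on the network; the host IP below is a made-up placeholder for the laptop's LAN address:

```python
import requests

# Placeholder LAN address of the machine running Ollama; on that machine,
# start the server with OLLAMA_HOST=0.0.0.0 so it listens beyond localhost.
HOST = "http://192.168.1.50:11434"

# /api/tags lists the models the server has pulled locally.
models = requests.get(f"{HOST}/api/tags", timeout=5).json()
for model in models.get("models", []):
    print(model["name"])
```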
Noun 839
@noun839.eth
I'm running Ollama directly on the laptop, but Open WebUI is running in a Docker container. https://openwebui.com/
1 reply
0 recast
1 reaction
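A quick way to sanity-check that split setup (Ollama on the host, Open WebUI in Docker) is to probe both services from the host. Port 3000 is assumed here as the host port mapped to the Open WebUI container; your mapping may differ:

```python
import requests

# Assumed ports: Ollama's default 11434 on the host, and 3000 as a
# common host-side mapping for the Open WebUI container.
SERVICES = {
    "Ollama": "http://localhost:11434",
    "Open WebUI": "http://localhost:3000",
}

for name, url in SERVICES.items():
    try:
        status = requests.get(url, timeout=5).status_code
        print(f"{name}: HTTP {status}")
    except requests.ConnectionError:
        print(f"{name}: not reachable at {url}")
```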
chris 🤘🏻
@ckurdziel.eth
how is obsidian connected? is it an obsidian plugin that uses ollama or are you giving ollama access to the obsidian graph somehow? what are the most common use cases?
1 reply
0 recast
0 reaction
Noun 839
@noun839.eth
Currently an Obsidian plugin that uses ollama. Not sure what the best use cases will be; it's less about querying my short notes and more about taking notes in the LLM and having it output a summary to copy and paste into the note (i.e. meeting notes). Next I'll work on giving it access to the graph.
1 reply
0 recast
1 reaction
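The meeting-notes workflow described above is easy to approximate outside the plugin too. A rough sketch, assuming a local Ollama server with llama2 and a hypothetical path to a note in an Obsidian vault; the plugin's own internals may differ:

```python
from pathlib import Path

import requests

# Hypothetical path to a note in an Obsidian vault.
note = Path("~/Obsidian/Meetings/2024-01-15.md").expanduser().read_text()

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": f"Summarize these meeting notes as bullet points:\n\n{note}",
        "stream": False,
    },
    timeout=300,
)
response.raise_for_status()

# Paste this output back into the note, per the workflow described above.
print(response.json()["response"])
```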