Noun 839
@noun839.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base, but I want to augment it with RAG a lot more. One custom GPT so far, around Product Design. Can access it via mobile when out of the home.
10 replies
9 recasts
88 reactions
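For context on the kind of setup described above, here is a minimal sketch of querying a local Ollama server from a script. It assumes Ollama is running on its default port (11434) and that a model has already been pulled (e.g. `ollama pull llama2`); the function name and prompt are just illustrative.

```python
# Minimal sketch: send a prompt to a local Ollama server and read the reply.
# Assumes Ollama is listening on its default port and llama2 is pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama2") -> str:
    """Send a single non-streaming prompt to Ollama and return the response text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Summarize my note-taking workflow in one sentence."))
```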
Andy C
@ajchatham
Working on something similar, really liking Ollama. Why not pull down Llama3? My goal is to add RAG with links to my smart home devices, /dimo , an inventory of tools and food shopping lists, and a calendar of maintenance tasks. Would be good to compare notes & setups at some point
0 reply
0 recast
0 reaction
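Both casts mention layering RAG on top of an Obsidian vault. A rough sketch of one way to do that with Ollama alone: embed the markdown notes via the embeddings endpoint, pick the closest note to the question, and stuff it into the prompt. The vault path and the embedding model name ("nomic-embed-text") are assumptions, not details from the thread.

```python
# Sketch of a bare-bones RAG pass over an Obsidian vault using Ollama.
# Assumes `ollama pull nomic-embed-text` and `ollama pull llama2` have run;
# the vault path below is a placeholder.
import json
import math
import pathlib
import urllib.request

BASE = "http://localhost:11434"

def _post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint returns a single vector per prompt.
    return _post("/api/embeddings", {"model": "nomic-embed-text", "prompt": text})["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str, vault: str = "~/Obsidian") -> str:
    # Read every markdown note, embed a prefix of each, keep the best match.
    notes = [(p, p.read_text()) for p in pathlib.Path(vault).expanduser().rglob("*.md")]
    q_vec = embed(question)
    best_path, best_text = max(notes, key=lambda n: cosine(q_vec, embed(n[1][:2000])))
    prompt = (
        f"Using this note ({best_path.name}):\n{best_text[:2000]}\n\n"
        f"Answer the question: {question}"
    )
    return _post("/api/generate", {"model": "llama2", "prompt": prompt, "stream": False})["response"]
```

Re-embedding every note per query is obviously wasteful; a real setup would cache the vectors or use a small vector store, but the flow (embed, rank, stuff the context into the prompt) is the same.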