Noun 839
@noun839.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base, but I want to augment it (RAG) a lot more. One custom GPT so far, around Product Design. I can access it via mobile when out of the home.
10 replies
9 recasts
88 reactions
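A minimal sketch of the kind of setup described in the cast: retrieve relevant Obsidian notes, stuff them into a prompt, and send it to Ollama's local REST endpoint. The vault path, the keyword-overlap retrieval (a stand-in for real embedding-based RAG), and the function names here are illustrative assumptions, not the poster's actual setup; only the Ollama `/api/generate` endpoint and `llama2` model name come from Ollama itself.

```python
# Sketch of a naive RAG loop over an Obsidian vault + local Ollama server.
# Assumptions: Ollama running on its default port (11434), vault is a folder
# of .md files; keyword scoring is a placeholder for embedding search.
import json
import urllib.request
from pathlib import Path


def retrieve(vault: Path, query: str, k: int = 3) -> list[str]:
    """Rank notes by how many query words they contain (naive retrieval)."""
    words = set(query.lower().split())
    scored = []
    for note in vault.glob("**/*.md"):
        text = note.read_text(encoding="utf-8", errors="ignore")
        score = sum(w in text.lower() for w in words)
        if score:
            scored.append((score, text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:k]]


def build_prompt(context: list[str], question: str) -> str:
    """Concatenate retrieved notes into a grounded prompt."""
    joined = "\n---\n".join(context)
    return f"Answer using these notes:\n{joined}\n\nQuestion: {question}"


def ask_ollama(prompt: str, model: str = "llama2") -> str:
    """POST to Ollama's generate endpoint and return the completion text."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping the keyword scorer for an embedding index is the "augment (RAG) it a lot more" step the cast alludes to; the prompt-building and Ollama call stay the same.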
drewcoffman
@drewcoffman.eth
are you still doing this / using this?
1 reply
0 recast
0 reaction
Noun 839
@noun839.eth
No, it's cool, but it's not the best on mobile, and ultimately I had one LLM for my phone and another for my desktop, and it wasn't that convenient for an everyday thing.
0 reply
0 recast
1 reaction