ccarella pfp
ccarella
@ccarella.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base but I want to augment (RAG) it a lot more. One custom GPT so far, around Product Design. Can access via mobile when out of the home.
6 replies
8 recasts
78 reactions
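For anyone curious what "running via Ollama" looks like in practice, here's a minimal sketch of hitting a local Ollama server's HTTP API from Python. It assumes the default port (11434) and a pulled `llama2` model; the prompt is just an example.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False returns a single JSON object instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server, return the reply text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server):
#   ask("llama2", "Summarize my note on product design.")
```

Because everything stays on localhost, responses avoid network round-trips to a hosted API, which is likely part of the speed difference mentioned below.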

Taye 🎩🔵 👽⛏️ pfp
Taye 🎩🔵 👽⛏️
@casedup
What do you plan on doing with it? What made you do a homebrew?
1 reply
0 recast
2 reactions

ccarella pfp
ccarella
@ccarella.eth
At first, just because I could. Then when I did, I was blown away by how much faster it is. Then I played around and realized there is a lot more you can do with local documents, so I feel like I can train it perfectly with time. Bonus: it's all open source, even the models.
1 reply
0 recast
0 reaction
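The "do more with local documents" idea can be sketched as a naive retrieval step: pick the vault note that best matches the question and prepend it to the prompt before sending it to the local model. The word-overlap scoring and note dict here are illustrative placeholders; a real Obsidian RAG setup would chunk the vault and use embeddings instead.

```python
import re

def score(question: str, text: str) -> int:
    """Count how many distinct question words appear in the note."""
    words = set(re.findall(r"\w+", question.lower()))
    note_words = set(re.findall(r"\w+", text.lower()))
    return len(words & note_words)

def retrieve(question: str, notes: dict) -> str:
    """Return the note body with the highest overlap score."""
    return max(notes.values(), key=lambda t: score(question, t))

def build_prompt(question: str, notes: dict) -> str:
    """Prepend the best-matching note as context for the local model."""
    context = retrieve(question, notes)
    return f"Use this note as context:\n{context}\n\nQuestion: {question}"

# Usage: feed build_prompt(...) to the local Ollama model, e.g. as the
# "prompt" field of a /api/generate request.
notes = {
    "ollama.md": "ollama runs local models on my machine",
    "garden.md": "spring gardening tips",
}
prompt = build_prompt("how do I run ollama locally", notes)
```

Swapping word overlap for embedding similarity is the usual next step, but the shape of the pipeline (retrieve, then prepend) stays the same.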