Noun 839
@noun839.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base, but I want to augment it (RAG) a lot more. One custom GPT so far, around Product Design. Can access it via mobile when out of the home.
10 replies
9 recasts
88 reactions
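For readers curious about the setup described in the cast above, here is a minimal sketch of querying a local Ollama server over its HTTP API. The default port (11434), the `requests` dependency, and the `ask_local_llm` helper name are illustrative assumptions, not details from the thread.

```python
# Minimal sketch: query a local Ollama server over its HTTP API.
# Assumes Ollama is running on its default port (11434) and that
# `ollama pull llama2` has already been done; swap in whatever model is installed.
import requests

def ask_local_llm(prompt: str, model: str = "llama2") -> str:
    """Send a single prompt to the local Ollama /api/generate endpoint."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # Non-streaming responses return the full completion in the "response" field.
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize the key ideas of retrieval-augmented generation."))
```

The same endpoint is what a mobile client would hit when accessing the server away from home, typically through a VPN or tunnel into the home network.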
Taye 🎩🔵 👽⛏️
@casedup
What do you plan on doing with it? What made you go the homebrew route?
1 reply
0 recast
1 reaction
Noun 839
@noun839.eth
At first, just because I could. Then when I did, I was blown away by how much faster it is. Then I played around and realized there is a lot more you can do with local documents, so I feel like I can train it perfectly with time. Bonus: it's all open source, even the models.
1 reply
0 recast
0 reaction
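A rough idea of what "doing more with local documents" could look like: a naive RAG sketch that embeds Obsidian notes with a local Ollama embedding model and passes the closest matches to llama2 as context. The vault path, the nomic-embed-text model, and the helper names are assumptions for illustration, not details from the thread.

```python
# Naive RAG sketch over an Obsidian vault, assuming a local Ollama server
# with an embedding model (e.g. nomic-embed-text) and llama2 already pulled.
# Vault path and model names are illustrative.
from pathlib import Path
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Get an embedding vector for a piece of text from Ollama."""
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": model, "prompt": text}, timeout=60)
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def answer(question: str, vault: str = "~/Obsidian/Vault") -> str:
    # Embed every markdown note and keep the three most similar as context.
    notes = [(p, p.read_text(errors="ignore"))
             for p in Path(vault).expanduser().rglob("*.md")]
    q_vec = embed(question)
    ranked = sorted(notes,
                    key=lambda n: cosine(embed(n[1][:2000]), q_vec),
                    reverse=True)[:3]
    context = "\n\n".join(text for _, text in ranked)
    r = requests.post(f"{OLLAMA}/api/generate",
                      json={"model": "llama2", "stream": False,
                            "prompt": f"Context:\n{context}\n\nQuestion: {question}"},
                      timeout=120)
    r.raise_for_status()
    return r.json()["response"]

if __name__ == "__main__":
    print(answer("What have I written about product design?"))
```

In practice you would cache the note embeddings in a small vector store instead of re-embedding the whole vault on every question; the sketch above just shows the shape of the retrieval-then-generate loop.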
Taye 🎩🔵 👽⛏️
@casedup
Hmmm I'm intrigued. Have any links on getting started from the basics?
1 reply
0 recast
0 reaction
Noun 839
@noun839.eth
This is what hooked me. https://www.youtube.com/watch?v=Wjrdr0NU4Sk
1 reply
1 recast
1 reaction
Taye 🎩🔵 👽⛏️
@casedup
NetworkChuck does everything. Surprised he isn't on Warpcast. Thanks!
0 reply
0 recast
0 reaction