ccarella
@ccarella.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base but I want to augment (RAG) it a lot more. One custom GPT so far, around Product Design. Can access via mobile when out of the home.
6 replies
8 recasts
78 reactions
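A setup like the one described above can be queried programmatically: Ollama serves a REST API on port 11434 by default, with a `/api/generate` route that takes a model name and prompt. A minimal sketch (the helper names are my own, not from the post):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(prompt, model="llama2"):
    """Build the JSON body Ollama's /api/generate endpoint expects.

    stream=False asks for a single JSON response instead of
    a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask(prompt, model="llama2"):
    """Send a prompt to the local Ollama server and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a running Ollama server with the llama2 model pulled.
    print(ask("Summarize the key ideas of product design in one sentence."))
```

Wiring in the Obsidian vault for RAG would then be a matter of retrieving relevant notes and prepending them to the prompt before calling `ask`.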
ccarella
@ccarella.eth
Will hook up Stable Diffusion to it soon and will write a few bespoke apps.
0 reply
0 recast
2 reactions
Jamie Dubs
@jamiew
how are you accessing it remotely? Love idea of wiring it up to my Obsidian; especially if I can use on the go
1 reply
0 recast
3 reactions
Ottis Ots
@ottis
Impressive, ccarella! 50 $RARE
1 reply
0 recast
1 reaction
Taye
@casedup
What you plan on doing with it? What made you do a homebrew?
1 reply
0 recast
2 reactions
Ben
@benersing
What's been the biggest learning in setting it up?
1 reply
0 recast
1 reaction
chris
@ckurdziel.eth
this is pretty awesome. is there a primer on how to set this up? and what kind of hardware do you need to do it?
1 reply
0 recast
2 reactions