ccarella pfp
ccarella
@ccarella.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base but want to augment (RAG) it a lot more. One custom GPT so far around Product Design. Can access via mobile when out of the home.
6 replies
8 recasts
78 reactions
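The setup described above (a local Ollama server answering questions with Obsidian notes as context) can be sketched as a small client. The endpoint, default port 11434, and the `model`/`prompt`/`stream` fields are Ollama's documented generate API; the model name, the naive note-prepending "RAG" prompt, and the `build_prompt`/`ask` helpers are illustrative assumptions, not ccarella's actual code.

```python
# Minimal sketch of querying a local Ollama server with Obsidian notes
# as context. Assumes Ollama is running locally with llama2 pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_prompt(question: str, notes: list[str]) -> str:
    """Prepend retrieved notes to the question (naive RAG-style context)."""
    context = "\n\n".join(notes)
    return f"Use these notes to answer.\n\n{context}\n\nQuestion: {question}"

def ask(question: str, notes: list[str]) -> str:
    """Send a non-streaming generate request and return the model's reply."""
    payload = json.dumps({
        "model": "llama2",
        "prompt": build_prompt(question, notes),
        "stream": False,  # one JSON object back instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A fuller RAG setup would replace the "retrieved" notes list with an embedding search over the vault rather than passing notes in verbatim.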

ccarella pfp
ccarella
@ccarella.eth
Will hook up Stable diffusion to it soon and will write a few bespoke apps.
0 reply
0 recast
2 reactions

Jamie Dubs pfp
Jamie Dubs
@jamiew
how are you accessing it remotely? Love idea of wiring it up to my Obsidian; especially if I can use on the go
1 reply
0 recast
3 reactions

Ottis Ots pfp
Ottis Ots
@ottis
Impressive, ccarella! 50 $RARE 💎
1 reply
0 recast
1 reaction

Taye 🎩🔵 👽⛏️ pfp
Taye 🎩🔵 👽⛏️
@casedup
What do you plan on doing with it? What made you do a homebrew?
1 reply
0 recast
2 reactions

Ben 🟪 pfp
Ben 🟪
@benersing
What's been the biggest learning in setting it up?
1 reply
0 recast
1 reaction

chris 🎩 pfp
chris 🎩
@ckurdziel.eth
this is pretty awesome. is there a primer on how to set this up? and what kind of hardware do you need to do it?
1 reply
0 recast
2 reactions