ccarella pfp
ccarella
@ccarella.eth
I am now running my own local LLM server at home via Ollama. Playing with models but liking llama2 with all the AI safety features turned off. It's connected to my Obsidian knowledge base, but I want to augment it (RAG) a lot more. One custom GPT so far, around Product Design. Can access it via mobile when out of the home.
11 replies
3 recasts
64 reactions
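A minimal sketch of what a setup like this could look like: split an Obsidian vault into paragraph chunks, retrieve the most relevant ones for a question, and send the augmented prompt to a local Ollama server. The vault path, model name, chunking, and naive word-overlap scoring (a stand-in for real embeddings) are all assumptions, not the author's actual pipeline; only the `http://localhost:11434/api/generate` endpoint is Ollama's documented default.

```python
# Hypothetical RAG-over-Obsidian sketch for a local Ollama server.
# Chunking and scoring here are simplistic placeholders.
import json
import re
import urllib.request
from pathlib import Path

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def chunk_notes(vault_dir):
    """Split every markdown note in the vault into paragraph chunks."""
    chunks = []
    for path in Path(vault_dir).glob("**/*.md"):
        for para in re.split(r"\n\s*\n", path.read_text(encoding="utf-8")):
            if para.strip():
                chunks.append((path.name, para.strip()))
    return chunks

def retrieve(question, chunks, k=3):
    """Rank chunks by word overlap with the question (stand-in for embeddings)."""
    q_words = set(re.findall(r"\w+", question.lower()))
    def score(chunk):
        return len(q_words & set(re.findall(r"\w+", chunk[1].lower())))
    return sorted(chunks, key=score, reverse=True)[:k]

def build_prompt(question, context_chunks):
    """Prepend retrieved note chunks to the question as context."""
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in context_chunks)
    return f"Answer using these notes:\n\n{context}\n\nQuestion: {question}"

def ask(question, vault_dir, model="llama2"):
    """Send the augmented prompt to the local Ollama server."""
    prompt = build_prompt(question, retrieve(question, chunk_notes(vault_dir)))
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because Ollama listens on localhost by default, reaching it from a phone outside the home would additionally need something like a VPN or reverse proxy in front of it.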

ccarella pfp
ccarella
@ccarella.eth
Will hook up Stable Diffusion to it soon and will write a few bespoke apps.
0 reply
0 recast
2 reactions