Kyle Mathews
@kam
LLMs are starting to feel like a new foundational programming tool — like OSs, compilers, or image/video codecs. A basic industrial capability. I imagine soon there'll be Stable Diffusion-like OSS distributions that'll be tiny, highly optimized, and easy for applications to embed and ship fine-tunings with.

Boris Mann
@boris
Yeah, I'm looking at Dalai for running LLaMA and Alpaca on your local machine: https://cocktailpeanut.github.io/dalai/#/. I've got enough space to run even the largest models.
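If the README is still accurate, the Node API is just a streaming request call. Rough sketch from memory, so the `dalai` import, the `request({ model, prompt }, onToken)` shape, and the "7B" model string should all be double-checked against the docs:

```ts
// Sketch only: assumes Dalai's Node API matches the README example,
// new Dalai().request({ model, prompt }, onToken).
//
// One-time setup (shell), per the Dalai docs:
//   npx dalai alpaca install 7B   # download + build the 7B Alpaca weights
//   npx dalai serve               # optional: local web UI on localhost:3000

import Dalai from "dalai";

const dalai = new Dalai();

dalai.request(
  {
    model: "7B", // might be "alpaca.7B" or similar in newer versions
    prompt: "Summarize in one sentence: LLMs as an embeddable, OS-like capability.",
  },
  (token: string) => {
    // Tokens stream back as they're generated
    process.stdout.write(token);
  }
);
```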

Kyle Mathews
@kam
oh hmmm... maybe I'll try using that for my project. I've hooked it up to the OpenAI API, but I don't see why this wouldn't work instead.
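Roughly what the swap would look like: a thin complete() wrapper so the rest of the app doesn't care which backend it is. The OpenAI side is the plain REST completions endpoint; the Dalai side assumes the same request callback API as the sketch above, and I haven't checked how it signals end-of-stream, so that part is a placeholder. The USE_LOCAL_LLM flag is made up for the example:

```ts
// Sketch: one tiny completion interface, two interchangeable backends.

import Dalai from "dalai";

type Complete = (prompt: string) => Promise<string>;

// Hosted backend: OpenAI's completions endpoint.
const openaiComplete: Complete = async (prompt) => {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-davinci-003", prompt, max_tokens: 256 }),
  });
  const data = (await res.json()) as any;
  return data.choices[0].text;
};

// Local backend: accumulate Dalai's streamed tokens into one string.
const dalai = new Dalai();
const localComplete: Complete = (prompt) =>
  new Promise((resolve) => {
    let out = "";
    dalai.request({ model: "7B", prompt }, (token: string) => {
      out += token;
    });
    // Assumption: Dalai signals the end of generation somehow (end token,
    // second callback, etc.); resolve there in real code instead of this
    // crude placeholder cutoff.
    setTimeout(() => resolve(out), 30_000);
  });

// The rest of the app only sees `complete`, so the backend is one flag away.
const complete: Complete =
  process.env.USE_LOCAL_LLM === "1" ? localComplete : openaiComplete;
```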

Kyle Mathews
@kam
https://warpcast.com/kam/0xaba23a Planning to generate a bunch of unit tests anyway against my prompt pseudocode, so it'll be easy to compare :rubshands:
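Sketch of the comparison idea: run the same prompt spec against both backends and assert the contract rather than the exact text. openaiComplete / localComplete are the hypothetical helpers from the earlier sketch, and "./llm" is just a stand-in for wherever they end up living:

```ts
// Same prompt spec, two backends, one behavioral assertion each.

import { test } from "node:test";
import assert from "node:assert/strict";
import { openaiComplete, localComplete } from "./llm";

const sentimentPrompt = (review: string) =>
  `Reply with exactly one word, "positive" or "negative".\nReview: ${review}\nSentiment:`;

for (const [name, complete] of [
  ["openai", openaiComplete],
  ["local", localComplete],
] as const) {
  test(`${name}: sentiment prompt returns one of the allowed labels`, async () => {
    const out = await complete(sentimentPrompt("Loved it, would buy again."));
    // Don't compare raw strings across models; check the contract instead.
    assert.match(out.trim().toLowerCase(), /^(positive|negative)\b/);
  });
}
```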