Timi pfp
Timi
@timigod.eth
Increasingly capable local models (like Llama 3.2 1B & 3B) make me very optimistic about inference at the edge - mostly because of cost. However, I worry about how limited they’ll be if Apple continues to disallow any kind of meaningful background processing on iOS.
1 reply
0 recast
0 reaction
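For context on the background-processing constraint: the closest thing iOS offers today is BGProcessingTask, which only runs when the system decides to schedule it (often while the device is idle and charging) and can be terminated at any moment. A minimal sketch of that API, with a hypothetical task identifier and a placeholder for the inference work, looks roughly like this:

```swift
import BackgroundTasks

// Hypothetical identifier; it would also need to be listed in Info.plist
// under BGTaskSchedulerPermittedIdentifiers.
let generationTaskID = "com.example.podcast.generate"

// Call early, e.g. during app launch, before the app finishes launching.
func registerBackgroundGeneration() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: generationTaskID, using: nil) { task in
        let processingTask = task as! BGProcessingTask
        processingTask.expirationHandler = {
            // iOS can cut the task off at any point, so a long inference
            // job would need to checkpoint and resume across runs.
        }
        // Run a chunk of on-device inference here, then:
        processingTask.setTaskCompleted(success: true)
    }
}

func scheduleBackgroundGeneration() {
    let request = BGProcessingTaskRequest(identifier: generationTaskID)
    request.requiresNetworkConnectivity = false // local model, no network needed
    request.requiresExternalPower = true        // heavy compute; more likely to be granted
    try? BGTaskScheduler.shared.submit(request)
}
```

The catch is that the app never controls when (or whether) this runs, which is why it doesn't count as "meaningful" background processing for an interactive generation feature.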

Timi pfp
Timi
@timigod.eth
I’m playing around with a NotebookLM-style podcast generator knockoff, and even if there were a local TTS model lightweight enough, the user would still have to wait ~3-5 minutes without closing the app for a podcast to be generated.
2 replies
0 recast
0 reaction
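That ~3-5 minute wait is the crux: if the user backgrounds the app mid-job, iOS only grants a short grace period (commonly around 30 seconds on recent versions) via beginBackgroundTask before suspending it, which is nowhere near enough for a multi-minute generation. A rough sketch of what that looks like, assuming a hypothetical generatePodcast() routine standing in for the local TTS pipeline:

```swift
import UIKit

// Hypothetical stand-in for the actual ~3-5 min local TTS / generation pipeline.
func generatePodcast() {
    Thread.sleep(forTimeInterval: 180)
}

/// Keeps a long local generation job alive for as long as iOS allows.
final class PodcastGenerator {
    private var backgroundTask: UIBackgroundTaskIdentifier = .invalid

    func start() {
        // Request extra runtime in case the user backgrounds the app mid-job.
        backgroundTask = UIApplication.shared.beginBackgroundTask { [weak self] in
            // Called when the grace period expires: far short of the
            // ~3-5 min the job needs, so generation gets cut off here.
            self?.finish()
        }

        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            generatePodcast()
            DispatchQueue.main.async { self?.finish() }
        }
    }

    private func finish() {
        guard backgroundTask != .invalid else { return }
        UIApplication.shared.endBackgroundTask(backgroundTask)
        backgroundTask = .invalid
    }
}
```

So in practice the only reliable path is keeping the app in the foreground for the full run, which is exactly the UX problem described above.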

Timi pfp
Timi
@timigod.eth
The solution is probably just MUCH faster local models. Apple has never allowed meaningful background processing, and yet apps still do a lot of cool things that were previously considered too time-consuming.
1 reply
0 recast
0 reaction

Timi pfp
Timi
@timigod.eth
But they’ll always have an “unfair” advantage for their own offerings.
0 reply
0 recast
0 reaction