Timi pfp
Timi
@timigod.eth
Increasingly capable local models (like Llama 3.2 1B & 3B) make me very optimistic about inference at the edge - mostly because of cost. However, I worry about how limited they’ll be if Apple continues to disallow any kind of meaningful background processing on iOS.
1 reply
0 recast
0 reaction

Timi pfp
Timi
@timigod.eth
I’m playing around with a NotebookLM-style podcast generator knockoff, and even if there were a local TTS model lightweight enough, the user would still have to wait ~3-5 mins without closing the app for a podcast to be generated.
2 replies
0 recast
0 reaction

Alberto Ornaghi pfp
Alberto Ornaghi
@alor
Isn’t this an activity to be performed on laptops instead of phones?
1 reply
0 recast
0 reaction

Timi pfp
Timi
@timigod.eth
Why?
1 reply
0 recast
0 reaction

Alberto Ornaghi pfp
Alberto Ornaghi
@alor
Because it’s a CPU-intensive task, and the best place to do it is on laptops or desktops. Would you do video encoding on a phone?
1 reply
0 recast
0 reaction

Timi pfp
Timi
@timigod.eth
*GPU-intensive, you mean? I’m pretty sure all mobile video editing tools like iMovie, LumaFusion, etc. do some video encoding. Things always start out as “better on desktop” or “only possible on desktop,” but they eventually make it to mobile.
1 reply
0 recast
0 reaction