Timi
@timigod.eth
Increasingly capable local models (like Llama 3.2 1B & 3B) make me very optimistic about inference at the edge, mostly because of cost. However, I worry about how limited they’ll be if Apple continues to disallow any kind of meaningful background processing on iOS.
1 reply
0 recast
0 reaction
Timi
@timigod.eth
I’m playing around with a NotebookLM-style podcast generator knockoff, and even if there were a local TTS model lightweight enough, the user would still have to wait ~3-5 mins without closing the app for a podcast to be generated.
2 replies
0 recast
0 reaction
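(For context on the background-processing constraint: Apple’s sanctioned route for deferred work is the BackgroundTasks framework, and a minimal Swift sketch shows why it doesn’t fit a 3-5 minute TTS job. The task identifier `com.example.podcastgen` and the `runPodcastGeneration` function are hypothetical placeholders, not part of any real app; the point is that the system, not the app, decides when a BGProcessingTask runs and can expire it at any time.)

```swift
import BackgroundTasks

// Hypothetical identifier; it would also have to be listed in Info.plist
// under BGTaskSchedulerPermittedIdentifiers.
let podcastTaskID = "com.example.podcastgen"

func registerBackgroundWork() {
    // Must be called before the app finishes launching.
    BGTaskScheduler.shared.register(forTaskWithIdentifier: podcastTaskID, using: nil) { task in
        guard let task = task as? BGProcessingTask else { return }

        // The system can cut this short at any moment, so the pipeline has to
        // be able to checkpoint or abandon half-finished audio.
        var cancelled = false
        task.expirationHandler = { cancelled = true }

        let finished = runPodcastGeneration(shouldCancel: { cancelled })
        task.setTaskCompleted(success: finished)
    }
}

func schedulePodcastGeneration() {
    let request = BGProcessingTaskRequest(identifier: podcastTaskID)
    request.requiresNetworkConnectivity = false   // fully local model
    request.requiresExternalPower = false
    // iOS decides when (or whether) this actually runs -- typically when the
    // device is idle, often while charging overnight. There is no way to say
    // "run this 3-5 minute job in the background right after the user leaves."
    try? BGTaskScheduler.shared.submit(request)
}

// Stub for illustration only: run the local TTS model,
// checking shouldCancel() between chunks of audio.
func runPodcastGeneration(shouldCancel: () -> Bool) -> Bool {
    return !shouldCancel()
}
```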
Alberto Ornaghi
@alor
Isn’t this an activity better suited to laptops than phones?
1 reply
0 recast
0 reaction
Timi
@timigod.eth
Why?
1 reply
0 recast
0 reaction
Alberto Ornaghi
@alor
Because it’s a CPU-intensive task, and the best place to do it is on a laptop or desktop. Would you do video encoding on a phone?
1 reply
0 recast
0 reaction
Timi
@timigod.eth
*GPU-intensive, you mean? I’m pretty sure all mobile video editing tools like iMovie, LumaFusion, etc. do some video encoding. Things always start out as “better/only possible on desktop”, but they eventually make it to mobile.
1 reply
0 recast
0 reaction
Alberto Ornaghi
@alor
GPU or CPU, it’s the same, since phones have a single SoC… iMovie on mobile is used to create short videos that are seconds long. I don’t think anyone considers a phone the go-to device for encoding an hour-long video. You need power for those tasks, and mobile OSes are optimized for battery life. That’s why you don’t have background tasks that would drain it.
1 reply
0 recast
0 reaction