(Feel free to correct me if I am wrong.) My main gripe against mobile ML frameworks (Android too) is that they require each app to embed its own copy of the ML model (as opposed to the OS storing the model like a shared library).

People with limited storage on low-end devices don't have enough space to store the apps.




CoreML has various models already built-in, although as black boxes that accomplish some task like OCR or rectangle detection. There's also a "feature print" model, which I believe is intended to provide hard-coded features for simple ML tasks. In either case I strongly suspect that when you use them, they're not being embedded in the app.
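
For what it's worth, here's a minimal sketch of that with one of the built-in Vision requests (OCR); the model ships with the OS, so nothing model-related gets bundled in the app. Assumes iOS 13+ and that `cgImage` is an image you already have:

    import Vision
    import CoreGraphics

    // Minimal sketch: Vision's built-in text recognition runs against an
    // OS-provided model, so the app does not embed any model file.
    func recognizeText(in cgImage: CGImage) throws -> [String] {
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try handler.perform([request])

        // Each observation carries ranked candidates; keep the best string.
        return (request.results ?? []).compactMap {
            $0.topCandidates(1).first?.string
        }
    }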

Another thing to consider is that you don't have to embed the model in your app; at least in CoreML you can download (and update) the model weights over the network.
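
A rough sketch of that pattern (the remote URL and file name are placeholders, and error handling is kept minimal): download the .mlmodel at runtime, compile it on-device, and load the compiled bundle.

    import CoreML
    import Foundation

    // Rough sketch: fetch model weights over the network instead of
    // bundling them, then compile and load the model on-device.
    func fetchAndLoadModel(from remoteURL: URL,
                           completion: @escaping (MLModel?) -> Void) {
        URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, _ in
            guard let tempURL = tempURL else { return completion(nil) }
            do {
                // Move the download somewhere permanent with a .mlmodel name...
                let docs = FileManager.default.urls(for: .documentDirectory,
                                                    in: .userDomainMask)[0]
                let modelURL = docs.appendingPathComponent("Downloaded.mlmodel")
                try? FileManager.default.removeItem(at: modelURL)
                try FileManager.default.moveItem(at: tempURL, to: modelURL)

                // ...then compile and load it; the compiled .mlmodelc can be
                // cached so this only happens on first launch or on updates.
                let compiledURL = try MLModel.compileModel(at: modelURL)
                completion(try MLModel(contentsOf: compiledURL))
            } catch {
                completion(nil)
            }
        }.resume()
    }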


People have limited storage (think a 64GB iPhone X or iPhone SE) and unreliable internet connections. Downloading five 200MB models to perform five layers of processing (OCR, background removal, object detection, etc.) would take hours and consume too much cellular data.

Sending a 10kB image to the cloud for processing is much faster and more user-friendly.


On Android, you can choose to include the ML model dependency bundled into your app at compile time, or have it shipped through Google Play Services, which saves many megabytes. The drawback: if the device doesn't have Play Services, nothing works. Also, on first download it takes a few seconds before it works.



