
ML implies lots of training data. Training gets done centrally because it has to happen over the data in aggregate.

You don't do low-latency training of a model and then use that same model in real time.

And in your (admittedly small-scale) example, 12 items of data an hour is not exactly a high data rate, and not enough to justify racks of machines.




You're correct: training is centralized, but the devices that use the model to function operate at the edge.


Of course, that's how ML works: central training of a model, with use of that model at the edges.

So how often do you distribute a new model for controlling fries?
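A minimal sketch of the split being described, for the sake of concreteness: the edge device only periodically pulls a new model artifact published by the central training pipeline, and all real-time decisions are made locally against the last downloaded copy. The URL, file path, input name, and update interval below are made-up placeholders, not anything from this thread.

    import time
    import urllib.request
    import onnxruntime as ort  # assumes the centrally trained model is exported to ONNX

    MODEL_URL = "https://models.example.com/fryer/latest.onnx"  # hypothetical endpoint
    LOCAL_PATH = "model.onnx"
    UPDATE_INTERVAL_S = 24 * 3600  # e.g. check for a newly published model once a day

    def pull_latest_model():
        # Training happens centrally; the edge device only downloads the
        # resulting artifact when a new one is published.
        urllib.request.urlretrieve(MODEL_URL, LOCAL_PATH)
        return ort.InferenceSession(LOCAL_PATH)

    def control_step(session, sensor_readings):
        # Real-time control runs locally against the downloaded model;
        # no round trip to the training cluster per decision.
        # "temps" is a hypothetical input name for this sketch.
        return session.run(None, {"temps": sensor_readings})

    def control_loop():
        session = pull_latest_model()
        last_update = time.time()
        while True:
            if time.time() - last_update > UPDATE_INTERVAL_S:
                session = pull_latest_model()
                last_update = time.time()
            # ... read sensors, call control_step(), actuate ...
            time.sleep(1.0)

How often the model actually gets refreshed is then just a deployment-policy question, which is what the question above is getting at.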



