I've worked in a fast food restaurant and one of the key pieces of information we used was something called a "drop chart".

This was basically a printed-out Excel sheet with the predicted amounts of food to keep ready to serve.

e.g. between 5pm and 6pm, have 12 chicken breasts already cooked. People follow these charts to know how much stock to pre-cook before each hour, basically.

These predictions are usually based on the previous year's sales for that same day/time, plus growth, plus whatever other inputs feed the sheet.
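
Roughly this kind of arithmetic, if you want it spelled out (the 5% growth factor and the data layout are made up just to illustrate):

    # Rough sketch of a drop-chart style forecast; the growth factor and
    # data layout are invented for illustration.
    last_year_sales = {("saturday", 17): 11, ("saturday", 18): 14}  # (day, hour) -> units sold
    GROWTH = 1.05  # assumed year-over-year growth

    def drop_chart_quantity(day, hour):
        base = last_year_sales.get((day, hour), 0)
        return round(base * GROWTH)

    print(drop_chart_quantity("saturday", 17))  # -> 12, i.e. "12 chicken breasts ready for 5pm-6pm"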

It sounds like what they're trying to do is run ML models and make really fast predictions in real time to react to surges of customers. To me this sounds a lot smarter than the approach I worked with.

In my experience, when the numbers were off you either generated a lot of waste product or made customers wait a really long time.

I imagine the inputs to the ML model are probably something like real-time sales data plus whatever else.

I think you'd want edge compute here not just for latency but for availability.

You'd want to both run these models fast and make sure that even if the restaurant's crappy internet connection dies, you can keep providing these numbers.
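
Something along these lines is what I mean; the endpoint and function names are pure stand-ins for whatever the real system does:

    # Prefer a fresh forecast from the central service, but fall back to a
    # locally cached model if the restaurant's connection is down.
    # Everything named here is hypothetical.
    import json
    import urllib.request

    def forecast_from_central(hour):
        # Placeholder endpoint; a real system would have its own API.
        url = f"http://central.example/forecast?hour={hour}"
        with urllib.request.urlopen(url, timeout=2) as resp:
            return json.load(resp)["chicken_breasts"]

    def forecast_from_edge(recent_sales):
        # Dumb stand-in for an on-premise model: project the last hour forward.
        return round(sum(recent_sales) / max(len(recent_sales), 1))

    def units_to_cook(hour, recent_sales):
        try:
            return forecast_from_central(hour)
        except OSError:
            # Connection down: keep serving numbers from the edge.
            return forecast_from_edge(recent_sales)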

ML implies lots of training data. The training gets done centrally, as it needs to be done in aggregate.

You don't train a model at low latency and use that same model in real time.

And in your example (admittedly small scale), 12 data points an hour is not exactly a high data rate, not enough to justify racks of machines.
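
The split being described looks roughly like this (scikit-learn and the toy numbers are purely illustrative, not a claim about what they actually run):

    # Fit a model centrally on aggregated history, serialize it, and only
    # run predict() on the per-restaurant boxes.
    import pickle
    from sklearn.linear_model import LinearRegression

    # Central side: train on aggregated historical sales.
    X = [[17, 5], [18, 5], [17, 6]]   # [hour_of_day, day_of_week]
    y = [12, 15, 9]                   # units sold in that hour
    model = LinearRegression().fit(X, y)
    artifact = pickle.dumps(model)    # this blob is what gets shipped to stores

    # Edge side: load the shipped model and only do inference.
    edge_model = pickle.loads(artifact)
    print(edge_model.predict([[18, 6]]))  # cheap; no training data needed on site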


You’re correct: training is centralized, but the devices that use the resulting model are operated at the edge.


Of course, that's how ML works: central training of a model, with use of that model at the edges.

So how often do you distribute a new model for controlling fries?
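
However often it happens, the edge side is presumably just a periodic check for a newer version, something like this made-up sketch:

    # Hypothetical model-distribution check; the endpoint and version
    # scheme are invented.
    import urllib.request

    CURRENT_VERSION = "2024-01-01"

    def maybe_update_model():
        global CURRENT_VERSION
        try:
            url = "http://central.example/model/latest_version"
            with urllib.request.urlopen(url, timeout=5) as resp:
                latest = resp.read().decode().strip()
            if latest != CURRENT_VERSION:
                # Download and swap in the new model artifact here.
                CURRENT_VERSION = latest
        except OSError:
            pass  # Offline: keep using the model we already have.

    # Run maybe_update_model() from a daily scheduler/cron job on the edge box.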