Hacker News

Can anyone explain what the actual problem is they're solving using 'edge computing'?

I get the feeling this is a completely unnecessary use of technology.




I've worked in a fast food restaurant and one of the key pieces of information we used was something called a "drop chart".

This was basically a printed out excel sheet with predicted amounts of food to keep ready to serve.

e.g. between 5pm and 6pm, have 12 chicken breasts already cooked. People follow these charts to know how much stock to pre-cook each hour.

These predictions are usually based on the previous year's sales for that same day/time + growth + whatever other inputs feed the sheet.
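The drop-chart math is simple enough to sketch. This is a made-up illustration (function name, growth factor, and quantities are all invented, not from any real restaurant system):

```python
import math

# Hypothetical sketch of a "drop chart" calculation: units to have
# pre-cooked for each hour, based on last year's sales for the same
# day/hour, scaled by a growth factor. All figures are illustrative.

def drop_chart(last_year_sales_by_hour, growth=1.05):
    """Return units to have ready per hour, rounded up so the plan
    never under-cooks relative to the prediction."""
    return {hour: math.ceil(units * growth)
            for hour, units in last_year_sales_by_hour.items()}

chart = drop_chart({"5pm-6pm": 12, "6pm-7pm": 9}, growth=1.10)
# → {"5pm-6pm": 14, "6pm-7pm": 10}
```

The whole "model" is one multiply per hour bucket, which is part of why a printed spreadsheet was good enough for years.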

It sounds like what they're trying to do is run ML models and make really fast predictions in real time to react to surges of customers. To me this sounds a lot smarter than the approach I worked with.

In my experience, when the numbers were off you either generated a lot of waste or made customers wait a really long time.

I imagine the inputs to the ML model are probably something like real time sales data + whatever else.

I think you'd want edge compute here not just for latency but for availability.

You'd want to both run these models fast and make sure that even if the restaurant's crappy internet connection dies, you can keep providing these numbers.


ML implies lots of training data. That gets done centrally as it would need to be done in aggregate.

You don't do low-latency training of a model and use that same model in real time.

And in your example (admittedly small scale) 12 items of data an hour is not exactly high-data-rate, not enough to justify racks of machines.


You’re correct: training is centralized, but the devices that use the model operate at the edge.


Of course, that's how ML works: central training of a model, with use of that model at the edges.

So how often do you distribute a new model for controlling fries?


Here's the abbreviated reason:

Internet connectivity for restaurants stinks.

So we put our compute at the edge.

Trust me... as an SRE I would much prefer we shove it in the cloud and call it a day, but alas... no such luck.


Yeah, but what compute? What are they computing?

The driving need for edge computing is low latency. There just isn't the need for it in a restaurant.

I do hope they're not doing real-time control of kitchen equipment via something in a container; containers in Linux are nowhere near deterministic enough.

This strikes me as someone who is familiar with only web technologies but unfamiliar with control systems attempting to create a control system.... completely inappropriate tools for the job.


Could you define what is not deterministic about containerized computing?

Examples of things we are controlling are timers for food, cameras that can recognize and track food items, drying machines that will automatically trigger and fry fries.


Oh! took me a second to realize it was a typo... FRYing machines, not DRYing machines ;-)


Linux in general is not a real-time system, in that you cannot get deterministic response times from it.

For example, I'm guessing it doesn't matter if one of your timers is a second late due to Linux deciding to swap a process out or prioritise some upgrade check, but imagine if your time constraints were a lot tighter.

Similarly if you're attempting to track food on a conveyor belt and Linux decides to prioritise the file indexer on your filesystem... oops, a lettuce has been thrown on the floor.

Real-time operating systems exist for a reason.
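The lack of guarantees is easy to demonstrate empirically. A rough sketch (mine, not from the thread): repeatedly request a fixed sleep and measure how far past the deadline each wakeup lands. On a stock kernel the average overshoot usually looks fine, but the worst case is what matters, and that is precisely the number an RTOS bounds:

```python
import time

# Illustrative only: request a fixed 5 ms sleep repeatedly and measure
# the wakeup overshoot. On a non-real-time kernel the average is often
# small, but nothing bounds the maximum.

def measure_jitter(period_s=0.005, iterations=50):
    overshoots = []
    for _ in range(iterations):
        start = time.monotonic()
        time.sleep(period_s)
        overshoots.append(time.monotonic() - start - period_s)
    return max(overshoots), sum(overshoots) / len(overshoots)

worst, avg = measure_jitter()  # worst case is what an RTOS would bound
```

Run this while the machine is under load (compiles, file indexing) and `worst` can jump by orders of magnitude while `avg` barely moves.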


Our tolerance requirements for synchronicity are broad enough that we can tolerate blips like this... at the end of the day we are automating away some simple human interactions (i.e. fry the fries or track the food); we aren’t performing surgery with these systems.

So far we’ve been satisfied with Linux (Ubuntu 18.04 to be exact) and its overall capabilities.

Also, don’t forget that cost at scale is a big factor... doing things perfectly but expensively is not profitable at 2k locations.


The point about real-time systems is not your tolerance of 'blips', it's about analysis of your worst-case timing requirements. This is what real-time control systems are all about.

Sure, you may be able to tolerate a one-second 'blip'... how about ten seconds? A minute? Do you know for sure how long Linux may put your process on hold for? Linux provides no guarantees at all.

It seems to me that what you are really creating is a traditional control system with web technologies and using the hype-of-the-day (i.e. edge computing) to justify it... with the corresponding increase in overhead and cycles.

This does not seem like a good solution to me; the only novelty here is the inappropriate use of technologies, i.e. you've managed to dig a tunnel with a spoon.


Seems like a very appropriate use of technology to me?

If they can get "good enough" results using Linux and technologies that the average developer is more likely to know, is it worth the investment in running an RTOS? Especially given that running an RTOS will make any higher-level app integration more difficult?

The benefit of running with Linux outweighs the risk of "lettuce on the floor", imo. Yes, there's probably overhead in terms of clock cycles, but there's also probably dramatically less overhead in terms of developer resources... and when you're running off commodity hardware you can guess which overhead will cost more!


It would also be 'cheaper' to build a bridge using Lego...

Correctness of the solution is important, otherwise we're just playing at being engineers.


To quote you:

> At some point, you will realise that your job is not to create an idol to the software-engineering gods but rather to have a working product.

> Does this product work? if so, its doing its job.

https://news.ycombinator.com/item?id=17824782


Have you worked with k8s yet? A lot of the things you bring up are addressed if k8s is used properly.


K8s has nothing to say about schedulability or real-time performance; it's just not what it does. K8s is simply a way to coordinate containers, and has nothing to do with scheduling processes at this level.

Something like RTLinux however, would. (https://en.wikipedia.org/wiki/RTLinux)


Shameless plug but...

They could be using VeloCloud and many of those internet connectivity problems would go away: multiple internet links with cellular, along with dynamic multipath packet optimization.


"Edge" computing seems like a term invented by people who grew up with ubiquitous cloud computing. Go back in time 10 years or so, and all this automation would have been onsite by default.

You cannot seriously tell me that creating a predictive model for cooking french fries requires a lot of computing power.


Exactly.

And the results would have been no different.

This is just hype-driven development (HDD).


They said in the article without getting super-specific that they have low-latency / high-throughput / time sensitive data processing needs in each restaurant. It sounds like they have a whole bunch of sensors tracking their inventory and food preparation pipeline. Other comments here link to other blog posts by them with more details about french fry done-ness deep learning models, for example (wacky but cool).

If you had to go across the internet to the cloud at such a high rate from each restaurant, this would probably come with pretty high failure rates and levels of flakiness. It makes sense to me to process the local data firehose in-house and keep the rest of the common high-level stuff in the cloud.


I suppose I'm struggling to see how much of a requirement for low-latency data processing exists in a restaurant.

There just isn't that much data there unless they're attempting to do real-time control of equipment (in which case, just use a regular control system).

I can imagine a requirement for logging potentially, but that's not a low-latency requirement, and the use case for edge computing is exactly that: low latency.

Still not seeing anything here but a dramatic overuse of technology I'm afraid.


I think you are right that this blog post doesn't lay out a specific use case to justify this setup in a restaurant; that does not appear to be the goal of the post. An exploration of problems and requirements is probably what you're looking for.

They did drop a few hints, though. They are just hints, and going by what's in the blog post, a bit of imagination is required to fill in some of the gaps. However, if you do that, I can envision a few possible needs for low-latency edge computing that are interesting and forward-looking:

- Kitchen automation. Sensors that monitor food being cooked, and help implement a pipeline to ensure quality is consistent. (example here is that neural net monitoring the staleness of fries).

- Inventory automation. Monitor and automatically restock items. Track purchases and ingredient usage in real time.

- Data analytics. Collect detailed time-series data about food quality, facilities maintenance, foot traffic, noise levels, security, etc.

These aren't standard restaurant problems. However, it's an interesting approach to scaling data analysis and automation in the physical world. All of these use cases assume the need for "real-time" data, and they assume that such a thing has business value (which is where I think your critique is coming from).

I'm not affiliated with Chick-fil-A, so I don't know how correct any of my speculation is, but that's my takeaway from reading the article.


It’s more about network outages than latency specifically... although in several remote locations latency is permanently high due to slow providers.


Credit card transactions, mobile orders, timer synchronization, order receiving (tablets etc), iot devices (cameras, cooking devices) and other things planned for the future.


OK, let's go through that list:

- Credit-card transactions don't have low-latency requirements; it's 'nice' if they're quick, same as everything else. The bottleneck here is entirely dependent on your ISP, though. No 'edge computing' here.

- Mobile orders. This will necessarily go through to a central server. This is traditional client-server stuff. Again, no 'edge compute'.

- Order receiving. Simple local data entry. Again, no 'edge-compute'.

- IoT devices. Hopefully these have local control systems without the server being in the loop. Control systems are not 'edge-compute' either.

'Edge-compute' is generation of knowledge at the edge rather than shipping raw data to a central server. This reduces required bandwidth.

What in your system takes a high rate of data and generates a low rate of data for transfer to a server for further use? I see no analysis of raw data into a more processed form; this is simply traditional data entry and CRUD activities.
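To make that distinction concrete, here is an invented example of "generating knowledge at the edge" (the sensor, data rates, and format are all hypothetical, not from the thread): collapse a high-rate raw stream into a low-rate summary locally, and ship only the summary to the cloud.

```python
from collections import defaultdict

# Invented example of edge-side knowledge generation: collapse
# per-second fryer temperature readings into one summary record per
# minute before anything leaves the restaurant.

def summarize(readings):
    """readings: iterable of (timestamp_seconds, temperature).
    Returns (minute_start, min, max, mean) per minute."""
    by_minute = defaultdict(list)
    for ts, temp in readings:
        by_minute[ts // 60].append(temp)
    return [(m * 60, min(v), max(v), sum(v) / len(v))
            for m, v in sorted(by_minute.items())]

raw = [(t, 170.0 + (t % 3)) for t in range(120)]  # 120 raw readings
summary = summarize(raw)                          # 2 summary records
```

Two minutes of per-second data (120 readings) become two records: a 60x reduction in what crosses the flaky WAN link, which is the bandwidth argument for edge compute.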


But again, latency of what?

Requirement for low-latency implies a use of data that is time-sensitive.

What time-sensitive data is there in a single chicken restaurant??


>In a recent post, we shared about how we do bare metal clustering for Kubernetes on-the-fly at the Edge in our restaurants. One of the most common (and best) questions we got was “but why?”. In the post we will answer that question. [emphasis mine]

Luckily there's a whole article on that topic. I try to avoid "Did you read the article?" comments, but the first three sentences in the article tell you that this is the question they're answering.


No, there is some hand-waving about "kitchen equipment" and "low latency".

No business case for avoiding more traditional architecture.


This is the relevant pair of paragraphs from the article:

>As a simple example, imagine a forecasting model that attempts to predict how many Waffle Fries (or replace with your favorite Chick-fil-A product) should be cooked over every minute of the day. The forecast is created by an analytics process running in the cloud that uses transaction-level sales data from many restaurants. This forecast can most certainly be produced with a little work. Unfortunately, it is not accurate enough to actually drive food production. Sales in Chick-fil-A restaurants are prone to many traffic spikes and are significantly affected by local events (traffic, sports, weather, etc.).

>However, if we were to collect data from our point-of-sale system’s keystrokes in real-time to understand current demand, add data from the fryers about work in progress inventory, and then micro-adjust the initial forecast in-restaurant, we would be able to get a much more accurate picture of what we should cook at any given moment. This data can then be used to give a much more intelligent display to a restaurant team member that is responsible for cooking fries (for example), or perhaps to drive cooking automation in the future.
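The micro-adjustment those paragraphs describe can be sketched as a simple blend of the cloud baseline with locally observed demand. To be clear, the weight, numbers, and function below are my invention for illustration, not Chick-fil-A's actual model:

```python
# Toy version of the in-restaurant micro-adjustment: blend a
# cloud-produced per-minute forecast with demand observed locally over
# the last few minutes. The 0.5 weight is a made-up tuning knob.

def adjust_forecast(cloud_forecast, recent_observed, weight=0.5):
    observed_rate = sum(recent_observed) / len(recent_observed)
    return (1 - weight) * cloud_forecast + weight * observed_rate

# Cloud predicts 10 fry orders this minute, but POS keystrokes over the
# last five minutes show a local spike (traffic, sports, weather...).
adjusted = adjust_forecast(10.0, [14, 16, 15, 17, 18])
# → 13.0
```

The point of doing this in-restaurant is that the blend keeps working on stale cloud data when the WAN link drops, while still reacting to the local spike within minutes.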


Hey this is Brian and I wrote the article. Personally I don't love "hand waving" so I'd love to answer your question. It's a personal mission now :)

Can you help me understand what you mean by "more traditional architecture"?

I'd be happy to share more about why we chose k8s vs alternatives.


Hey Brian, did you write the article? ;-)

I've enjoyed reading (all) your comments here. Just wanted to chime in and say I have a great appreciation for what CFA does and have had great experiences with the app and in general the entire ordering process at CFA. I've been, in fact, quite impressed. Nice to read about the stuff in the background that helps make that happen.

Doesn't hurt that every employee I've interacted with at CFA has been courteous, kind, and respectful.


Why pretend to be making a disinterested comment about how "kind and courteous" employees are (which is totally irrelevant) when you're actually motivated by sympathy to the company on the basis of their regressive, hate-fueled ideology? This kind of manipulation doesn't speak well of your fellow extremists.


I'd heard a few years ago about a random company called Chick-fil-A and that their founders were Christians and one of them said some things defending marriage that stirred up controversy. Then promptly forgot about them.

Then a couple years ago, one showed up near my work, and I heard they had good chicken. The first thing I noticed about CFA was how their employees acted. I was impressed! Then, I downloaded and installed the app and had a fantastic experience. I was impressed! And their food is great!

Certainly, I agree with the ideology of their founders. However, that is independent of the fact that I've been impressed! When I go to CFA, I wonder HOW they get me through the line so fast. I appreciate how they've trained their employees in good old-fashioned southern hospitality. As far as I've seen, at this location, they hire friendly people who care about taking care of the customer.

So, back on topic... how DO they get me through the line so fast?


We have frequent internet outages and still need to run compute loads at the edge.


> compute loads

What are you computing? This is what everyone is wondering.


Hey this is Brian and I wrote the article. Great question.

1) MQTT -- we have a lot of cases where we want to share data between physical or "software" devices in-restaurant, without internet connections. MQTT is our primary channel for sharing messages between things, and for collecting data in general to be exfiltrated to the cloud.

2) "Brain" apps -- these are "smart" applications that collect data from MQTT topics and make decisions about what should be done for a given restaurant process. Today, the reality is we do not automatically cook anything. We do, however, have some restaurants that have more intelligent screens to help them know what to cook at any given moment based off of forecasting models that use the data we collect from many different things. These generally run at the edge to preserve their function in cases where there is high latency or bad WAN connections.

3) In the past we had a POS Server in the restaurant. It was a single point of failure for things like drive thru order taking with iPads and mobile orders. The move to K8s lets us run a microservice that can intake mobile orders on a more resilient infrastructure. To be clear, we have not made this shift just yet, but its something we are working on that will ultimately be transparent to customers but that will make their ordering experience more reliable. We have many cases like this.
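For readers unfamiliar with MQTT (item 1): it's a lightweight pub/sub protocol where subscribers match topics using `+` (exactly one level) and `#` (everything below) wildcards. A minimal from-scratch sketch of that matching logic, with invented topic names (a real client such as paho-mqtt implements this for you):

```python
# Minimal sketch of MQTT topic-filter matching; topic names invented.

def topic_matches(filter_, topic):
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # multi-level: matches the rest
            return True
        if i >= len(t_parts):
            return False
        if f not in ("+", t_parts[i]):    # '+' matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

one_fryer = topic_matches("kitchen/fryer/+/temp", "kitchen/fryer/3/temp")  # True
whole_kitchen = topic_matches("kitchen/#", "kitchen/fryer/3/temp")         # True
```

This wildcard scheme is why a broker-per-restaurant works well here: in-restaurant apps subscribe to broad filters like `kitchen/#`, while a cloud exfiltration process can bridge selected topics upstream when the link is up.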

Overall, our workloads aren't "heavy" which is why we did an array of three moderately sized commodity compute devices. We wanted a smart, sensible ROI on the hardware. We do have enough business-important apps that need to run in a low-latency high-availability environment to push us out of just cloud (internet dependent) and to the Edge. In the future, maybe this can change and we can use cloud only. It seems Edge is an industry trend though, with many workloads moving towards their consumers (a lot of gaming, but a lot of retail as well).

Great question and I hope this helped -- if not just let me know.


Good answer, seems we finally got to the meat of the product (no pun intended).

So the current workload doesn't do any of the fancy stuff but is just typical POS stuff with a 'smart' display for orders driven by a process monitoring a few MQTT sources.

You've just put in enough beef to allow for future expansion.

I can honestly say that answer provides more information than the whole original article.

Thank you.


There are no real edge-compute loads here, just traditional CRUD data-entry-type systems.

Riding the hype-wave methinks....


IoT device control, such as timers for food and cameras tracking food (to name a few). These also output streams of data that are interesting to us, such that we want to exfiltrate them up to the cloud.

I’m not a fan of hype... k8s is one of the few hyped technologies that has delivered on its promises.



