
On a related note, Chick-fil-A is using deep learning models for object detection with MXNet to track how long fries have been waiting: https://www.youtube.com/watch?v=3Uuq_cX8b1M
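
The interesting part beyond the detector itself is the bookkeeping: remember when each tray first appeared in the camera's view and report how old it is. A rough sketch of that dwell-time logic in Python (the box format, the IoU-matching threshold, and the class name are my own assumptions, not anything shown in the video; the MXNet detector is assumed to hand back per-frame bounding boxes):

```python
import time

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

class TrayTimer:
    """Remembers when each detected fry tray first appeared and reports its age."""
    def __init__(self, iou_threshold=0.5):
        self.iou_threshold = iou_threshold
        self.tracked = []  # list of (box, first_seen_timestamp)

    def update(self, boxes, now=None):
        """boxes: tray bounding boxes from the detector for the current frame."""
        now = time.time() if now is None else now
        next_tracked = []
        for box in boxes:
            # Match against trays seen in earlier frames; unmatched boxes are new trays.
            match = next((t for t in self.tracked
                          if iou(t[0], box) >= self.iou_threshold), None)
            next_tracked.append((box, match[1] if match else now))
        self.tracked = next_tracked  # trays no longer detected are dropped
        return [(box, now - first_seen) for box, first_seen in self.tracked]
```

Feed it the detector's output once per frame and it yields each tray's waiting time in seconds; the freshness thresholds would then just be comparisons against those ages.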



This seems like such an over-engineered solution.

It would've been much simpler to have a plate with multiple pressure sensors and a multi-color LED per spot. When you set the fries down on the sensor, it activates and the LED turns green. After X amount of time the LED can turn yellow, meaning the fries are becoming stale. Finally, after X+N time has passed and the fries are no longer fresh, it can turn red. Removing the fries turns the LED off. Aside from being way cheaper, I'd conjecture that this would be much more reliable. I'm pretty sure I could get a prototype of this up and running over the course of a weekend or two.
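
A minimal sketch of that per-spot timer logic in Python (the thresholds, the 50 g occupancy cutoff, and the names are all made up for illustration; the real sensor reads and LED writes would depend on the hardware):

```python
import time

# Assumed thresholds: the "X" and "X + N" from the description above.
FRESH_SECONDS = 5 * 60   # after this, fries are getting stale (yellow)
STALE_SECONDS = 8 * 60   # after this, fries are no longer fresh (red)

def led_color(occupied, elapsed):
    """Map a spot's occupancy and elapsed time to an LED color."""
    if not occupied:
        return "off"
    if elapsed < FRESH_SECONDS:
        return "green"
    if elapsed < STALE_SECONDS:
        return "yellow"
    return "red"

class FrySpot:
    """One pressure-sensor spot on the plate."""
    def __init__(self):
        self.placed_at = None  # timestamp when fries were set down, or None

    def update(self, weight_grams, now=None):
        """Call periodically with the current sensor reading; returns the LED color."""
        now = time.time() if now is None else now
        occupied = weight_grams > 50  # assumed cutoff for "a tray is sitting here"
        if occupied and self.placed_at is None:
            self.placed_at = now      # fries just set down: start the clock
        elif not occupied:
            self.placed_at = None     # fries removed: reset
        elapsed = now - self.placed_at if self.placed_at is not None else 0.0
        return led_color(occupied, elapsed)
```

Poll each spot's sensor in a loop, call update(), and set that spot's LED to whatever color comes back.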

If you wanted to get really fancy, I guess you could also track the room temperature and moisture levels and use those to get a better guess of how long a batch of fries will remain fresh. Although I don't know if environmental factors like these have enough impact on fry freshness to be worth taking into consideration.

Anyway, it looks like they were just doing this for fun and learning, so I guess it doesn't really matter.


This was a good learning experience for our engineers at our Innovation Center at Georgia Tech in Atlanta. It may bear fruit in the form of a useful solution in the future, though. What is unique to Chick-fil-A is 'volume': we do a lot of sales in our restaurants, so anything we can do to make our team members' lives easier is important to us. We want them to enjoy their jobs, and we want to do the best we can to consistently create high-quality food experiences for our customers. Our teams in restaurants are the heroes, but we are trying to use technology to help them do what they do. <thumbs up>


The computer-vision-based version means you just take the first fry off the rack. Yours means the employee needs to check the LEDs each time (which may only take a second, but is repeated hundreds if not thousands of times a day), reach farther for the yellow one in the back, etc. That's a lot more sensors to break down than a single camera, too.


Doesn't look like they actually use that. It seems more like a research project where they wanted to see if it was feasible.


Or more like a fun summer intern project, really. I'm sure it was a cool learning experience but things like that are rarely ever deployed.


My life is complete.


HA!!!



