I got my first taste of real computing from a guy in the next town who would buy dead PDPs from local McDonald's restaurants, fix them up, and sell them on.
He always said the hardest part wasn't replacing broken parts; it was that the machines would invariably have soda spilled in them, no matter where they'd been stored. Cleaning the insides was his most time-consuming task.
One of the advantages of an Intel NUC is that it probably has a lot more horsepower than an old PDP-11. It's probably also a smaller target for wayward beverages.
I'd like to hear the story of why a McDonald's would need a PDP. Weren't they... expensive? They'd be at the heart of a university's IT department, or running a steel mill. Stuff like that. But until PCs came out, at a restaurant I can only picture registers and a dumb terminal hooked up to a distant mainframe.
The logistics improvements from early computers were massive. At today's equivalent prices it wouldn't be worth it compared to other options, but going from paper to computing wasn't just an incremental improvement; it enabled entirely new capabilities.
It wasn't like having an accountant on staff full time to crunch all the numbers was cheap, especially when you'd need 3-4 accountants just to cover 24-hour service, regardless of the workload.
I spilled beer all over my laptop keyboard once, so I stripped it down and cleaned the whole thing with isopropyl alcohol. It was a really annoying task; I can't imagine how annoying it would be with soda in a PDP.
I'm just trying to understand. I see three of those machines inside a pretty nice rack, with switches, routers, a keyboard, and I'd assume some fairly high-quality hard drive / RAID setup, etc. Then the blog post claims, "We run our Edge infrastructure on commodity hardware that costs us, ballpark, $1000/restaurant." Just the Intel NUC I can find on Newegg costs $349 each, retail, with no memory or hard drive installed. I know they're likely getting amazing volume discounts, but it still seems amazing that the whole stack could possibly be only $1000 each.
Caleb,
I'm working on a project with a similar architecture (on-prem services). I'm curious what your auth pattern looks like, as it's something we've struggled with, mostly the balance between being convenient and being secure. It seems like each site needs an API key to access your cloud. Do you have an auth pattern that would prevent the key from being readily available to an attacker who got access to the machine? Or do you kinda just say that if they get access to the machine, it's game over? Also, with shoddy networks it's difficult to be confident that key rotation will happen successfully. Happy to get your thoughts. Cheers.
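To make the question concrete, here's roughly the shape I mean -- a minimal sketch, not anyone's actual scheme (the endpoint, payload, and SITE_KEY handling are all made up for illustration): the on-disk key is only used to bootstrap short-lived tokens, so a one-time copy of the disk ages out quickly.

    // token_client.go -- hypothetical site-to-cloud token exchange
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    // exchangeURL and the request/response shapes are assumptions.
    const exchangeURL = "https://cloud.example.com/v1/token"

    type tokenResp struct {
        Token     string    `json:"token"`
        ExpiresAt time.Time `json:"expires_at"`
    }

    func fetchToken(siteKey string) (*tokenResp, error) {
        body, _ := json.Marshal(map[string]string{"site_key": siteKey})
        resp, err := http.Post(exchangeURL, "application/json", bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var t tokenResp
        if err := json.NewDecoder(resp.Body).Decode(&t); err != nil {
            return nil, err
        }
        return &t, nil
    }

    func main() {
        // Better: seal the key in a TPM instead of a file or env var.
        tok, err := fetchToken(os.Getenv("SITE_KEY"))
        if err != nil {
            fmt.Fprintln(os.Stderr, "token exchange failed:", err)
            os.Exit(1)
        }
        fmt.Println("short-lived token valid until", tok.ExpiresAt)
    }

Of course, short-lived tokens only limit a one-time leak; an attacker with persistent access to the box can keep exchanging, which is exactly the "game over" question above.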
Just curious (since I work on edge computing products): the deploy here seems rather complicated. What are your feelings on the matter, and is there a market for simpler deployments?
I'd assume it's 3x$349 that they're referring to in the article, since most places these days already have some networking equipment (probably including a racked switch someplace) in order to provide guest wifi, connect POS terminals, etc. So that portion isn't directly tied to this effort.
This is Brian -- I wrote the article (I suspect I will be typing that a lot tonight). We do clusters with 3 nodes per restaurant, so at full scale (when we have rolled out to every restaurant in the chain) that will be ~6000 nodes, growing by new stores × 3 going forward. This will support an estimated 100k IoT "things" of various types over the next year and a half to two years.
Hey Brian, I have been investigating deploying a very similar stack to what you guys are using now. How are you handling onsite load balancing? Is it a simple round robin type load balancer at the router level?
Also is Highlander open source? I don't see a link to it in the article.
We will open source Highlander eventually, but it's not quite there yet.
We use a VIP that the NUCs share... i.e., one of the three will always hold the VIP, and if it dies another NUC grabs it. This is a poor man's load balancer in that sense, because we only have the NUC hardware onsite.
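For anyone unfamiliar with the pattern: this is essentially what VRRP gives you. A minimal sketch with keepalived -- keepalived itself is an assumption here, the comment doesn't name a tool, and the interface name and addresses are placeholders:

    # /etc/keepalived/keepalived.conf (one of the three NUCs)
    vrrp_instance restaurant_vip {
        state BACKUP              # let priority elect the holder
        interface eth0            # placeholder NIC name
        virtual_router_id 51
        priority 100              # a different value on each NUC
        advert_int 1              # heartbeat every second
        virtual_ipaddress {
            10.0.0.100/24         # the shared VIP clients point at
        }
    }

Whichever node holds the VIP answers all traffic; when its advertisements stop, the next-highest-priority node claims the address. As noted, that's failover rather than load spreading.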
NUCs are surprisingly well-built little machines. I have one in my car, and I give them my absolute seal of approval. I've heard of people running little VMware clusters on them too.
It's actually surprising that people think they're not "normal" computers. I provisioned a few of them, and they're little beasts. Most in use are i7, 16-32 GB RAM, dual NVMe. The Skull Canyon NUCs are extremely powerful, and on the lower end the Atom-based units are solid too. I also ran a NUC in my car for a while, mostly stumbling radio/wifi spectrum as I drove around, but it was perfect with its relatively low power requirements.
I have a custom piece of software that powers a display in the car, syncs and plays my music, and logs GPS. It can also manage audio and video recording, and it's intended to handle some future features[1] I have planned, so I wanted to over-spec rather than under-spec.
[1] It's the same software that runs my home automation system, so an inevitable feature is my car and home units interacting. I wrote half of a piece of navigation software before I stopped, and the entire interface is intended for voice control, so I need to throw that in at some point too. I intend to build a CAN interface to connect to the car, but I've hit a few roadblocks getting started on that project.
Have you documented any of this or written up your home automation endeavors? I'm looking to "Tony Stark" my house up a bit (i.e. voice control, some touch-enabled panels in the kitchen/office) but want a bit more control than something purpose-built and off-the-shelf, while also being a bit more insulated than Alexa or Echo. The NUC has been on my radar as a device that I may go with as the brains of the operation, and I'm just curious to read what design and implementation patterns others have taken.
I get asked to do more of a "write-up" a lot, but it's hard to figure out how to format or frame it. I have a fairly large pile of opinions about how one should go about doing this, but it's hard to concisely organize my thoughts on it into a coherent write-up.
The code is here: https://github.com/ocdtrekkie/HAController but I don't know if I'd really recommend others use it. The main perk to me is that it's designed around what I want and use (I tried a lot of alternative options before rolling my own), so unless you also really like Visual Basic code and have a brain ordered bizarrely similarly to mine, you may want to start somewhere else.
I'd love to chat about what I've learned along the way or what ideas you have (because I might borrow them!), if you want to chat elsewhere hit me up at inbox (at) jacobweisz (dot) com
I once left the reading lights on; the car was supposed to turn them off when the doors locked, but somehow didn't. I found myself calling AAA in the morning for a jump start.
Also curious: does GP have some way to shut down gracefully when the car turns off, and reboot quickly when it turns on? You could potentially handle this with a laptop and some battery settings, but I'm not aware of an external battery solution for the NUC.
I currently use a laptop power adapter for the cigarette lighter, which is only on when the car is. The NUC's BIOS is set to turn it on when the power comes on, but I power it off manually before turning off the car. This is not ideal.
There is an external battery solution for the NUC, and I bought it! http://www.mini-box.com/NUC-UPS But I haven't switched to it yet. The big upside is that it will gracefully shut down my NUC after I power off the car. The catch is that if I keep using the cigarette lighter for power, the NUC loses the ability to turn itself on when power appears, because with the batteries inline, power is always present. The NUC-UPS supports a different power connector that can turn the PC on with the ignition, but I haven't installed that in my car yet; it's a bit more work, and I'd need help from someone who knows more about the car's electrical system.
As far as fast power-on goes, it's a pretty high-end i5 NUC with an NVMe SSD. It boots pretty darn fast, and my software takes less than a second to load once the OS is up. The slowest part of boot is that I don't want my location history easily stealable, so the disk is encrypted and I have to key in the code to unlock the machine. (I'm looking at a security key or similar to replace this step in the future.)
As a note, I've more or less specced out what's involved in a solar setup on the roof of my car to independently power the computer all or most of the time when the car is off, and otherwise charge a secondary battery off the car's inverter, but there's no reasonable or sane reason to do it. ...But I thought about it.
Thanks. Curious, where do you stash the NUC in your car? I imagine it's insulated and not exposed -- otherwise a hot day or a very cold day might kill it.
You don't want insulated and hidden, you want perforated and airy (while still being discreet).
The NUC-like hardware that goes into tanks has a chassis consisting of a mesh cage to allow maximum airflow, since cabin temperature alone can exceed 100 F.
Actually, it's pretty important for the NUC to get good airflow so it doesn't overheat; insulating it would be a bad idea! I haven't had any temperature-related issues with it, partly because most of the time the car is running I'm in it, so I keep the cabin at temperatures I'm comfortable in. And the NUC's operating temperature range is pretty wide as well, which is good, because weather here ranges from -10 F to 110 F.
Probably the only real concern is condensation when heating up the car on a cold day, but it hasn't been an issue so far, perhaps because the computer isn't near a window.
Since I'm not a hardware guy, I try to avoid assembling my own gadgetry as much as possible on this project. I use consumer-grade home automation modules and standard computer parts where possible. (Everything in my car is largely interconnected with USB.) And generally, the NUC-UPS experiment aside, I try to avoid parts that may be hard to replace. Which is to say, I don't have time to build, test, and fix a random Arduino gadget. ;)
I've been planning to make some custom-cut USB cables and a 3D printed part just to make the setup of my display a lot cleaner... and I haven't done either of those and it's been a year or so since I planned to.
Nice machines, fast enough, VESA mount to the back of a monitor, and easy to upgrade. They've basically replaced Mac minis and iMacs in our labs. Have not had one bit of trouble with any of them.
They're kind of a pain in the butt for us... For example, it would be nice for them to have remote admin, more than one NIC, and to not freak out when HDMI is not plugged in, etc.
Interesting. I'd assumed smarter edge/IoT nodes would be ARM/Raspberry Pi class, yet it seems the hardware can go as high as (or skip straight to) an Intel NUC.
This is Brian -- I wrote the article. For what it's worth, we considered trying to run our clusters on an array of cell phones, since they natively support connectivity fallback and pack a really efficient resource punch in a small footprint. One of our engineers came up with that idea. Given the challenges of compiling for ARM and the relative ease of moving forward with x86, we went that way to start. We tried really hard to balance the "ideal" with the short-term "MVP", and I think we landed in a decent spot with the NUCs. They give us enough power to run the things we need in the short term, and we have some capacity to scale horizontally in the future as our needs increase. We tried to think "cloud native" at the edge as much as we could without truly being a cloud.
A Raspberry Pi can't be used for a semi-serious server application due to microSD card write-wear issues. The alternative of attaching external storage to an RPi over the USB bus isn't reliable or suitable for this sort of purpose either; it's nowhere near the longevity of something with a native SATA or M.2 interface and a real SSD.
There's also the added complication that it's an ARM-based device, so in some cases you might run into compatibility issues, or testing your container locally becomes problematic.
"Raspi cannot be used for a semi serious server application due to microsd card write wear issues."
As an end user, I run a personal authoritative DNS server that has small RAM requirements. The RPi (or other SBC) boots to an mfs-mounted root, then mounts all directories as tmpfs. Then I remove the SD card.[1] As such, the logs for this server, which are rotated automatically and never exceed 5 MB in total, are written to RAM.
[1] I only use the SD card to boot. The only files on the card are a bootloader, a bootloader config, and two kernels, each with an embedded filesystem. If updates are necessary, I make them to one kernel at a time; the other is the backup. The bootloader and its config let me specify which kernel to boot.
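For a rough idea of the tmpfs side of such a setup, here's a sketch; the paths and sizes are assumptions, and the kernel-embedded root filesystem described above isn't something fstab alone captures:

    # /etc/fstab -- keep every writable path in RAM so nothing touches the card
    tmpfs  /var/log   tmpfs  defaults,noatime,size=8m   0  0
    tmpfs  /tmp       tmpfs  defaults,noatime,size=32m  0  0
    tmpfs  /var/tmp   tmpfs  defaults,noatime,size=16m  0  0

The obvious trade-off is that everything in RAM vanishes on power loss, which is fine for a DNS box with 5 MB of rotated logs and is exactly why this works here.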
Don't most modern SD cards contain some wear leveling logic?
I did a bunch of research on this a while back and the conclusion I found was: yes, but buy name-brand better rated cards to avoid cheapo cards that do not do wear leveling or do it very badly.
It's nowhere near as good as SSD drives, but it's better than just a naked flash chip... unless I'm wrong.
Hey if you see this can you answer one more question:
How much abuse was your high availability product delivering to these cards in terms of writes? Was it something like a video recorder, database, cryptocurrency, or some other application that did large amounts of write I/O?
I ask because we're about to ship something that uses SD cards, but the I/O is very low. It's a network appliance and doesn't do anything locally that is high write throughput.
The Linux kernel in Raspbian treats the SD card like a normal block device. I'm not aware of any special wear-leveling optimization at the OS level, or on the board's SD card controller.
SD card wear levelling is usually handled by the controller inside the SD card. SD cards aren't like having raw access to flash; you send write commands over the bus (SPI or the native SD interface), and the controller in the card decides where the data actually goes.
This talk about SD cards is really great and explains some of this:
Working with the Raspberry Pi and Docker has been surprisingly difficult. Docker's multi-arch support is poor at this point, and I/O-heavy applications seem to bottleneck at the network.
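One workaround for the local-testing half of this, sketched under the assumption of an x86 box with Docker and binfmt_misc support: register QEMU's user-mode emulators so ARM images at least run locally.

    # Register QEMU user-mode handlers with the kernel (binfmt_misc)
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

    # ARM images now run (slowly) on the x86 host, good enough for smoke tests
    docker run --rm arm32v7/alpine uname -m    # prints armv7l

Emulation won't catch everything, so it's still worth testing on real Pi hardware before shipping.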
Spoiler: It's a stack of Intel NUCs.