Nerves – Craft and deploy bulletproof embedded software in Elixir (nerves-project.org)
270 points by punnerud on April 24, 2018 | 74 comments



I think Nerves is cool because it really lowers the conceptual overhead for the cohort of web developers using Elixir to hack on hardware projects. There was a whole track on Nerves at ElixirConf in Bellevue last year, and I saw a ton of people who'd never done anything with hardware get really excited about it.

The keynote by the Nerves creator Justin Schneck was particularly good.

https://www.youtube.com/watch?v=bd_EIWU9GzQ


Heh, when I think of embedded code I don't think of 12 MB. That's not to say this isn't useful. I usually try to size the hardware to the task. Sometimes the task calls for a little 8-bit micro, sometimes analog-only.

When the job calls for a heavy linux-running processor, this seems like it could be interesting, as long as I can dig out all the details of what it's actually doing. The chips I use usually have an errata document 20+ pages long, and we usually find some new bugs to add to it during development. It's hard to code around hardware bugs period, and that's in a language hardware guys know, like C or assembly. Debugging hardware code written in Erlang sounds... challenging.


> Heh, when I think of embedded code I don't think of 12 MB.

ARM processors are so cheap these days that I think your idea of “embedded” is a bit stagnant or too focused on one field. I’ve built and shipped a product that had an ARM processor and ample flash (among a bunch of other components), packaged the size of a key fob, and the BOM cost is way less than I used to pay for Arduino or other microcontrollers.


It's not that cheap. Having recently gone through the process of looking at a hardware refresh of one of our devices, anything that runs Linux ended up too expensive, as it really needs something with an MMU and more RAM. Not to mention boot times are an issue. Real time is trickier, and if low power is an issue, then it's really a bad idea.

Rust is kind of interesting in this space, but a little bleeding edge; we'll likely look at adopting it later.

Ended up picking a Cortex-M3 and using FreeRTOS, with a C / Lua mix. Not because I particularly prefer them, but because that combination requires the fewest compromises. Also, writing robust embedded C code is pretty well understood, with many common patterns. Still, it requires being more diligent and disciplined than in many other languages.


That's very much dependent on volume. $0.50 @ 1M units = $500,000.

You'll always see the crunch at high volume; it's just at small to medium volume that your options open up. Back when I was in the smartphone business we'd easily see 10M+ volume on a single device, and the associated BoM was tuned to a tenth of a penny.


Even at quantities of 1M, you'd barely get ARM processors for $0.50 each, let alone the entire BOM cost of a device.


> and the BOM cost is way less than I used to pay for Arduino

One would hope so; Arduino is a retail packaging of other chips into a kit.


Read what you quoted, not to mention what I said in full, and you’ll understand how your comment is of questionable value. I clearly stated the entire BOM cost (which includes all other chips needed to make the device function), so your point is moot at best.


> Debugging hardware code written in Erlang sounds... challenging.

Depending on which part of the hardware code you need to debug, Erlang could make it easier; once you get the Erlang shell running, you have hot code loading, some pretty extensive debug hooks, and nice bit-matching syntax. If the hardware code you need to debug is in the chain of getting to an Erlang shell (getting the system running to the point where you can have a serial terminal and boot an OS), I don't think Erlang makes much of a difference -- projects big enough to run Nerves are likely going to run a full OS kernel, and Nerves uses a Linux kernel; once you can boot a Linux kernel, you've probably done a lot of the hard work.
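
To give a flavor of those debug hooks: from a running shell you can turn on call tracing with OTP's :dbg module. A rough sketch (the module and function names here are made up):

    :dbg.tracer()                          # start the default trace message collector
    :dbg.p(:all, :c)                       # enable call tracing for all processes
    :dbg.tpl(MyApp.Sensor, :read, :x)      # print calls to and returns from MyApp.Sensor.read
    # ... exercise the hardware path and watch the trace output ...
    :dbg.stop_clear()                      # stop tracing and clear all trace patterns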


Straight Erlang is not hard to debug. Functional language, little shared state, and much behavior in apps (for the Erlang geeks: pun intended) is based on message passing between processes, which is very well defined. That said, it gets much more complicated if you are using ETS or Mnesia (which builds on ETS), because now you have shared state, and that state has a rich API with lots of things that could be happening.


ETS interactions can be modeled as sending calls to a database server (but faster); you can use the debugging hooks to list all the calls to ETS too.

My recommendation is to not do database reads and writes willy-nilly all over the place, but to actually send sensible messages to a real server process that does the interfacing with the database. Sometimes it's useful to cheat and read ETS directly, but it's really nice for writes to go through a server process, because the server process's mailbox provides an ordering for your writes, and you get to skip all the complexity of overlapping writes. In an Mnesia system, that means writes should go to only one of the peers. (If you can divvy up the writes to different servers based on the keys or something, then you can also divvy up the servers onto different nodes.)
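
A minimal sketch of that shape (all names here are hypothetical): one GenServer owns the table and serializes writes through its mailbox, while reads go straight to ETS:

    defmodule MyApp.Store do
      use GenServer

      def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

      # Writes funnel through the server process, so its mailbox orders them.
      def put(key, value), do: GenServer.call(__MODULE__, {:put, key, value})

      # Reads "cheat" and hit the protected table directly.
      def get(key) do
        case :ets.lookup(__MODULE__, key) do
          [{^key, value}] -> {:ok, value}
          [] -> :not_found
        end
      end

      def init(nil) do
        :ets.new(__MODULE__, [:named_table, :set, :protected, read_concurrency: true])
        {:ok, nil}
      end

      def handle_call({:put, key, value}, _from, state) do
        :ets.insert(__MODULE__, {key, value})
        {:reply, :ok, state}
      end
    end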

All that said, distributed systems give rise to emergent behavior, but that's true if you program them in erlang, c, or assembly; it's just a little easier to get to crazy big systems in erlang. :)


Agreed, on all points, and that's basically what I do usually. I typically have a gen_server with a behavior defined data access and update message API, then that gen_server is the only process that can access a Mnesia table. I've seen a lot of code that doesn't do that though, and you still need to debug the gen_server if something goes wrong. If you use the gen_server approach though it is much easier to debug.


> when I think of embedded code I don't think of 12 MB.

That's fine for a one-off or prototype, but when a company creates millions of devices, the extra BoM cost becomes more expensive than developing a C version of the code to run in <1MB of flash.


These days embedded is anything from multi-core systems with gigabytes of RAM to tiny SoCs with barely a core and a few kilobytes of RAM.


Here's an example posted here yesterday of fixing a barcode scanner using Elixir and Nerves:

https://news.ycombinator.com/item?id=16902628


It inspired me to make this post. Something that useful couldn't stay hidden in the dark.


I've been following some of the interesting Erlang stuff for distributed embedded systems. A similar project which caught my eye is GRiSP https://www.grisp.org/ Basically the BEAM VM ported to run directly on bare metal using the RTEMS RTOS (think of it as a library that allows you to run a Unix process on bare metal).


I'm one of the developers on the GRiSP project.

Our main difference is, as you say, that we combine Erlang with RTEMS to make the Erlang VM essentially be the kernel (no Linux running underneath). This lets us use Erlang's inherent soft real-time properties more reliably and allows us to do more hard real-time things.

Our low-level drivers for accessing hardware are written in C, and high level drivers are written in Erlang (for development speed, readability, testability, line count etc.) but can of course be written in C if you need the performance.

I'd be happy to take questions about the project.


Project looks very interesting! What hardware is supported? I've seen you produce your own prototype board, but what about some off-the-shelf open-source, open-hardware solutions like the BeagleBone Black or NextThing's CHIP?


RTEMS supports all 32-bit hardware architectures. We also run it on a PowerPC controller for a customer project, but it also runs on SPARC, MIPS, all ARM variants and many other architectures.

We don’t use the MMU, so we don’t need one. Some embedded chips even have small MMUs that allow them to run a Linux-sized OS, but since they have very small translation lookaside buffers, they need to traverse the in-RAM page table very often and performance is quite poor then.


Thanks!

Right now only the GRiSP board is officially supported, although the project is made in such a way that new platforms should be easy to add.

We're deliberately focusing on smaller platforms than the ones you mentioned. If a small Linux kernel can fit together with Erlang, you don't really need the RTEMS combo as much. That being said, we're happy to receive any contributions for other platforms, even larger ones!


What makes Elixir/Erlang a good fit for the RPi? Compared to, let's say, Rust or Lua. Serious question, no offense.


There are a few reasons to use Nerves:

1. It has a good system to hot-load the code onto the Pi and immediately start running your new code, making deploys more efficient than with other compiled languages.

2. Erlang uses processes which are very resilient and a good fit for projects that require constant streams of data & lots of connections, which can be relevant in IoT. Basically the jobs restart themselves in a robust fashion.

3. Erlang runs on BEAM, which is a well-established VM with years of development behind it, making it a robust core for your project compared to some other operating systems that you may load onto your RPi. I don't think this is the biggest reason, but it is one.

The hot reloading & concurrency model are my favorites, & Elixir is just fun to build in as well. I'm already using it for web, so bringing it into IoT makes it natural for a mixed web/IoT project.


Since the parent asked for a comparison with Lua and the Pi: I'm running a digital signage service (https://info-beamer.com/hosted) that uses a custom minimal Linux (~35MB) on the Pi and a program written in C that is scriptable in Lua. One of the key features is hot reloading of code (and visual assets). Thanks to the flexibility and robustness of Lua, that works incredibly well: you can even push new code to the central website using git; it instantly deploys it to connected devices and they reload their code. If done correctly you can update the content and logic of your displays without any interruption. The C code has been pretty robust for a few years now and never crashes or leaks memory.


That's really fun! Back in another lifetime (2006) I co-founded a kiosk company. I wrote a platform with a C core and Lua on top and used it to manage some pretty big kiosk networks, as well as handle all of the "server side" logic of our apps: pathfinding, hardware integration/card readers, printing, etc. I definitely miss working with Lua and C. We used Flash for the UI on most of those machines and our shit looked _sooo_ much better than anything else on the market at the time. Funny enough, I used the bindings for SVN to programmatically handle pushing out updates; I imagine git would have been much more pleasant.

Anyway, thanks for the trip down memory lane.


This is great. How is the business side of it?


Looking good. Once customers realize that our platform is far more powerful than just throwing a browser at everything, they get really excited. You can fully automate everything with the API, build perfectly smooth 60fps content and even play synchronized content across any number of screens. And a lot more. Coupled with the low cost and the reliability of the Pi, it really looks great. Still, getting the word out is difficult as the Pi is still considered a toy by many.


Just saw the demos, and they look great. Back when I ran this sort of thing I moved from Pis to Android boxes for our own in-house system for SVG support (https://www.flickr.com/photos/ruicarmo/albums/72157643937892...) but often wished I’d had a simpler, more efficient setup.

How good are the Lua bindings for OpenGL ES? Are they as nice as Löve? I tried https://www.mztn.org/rpi/rpi_ljes.html once, but it was a bit fiddly.


info-beamer pi (the software that runs in the Pi and drives the output) doesn't expose OpenGL directly at a very low level. Instead it provides a higher level API to draw and move images, fonts and videos around (see https://info-beamer.com/doc/info-beamer#referencemanual for the complete API). So it's indeed similar to Löve. As a programmer you really don't have to know OpenGL to get anything done. Right now info-beamer pi doesn't have SVG support but images and even videos get you pretty far for most effects I can think of.


Just a couple of thoughts/questions if you don't mind:

- How's the supply-chain side of things? For a while, vendors would only sell one Pi at a time, if they had stock at all. Have you had any problems with that?

- Why do your customers know or care that there's a Pi inside? It seems like an implementation detail that only a select few would ever think to ask, unless you're telling them. "COTS ARM Cortex A53" would likely be enough to make most people's eyes glaze over before digging enough to discover that there's a "hobbyist" board inside.

Edit: I looked at your website and get it :). You're selling hosting, not pre-packaged devices. That's really interesting!


Supply for the "normal" Pi has never been a problem AFAIK. Only the Pi Zero has these problems. But as you noted: we don't sell prepackaged hardware, only the software and service. Shipping/warranty/return handling seems pretty complicated and not worth the hassle (at least for now). Instead we made the installation as simple as possible, and as a result users only have to unzip a single ZIP file to an empty SD card and put that in their Pi. So far even the most non-technical users managed to do that.


That's awesome! Congrats! That's a really cool niche, and it sounds like you've executed on it in a pretty interesting way. Although I haven't done it recently, I did a fair bit of Lua embedded in C in my M.Sc. and found it to be a really smooth way to add a ton of power to C code without needing to do a bunch of work.


That sounds like such a perfect fit for Lua. We used it to the same effect in gamedev.

Coroutines are also amazing for sequenced AI routes.


Coroutines are also useful for some of the visual code I wrote: You can have functions that run through an animation, yielding for every frame. Or of course you can handle loading, displaying and teardown for content in one function. Pretty handy.


> 3. Erlang runs on beam which is a well established VM that has years of development behind it making it a robust core for your project compared to some other operating systems that you may load onto your RPI.

Not sure. Of course, the BEAM VM is robust and good for the things it's good at, but I don't think it's much more "robust" than, say, Linux, which seems pretty good, stable and robust these days. I'm not talking about software that runs on Linux, though the coreutils etc. are pretty stable at this point (understatement). Linux is a fine platform to run your project on, and I claim it's no less robust than BEAM is.

Nothing against your other points though.


I think you might be missing the point about BEAM. It's not that BEAM itself is more robust than Linux. BEAM and OTP enable the creation of highly fault-tolerant services through the use of supervision trees, among other things. These restart failed processes from a last-known-good state within microseconds. While Linux itself is certainly stable, it doesn't provide any analogous facilities for creating robust services.
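
In Elixir terms that looks roughly like this (Sensor and Reporter are hypothetical GenServer workers); if one child crashes, the supervisor restarts just that child from its original start arguments:

    children = [
      MyApp.Sensor,    # hypothetical worker modules that `use GenServer`
      MyApp.Reporter
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)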


Linux is robust and fault tolerant by design. This comes at the cost of higher performance overhead, of course.

Besides, why are you comparing BEAM to a full-fledged OS kernel?


Linux is not a high-integrity-class kernel.

In fact, in high-integrity certified deployments, it is another, actually robust and fault-tolerant kernel that runs the Linux kernel as yet another user process.

Kernels like INTEGRITY are robust and fault tolerant by design.

https://www.ghs.com/products/rtos/integrity.html


My understanding is that the OS is a few gigs in size and uses a lot of memory, and Nerves compiles to about 20 megabytes while still being resilient and easy to update.


Actually, a compiled Linux kernel is on the order of ~5 MB. A minimal root filesystem adds another ~50 MB to that. It starts to get bloated once you add kernel modules, drivers, etc.


That's 5MB compressed and also doesn't account for the fact that you'd also need an actual userspace of some description.


When I start writing applications in kernel space, I'll take "Linux" as a more serious contender for an application platform. You're the one making the comparison, so I'm pointing out that Linux provides very little in the way of application-level tools.

Edit: sorry, I see you're not the GP commenter.


> Linux is robust and fault tolerant by design.

How is Linux fault tolerant? Say a kernel driver starts overwriting kernel memory; how do the rest of the kernel subsystems isolate that fault and keep going without crashing?


To clarify, Linux is fault tolerant where it needs to be: keeping userspace faults contained.

And what happens when a core module that is part of BEAM starts overwriting critical BEAM VM data structures?

A more apt comparison would be software running on top of BEAM vs. a userspace process running on top of Linux.


> To clarify, Linux is fault tolerant where it needs to be: keeping userspace faults contained.

Agreed.

> And what happens when a core module that is part of BEAM starts overwriting critical BEAM VM data structures?

Segfaults and other terrible things.

> A more apt comparison would be software running on top of BEAM vs. a userspace process running on top of Linux.

That's a better analogy, of course. I've heard the BEAM VM described as an "OS for application code". Nobody would want to put their latest and greatest crown-jewel production code on a Windows 3.1 platform, where one segfault in the calculator process takes down the word processor, but that is essentially what is happening when using shared memory and concurrency units (threads, goroutines, coroutines, green threads, etc.).


> , but I don't think it's much more "robust" than, say, Linux,

It is much more robust than Linux. If the Linux kernel has a segfault and crashes, it takes everything with it when it panics. If one of a million Erlang processes, each with an isolated heap, crashes, it can probably be safely restarted (maybe along with a few others it is linked to).
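
A quick IEx illustration of that isolation (just an example session):

    pid = spawn(fn -> raise "boom" end)   # crashes inside its own isolated heap
    Process.sleep(100)
    Process.alive?(pid)                   # => false, the spawned process is gone
    Process.alive?(self())                # => true, the caller never noticed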


Bad analogy, but another commenter has already pointed this out. :)

But I do suppose that Linux has a higher chance of having a kernel panic than BEAM having an internal segmentation fault, simply since Linux has more code and thus a larger surface area for bugs.


Right, right. Agree. I think the right comparison is running on Linux vs. running on the BEAM VM. Linux became popular, among other reasons, because it allowed for strongly isolated processes. Those can crash or stop and that's fine; other processes of other users don't even care or notice. The Erlang VM solved the same problem for an application environment: millions of lightweight processes can run and individually crash or restart because they are isolated.

To extend the analogy: running with threads that share memory between themselves is a bit like running on a Windows 3.1 machine, where if the calculator is broken it can crash and overwrite the memory used by the word processor.


"What makes elixir/erlang good fit for rpi?"

Honestly, I'd say it isn't a good fit for any particular reason. jbhatab isn't wrong, but there are other environments that will have other advantages over Elixir/Erlang/BEAM, depending on what you want to do.

What is the case here is that the Raspberry Pi is basically a little computer and can run pretty much anything, Erlang included. Erlang was built in a world where a Raspberry Pi's specs would have been mindblowing, so it works just fine, just like anything else first built in the 1990s.


IMO the BEAM run time is ideal for it.

If I were writing code to drop onto a small device, I’d want the capability to efficiently have a lot of things run at the same time with a small processor, to know that a single heavy unit of work wouldn’t negatively affect the responsiveness of everything else and to know that all of those tiny pieces were built to basically never go down.

Personally speaking, it would be difficult to imagine using anything else for that type of work.


The ability to utilize the BEAM's fully pre-emptive processes makes it great for programming hardware tasks and higher-level tasks in the same environment. Compare that to other languages like Lua, Node, Python, or even Go, which all use various forms of cooperative multi-tasking or async I/O; in those systems you have to be careful not to block other critical tasks. That's a pain, but not the end of the world.

However, for me the best feature is that I can remotely log into a running BEAM VM and interactively explore the live system [1]. Since Erlang has been used in "embedded" type systems for a long time, there are a lot of useful libraries. For example, a tiny bit of wrapper code let me set up a secure remote Elixir REPL using standard SSH keys for our embedded devices [2]. You can also run entire "applications" much as you would a system service, but which you can communicate with natively in Elixir/Erlang. Also the support for running sub-processes as ports is really nice.

In general it's really more like a live operating system which you can program and investigate. The primary thing lacking is a capabilities system to allow running non-privileged code in a sand-boxed manner.

1: https://tkowal.wordpress.com/2016/04/23/observer-in-erlangel...

2: https://github.com/elcritch/iex_ssh_shell
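
As a tiny example of the port mechanism (the command here is arbitrary):

    port = Port.open({:spawn, "uname -a"}, [:binary, :exit_status])

    receive do
      {^port, {:data, output}} -> IO.puts(output)
    end

    receive do
      {^port, {:exit_status, status}} -> IO.puts("exited with #{status}")
    end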


Not an Erlang user, but I'm tickled by the idea of having a reliable cluster made out of unreliable little computers. Erlang has a great reputation for process supervision.


It would be fun to build a redundant hardware control system which used three cheap SBCs to do a form of triple modular redundancy [1]. It'd be straightforward to prototype with Nerves/Elixir!

1: https://en.m.wikipedia.org/wiki/Triple_modular_redundancy
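
The voting part, at least, is almost a one-liner in Elixir. A toy sketch (in practice the three readings would come from the three boards, e.g. via distributed calls):

    defmodule Voter do
      # 2-out-of-3 majority vote; a repeated variable in a pattern must match itself
      def vote([a, a, _]), do: {:ok, a}
      def vote([a, _, a]), do: {:ok, a}
      def vote([_, b, b]), do: {:ok, b}
      def vote([_, _, _]), do: :no_majority
    end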


The BEAM VM's preemptive multitasking may be a more efficient use of the RPi's limited CPU in comparison to the cooperative multitasking of, say, NodeJS.


Broadly speaking, Node is faster than BEAM: https://benchmarksgame-team.pages.debian.net/benchmarksgame/.... Unless you're going pretty crazy with concurrency, BEAM isn't going to catch up to Node.

BEAM is a beast when it's in its own runtime managing concurrency, but the bytecode interpreted language it implements is very "meh" when it comes to performance.

(Since people seem to take these sorts of assessments personally, full disclosure and cards on the table: I massively prefer the BEAM world to the Node world. Nevertheless, the facts are what they are. The interpreter for BEAM is just not where you go for performance.)


These are mostly computational tasks... Why would you use BEAM for these?


So, pure curiosity: Node is amazingly excellent at IO. It's like the one thing it does really well. It's traditionally pretty bad at computationally heavy tasks, but it still beats BEAM there.

What do you use BEAM for? Is rock solid supervision really its biggest boon?

And if that's the case, how does it compete with the proliferation of easy-to-use tech like Kubernetes that, more or less, solves the supervision problem in a simpler, easier-to-scale, and more abstractable way?

There are parts of Elixir I really quite enjoy, but I've long felt that BEAM is holding it back as much as it benefits it, similar to how the JVM is both a boon and a "modern curse" to Java. The 90s dream of having these VMs executing platform independent bytecode seems dated in the face of infinitely customizable VMs on cloud hardware. And process supervision at the application level also seems dated in the face of modern devops HTTP-level liveness and containerization.


Is Kubernetes actually easy to use? I'm also a little bit scared at the pace at which its development is happening. Maybe I'm old, but I'm worried that it will end up in a state like JavaScript, where I can't make heads or tails of things like promises and classes, async, etc.


Seriously. Elixir/Erlang are VERY clear about the VM (BEAM) not being particularly good at intensive computation. Use a port (or NIF if you really need to).


don't do intensive computation in a NIF! You'll screw up the scheduler. Use a port.


Since OTP 20 BEAM has had dirty schedulers for unsafe NIFs. They're more suited for computational tasks.


Is it possible to assign dirty schedulers to isolated cores? Like if you have 8 cores, can you say, "6 of these are for the regular schedulers and 2 are for the dirty schedulers"? If your CPU-intensive dirty code is running on the same cores that your normally pre-empted code is running on, I have to assume that performance is still going to degrade a bit.


Ok, I'm not an expert here, but you have several options available to you when determining your scheduler topology. What you'd want to look at if you really want to make sure to bind your dirty scheduler to a particular core is the +sbt option [1]. However, that's probably not a setting I'd tweak lightly; the OS in general and BEAM in particular are going to do a better job at that than you are under most circumstances.

There is definitely a cost to using the dirty scheduler, and if your NIFs don't need it, you're going to be paying the overhead for nothing. But obviously there's a plethora of uses for them when integrating with libraries that don't play nice with chunked work.

1: http://erlang.org/doc/man/erl.html#+sbt


Sure, but you might not care so much about throughput in a situation where latency is critical.


Erlang's bit-matching syntax, case statements for structs... incredibly elegant. I don't know about Rust or Lua; maybe they have this too.
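
For a taste, a small Elixir example of the bit syntax (the frame layout here is made up):

    case frame do
      <<0x01, len::16, payload::binary-size(len), _rest::binary>> -> {:data, payload}
      <<0x02, seq::32, _rest::binary>> -> {:heartbeat, seq}
      _ -> :unknown
    end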


In particular, the elixir_ale library is a very pleasant interface to various bits of hardware you might want to interact with at the GPIO/i2c/etc level.

https://github.com/fhunleth/elixir_ale
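
A quick example of what that looks like, if I remember the elixir_ale API of that era correctly (the pin and bus numbers are arbitrary):

    {:ok, gpio} = ElixirALE.GPIO.start_link(17, :output)
    ElixirALE.GPIO.write(gpio, 1)                         # drive GPIO 17 high

    {:ok, i2c} = ElixirALE.I2C.start_link("i2c-1", 0x40)
    ElixirALE.I2C.read(i2c, 2)                            # read two bytes from the device at 0x40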


Rust doesn’t have the bit matching syntax, but we do have match generally.


It'd be interesting to explore something similar to the bitstring [1] library for OCaml. Where should one start if exploring syntax extensions for Rust? I haven't looked into Rust in a while now, so I'm not sure how far one can get with macros these days.

[1] https://github.com/xguerin/bitstring


“Macros 1.2” is what you want to look up; it’s in FCP right now!


MetaLua does!


I think I met the guy behind Nerves a while ago when he presented at an Erlang/Elixir meetup in DC, and had a good conversation. Smart guy, and the project was impressive.

Also, as other commenters have pointed out, embedded doesn't mean what it used to. Embedded now can range from a custom real time operating system written in embedded C with memory allocations from a static pool to Java running on an Arm processor.


Are you part of the DC Elixir community? I am too. Haven't had too many meetups recently.

I too met the Nerves dude, very nice and approachable. I was proud to have such a project come out of my area.


I can only say that I enjoy Elixir a lot and wish I had a project in mind where I could use Nerves. I'm sure it will come.


This looks very interesting but I haven't found any mention of GSM connectivity. Is there any?




