Should you Rust in embedded yet? (kazlauskas.me)
164 points by blacksmythe on Feb 28, 2018 | 118 comments



Embedded developer here, ARM Cortex-M4 microcontrollers. I've been keeping an eye on Rust for the embedded space and although there has been a lot of movement in that area--particularly in the last couple of months--I'm not sure the value proposition fits the microcontroller market, where of course C is king.

While Rust has much to offer as a programming paradigm in general, the main value is in the borrow-checker (the linked transcript cites 'memory bugs' as the most common class of bugs). Embedded software practitioners long ago abandoned dynamic memory allocation and with it the 'use-after-free' and 'out-of-bounds access' bugs, instead re-defining the problem as one of latency (e.g. you'll need to process that static buffer before it gets re-used by your UART interrupt). Take away the borrow-checker, and Rust looks less compelling.

In time, Rust will find its niche in the embedded space, most likely occupying the high-level RPi/BBB/iMX SoC layer and perhaps working its way down to microcontrollers. As wiremine points out, it will require vendor support--moving away from your vendor's toolchain is a world of hurt that seasoned embedded developers just won't even consider. Pragmatism reigns: time-to-market and a cheap BoM are the main metrics, programming language a distant 10th.


Also working in the same space. I had the opportunity to evaluate Rust for our development environment back in late September. The killer feature it offered us is serde - rust's general purpose SERializer DEserializer library.

So much of our code is centered around taking measurements with a bare metal system and then transmitting them to a linux box for processing. Being able to just write `#[derive(Serialize, Deserialize)]` above a struct and then be able to send/receive it across the channel via `rpmsg.send(mystruct)` or `let mystruct: MyStruct = rpmsg.recv()` is magic. Furthermore, by encapsulating each possible message type as an enum variant, match statements provide a really great way for dispatching the message to the appropriate handler after we deserialize it.
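To make the pattern concrete, a rough sketch (the type and field names are made up here, and the rpmsg channel itself is vendor-specific, so it is left out):

  use serde::{Deserialize, Serialize};

  // One struct per measurement; deriving Serialize/Deserialize is all that is
  // needed to ship it across the channel.
  #[derive(Serialize, Deserialize)]
  struct VoltageReading {
      channel: u8,
      microvolts: i32,
  }

  #[derive(Serialize, Deserialize)]
  struct TempReading {
      sensor_id: u8,
      millikelvin: u32,
  }

  // Wrapping each message type in an enum variant lets the receiver dispatch
  // with an exhaustive match.
  #[derive(Serialize, Deserialize)]
  enum Message {
      Voltage(VoltageReading),
      Temperature(TempReading),
  }

  fn dispatch(msg: Message) {
      match msg {
          Message::Voltage(v) => handle_voltage(v),
          Message::Temperature(t) => handle_temperature(t),
      }
  }

  fn handle_voltage(_v: VoltageReading) { /* ... */ }
  fn handle_temperature(_t: TempReading) { /* ... */ }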

As for the borrow checker, I actually did find it useful in bare metal. But more for handling hardware resources. Different measurements require different sets of power supplies to be activated, and exclusive control over different I/Os, etc. The ownership model made it easier to ensure statically that we sequence the measurements in a way such that the different measurement routines can never reconfigure resources that are in use by a different routine.
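A minimal sketch of that idea, with made-up handle types standing in for our real peripherals; because the measurement takes the handles by value, any other routine that tries to touch them while the measurement is live fails to compile:

  // PowerRail and Adc stand in for real peripheral handles.
  struct PowerRail { /* vendor-specific registers elided */ }
  struct Adc { /* ... */ }

  struct CurrentMeasurement {
      rail: PowerRail,
      adc: Adc,
  }

  impl CurrentMeasurement {
      // Taking the handles by value: while the measurement is live, nothing
      // else can reconfigure the rail or the ADC - the compiler enforces it.
      fn start(rail: PowerRail, adc: Adc) -> CurrentMeasurement {
          // power up the rail, configure the ADC ...
          CurrentMeasurement { rail, adc }
      }

      // Finishing the measurement hands the resources back to the next routine.
      fn finish(self) -> (PowerRail, Adc) {
          // power down, deconfigure ...
          (self.rail, self.adc)
      }
  }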

Anyway, we sadly aren't using Rust yet in production, even after that. Holding off until we start the next product.


What data format do you use with Serde on embedded? JSON? I read somewhere that Serde works in no-std environments, but wasn't sure whether it does that with all possible data formats.


Serde is middleware, which really just shuttles calls between a serializable object and the serializing backend in a standard (and performant!) way. That middleware works in no_std environments, but not all backends do.

I'm not up to date on which backends support no_std, and which backends support no-alloc - some backends support no_std but require an allocator. When I looked into this in Sept, ssmarshal was the only general-purpose backend I could find that supported no_std & didn't need an allocator. There was some talk of adding no_std support to bincode - looks like it hasn't gone anywhere: https://github.com/TyOverby/bincode/issues/189
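From memory, using ssmarshal without an allocator looked roughly like this (the exact signatures may have changed since Sept, so check the crate docs rather than trusting this sketch):

  use serde::{Deserialize, Serialize};

  #[derive(Serialize, Deserialize, PartialEq, Debug)]
  struct Reading {
      channel: u8,
      value: i32,
  }

  fn roundtrip() {
      let reading = Reading { channel: 3, value: -42 };

      // Serialize into a fixed, stack-allocated buffer - no allocator needed.
      let mut buf = [0u8; 16];
      let used = ssmarshal::serialize(&mut buf, &reading).unwrap();

      // Deserialize from the same bytes; the call also reports how many bytes
      // were consumed.
      let (decoded, _len): (Reading, usize) =
          ssmarshal::deserialize(&buf[..used]).unwrap();
      assert_eq!(decoded, reading);
  }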

My one gripe with ssmarshal is that - in Sept - it would refuse to serialize collections whose size isn't compile-time constant. Obviously, you aren't going to be serializing Vec, Map, etc, in a no_std environment. But one could very well wish to serialize stack-allocated equivalents (e.g. arrayvec, where you have a vector that stores all data on the stack and grows up to the space allocated for it). In order to serialize an arrayvec, I had to write wrapper code that serialized the entire underlying fixed-size storage, regardless of how much was actually in use.

Things move fast in rust-land - ssmarshal might have a feature that allows serializing dynamically-sized types, or there might be new/more versatile backends since Sept.

I think the most difficult thing about deserializing JSON in a no-std environment is that strings can have escape sequences. So when you deserialize a string, you can't just pass a reference into your buffer up to the frontend - you have to decode the string. Usually one would heap-allocate in the backend for that, but if you have the ability to mutate the buffer you're deserializing from, I don't see any fundamental reason why you couldn't decode the string in place and then yield it - I'm pretty sure all encoded JSON strings are at least as long as their decoded version. The easy alternative is to deserialize the string as a [Char] sequence (i.e. pass it to the frontend character by character) and let the frontend worry about memory management, which isn't even necessarily so bad, with things like ArrayString.
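A sketch of the in-place idea, assuming you own the buffer mutably; it only handles the single-character escapes and punts on \uXXXX, but it shows why the decoded string always fits where the encoded one was:

  // Decode JSON string escapes in place; returns the decoded length. Every
  // escape sequence is at least two bytes but decodes to one, so the result
  // always fits in the original buffer.
  fn unescape_in_place(buf: &mut [u8]) -> usize {
      let (mut read, mut write) = (0, 0);
      while read < buf.len() {
          if buf[read] == b'\\' && read + 1 < buf.len() {
              buf[write] = match buf[read + 1] {
                  b'n' => b'\n',
                  b't' => b'\t',
                  b'r' => b'\r',
                  b'"' => b'"',
                  b'\\' => b'\\',
                  other => other, // \uXXXX etc. not handled in this sketch
              };
              read += 2;
          } else {
              buf[write] = buf[read];
              read += 1;
          }
          write += 1;
      }
      write
  }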


Just because Rust is known for its borrow checker doesn't mean that the borrow checker is the only kind of safety Rust offers. The old standbys are still valuable: bounds checks, no null pointers, compile-time data race detection.

> Pragmatism reigns: time-to-market and a cheap BoM are the main metrics, programming language a distant 10th.

I find that I can develop software faster with Rust than with C, simply because of language and standard library features: closures, iterators, a real string library, a vector type, hash tables in libstd, better unit testing support, etc. Development speed can affect time to market.
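A toy example of the kind of thing I mean; nothing here needs an allocator, and the unit test rides along in the same file:

  // Average all samples that fall inside [min, max], skipping outliers.
  fn average_in_range(samples: &[i32], min: i32, max: i32) -> Option<i32> {
      let mut count = 0;
      let sum: i64 = samples
          .iter()
          .filter(|&&s| s >= min && s <= max)
          .inspect(|_| count += 1)
          .map(|&s| s as i64)
          .sum();
      if count == 0 { None } else { Some((sum / count) as i32) }
  }

  #[cfg(test)]
  mod tests {
      use super::*;

      #[test]
      fn ignores_outliers() {
          assert_eq!(average_in_range(&[10, 20, 9999], 0, 100), Some(15));
      }
  }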

As the industry matures, though, reliability generally becomes more important. And "embedded" covers everything from IoT light bulbs (correctness less important…for now) to avionics (correctness extremely important).


You should be comparing to C++ and not C, since if you are looking for those features, you'll find them there.


Embedded systems typically don't use C++ either. I think it's still a fair comparison.


In my experience it's usually a mix of both.


Null pointer is actually perfectly fine on bare metal. There’s no memory protection, so it just points to address 0x0, and if you deref it nothing bad will happen.


This is not true at all.

First, of course, there is no requirement for NULL to map to address zero.

Second, even if you do end up there, many architectures don't even have memory at 0x0. Spurious writes are spurious writes regardless of whether or not you get a fault. You are still not doing what you want to be doing.


Even worse, there might be something there, like the exception vector table, in which case spurious writes become an attack vector.

https://cansecwest.com/slides07/Vector-Rewrite-Attack.pdf


The ones I worked with did nothing when reading from 0x0. I mentioned this because for someone who spends all their time well above bare metal this is not intuitive at all. And null is de facto 0 on all C compilers, even though it's not required to be. So let's not engage in hyperbole here.


But it is never what you want. So even if there are no immediate explosions, your LED's not going to blink the way you expected it. I'd say a clear, stern sign that something was wrong is better than limping along after dereferencing null.


Sure. I’m just saying that null pointer deref and read is not generally a fatal operation. Most programmers expect the program to die in this case. When it doesn’t, they are surprised.


In many ways I would say a non-faulting non-protected read or write from 0x0 is worse than a crashing protected read.


In many ways, generalized statements like this for all possible controllers and software out there are worse than understanding that accessing address 0 from high level code can have its uses and be completely correct. ;)

In a less condescending tone, if some HW designer put control structures at address 0 and they are writeable, then you have to deal with it in software. If there is no MMU that can remap that memory range, you will end up having legitimate memory accesses to that area. They can only be distinguished from accidental null pointer dereferences by context. This context would need to come from the developer by annotating the source somehow.


If the software is unintentionally reading or writing address zero, it’s by definition not functioning properly, but because of the lack of memory protection/safety this failure mode is going undetected. Rust won’t stop you from intentionally accessing 0x0.

This seems among the hardest bugs to track down I could think of, regardless of what is mapped at address zero. I don’t think it’s condescending to say software that begins operating incorrectly in an undetectable way is always bad.

Have you had to track one of these down before? I haven't, but I have had to track down silent memory corruption issues in memory-unsafe languages in the past, and it can take days on the desktop with good tooling. I can't imagine doing it on an embedded system.


If tracking down these kinds of errors on the desktop took you days, your tooling was maybe not good enough. I honestly cannot remember an instance where valgrind completely failed me. It is kind of my gold standard for debugging memory issues.

Also, some microcontrollers have amazing debugging support these days. Instruction tracing on Cortex M devices is a great feature, for example. The CPU will log every instruction that it executes over a serial interface for the hardware debugger to store. This allows you to go back in time after the fact, something that desktop debuggers have a really hard time with.


My point is, with a language like Rust, you can pretty much throw all this away. Why put yourself through this intentionally?

I also feel you're dodging my question. A 1-in-1000 spurious write to 0x0 is something you'll have a terrible time even identifying as the cause of your failure specifically because it is completely silent. Your embedded system just happens to stop working sometimes, where do you even think to begin? Assuming you know this is why, sure, throw on a watchpoint and call it a day, but how did you connect "heater stops heating" to 1-in-1000 write to 0x0?

You don't have to worry about that with a language that won't even let you make that invalid program in the first place.


Well, this hasn't even been an issue for us in the last couple of years, even though we use controllers without MMUs. We have a quite complex C++ codebase and our coding style catches a lot of these mistakes outright.

Rust is simply not an option for us because of a distinct lack of tooling available for it. We need an IEC 61508 qualified toolchain, including testing frameworks, and there is none in sight for Rust.

Also, out of interest: has anyone ever tried to write code in rust that is protected against bit flips caused by radiation? Our code is able to detect this because it stores long lived values also as bit inverted patterns and compares them regularly. This does not allow us to recover outright, but we can at least fail gracefully and attempt to reboot the device.
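Not that we've done it in Rust, but the complement-storage scheme itself translates directly; a hypothetical sketch:

  // A long-lived value stored together with its bitwise complement.
  #[derive(Copy, Clone)]
  struct Redundant {
      value: u32,
      inverted: u32,
  }

  impl Redundant {
      fn new(value: u32) -> Redundant {
          Redundant { value, inverted: !value }
      }

      fn set(&mut self, value: u32) {
          self.value = value;
          self.inverted = !value;
      }

      // None means a bit flip made the two copies disagree; the caller can
      // then fail gracefully and reboot, as described above.
      fn get(&self) -> Option<u32> {
          if self.value == !self.inverted {
              Some(self.value)
          } else {
              None
          }
      }
  }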


I take it you didn't do much development on machines without memory protection. The problem isn't reading from 0x0. It's writing to 0x0 (and beyond), clobbering system memory, memory-mapped IO, or even your own application.

I learned C on an Amiga, back in the late 80's. A bad pointer typically resulted in "Guru Meditation" error (OS crash), followed by a reboot.


You might want to check out https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h..., it goes into other types of concurrency in Rust, some probably more interesting to embedded devs than the general dynamic allocation stuff. The key takeaway is that the borrow checker isn't only good for dynamic memory allocation in GCless environments (in general even the stack allocated variables which we have in embedded profit from the borrow checker).

The other thing is that Rust offers a lot of very useful abstractions (e.g. enums with data) and a strong type system, which is sorely lacking for C.
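For example (an illustrative snippet, not from any real codebase), an enum with data plus an exhaustive match replaces the tag-plus-union pattern, and the compiler checks that every case is handled:

  // Each variant carries only the data that makes sense for it, and the
  // match below must handle every variant or the code does not compile.
  enum UartEvent {
      ByteReceived(u8),
      FrameError { at_byte: usize },
      Idle,
  }

  fn on_event(ev: UartEvent) {
      match ev {
          UartEvent::ByteReceived(b) => push_to_buffer(b),
          UartEvent::FrameError { at_byte } => log_error(at_byte),
          UartEvent::Idle => {}
      }
  }

  fn push_to_buffer(_b: u8) { /* ... */ }
  fn log_error(_at: usize) { /* ... */ }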

The embedded systems industry is one of the slowest industries to adopt new technologies, so I'm not holding my breath, but I think everything is there in Rust to make it a good embedded language.


You don't need dynamic memory to make a ton of memory errors. There are still lots of possibilities for dangling pointers to somewhere else on the stack, memory issues with object pools, memory issues due to race conditions in multitasked RTOS systems, etc.

On top of those there are misunderstandings about whether a char* is actually a pointer or an array, misunderstandings about who owns them, etc.

I've seen enough of these issues in an RTOS project to believe that Rust (and even modern C++) will be a huge step up in overall quality and productivity.


It is not just the borrow checker. The ownership system / move semantics and the powerful static type system with traits and generics help to create some interesting abstractions (which have very little runtime overhead) - please check http://blog.japaric.io/brave-new-io/ as well as other articles on that blog.


The borrow checker is not inherently about dynamic memory allocation. Heck, my toy x86_64 OS doesn't even have an allocator at all yet! Rust's features, including the borrow checker, are still useful here.

I really like https://os.phil-opp.com/page-tables/, for example, which talks about using Rust's type system to ensure safe page table usage.


>As wiremine points out, it will require vendor support--moving away from your vendor's toolchain is a world of hurt that seasoned embedded developers just won't even consider.

I won't consider a chip/microcontroller if it doesn't support open source command line tools.

Vendor support is technological debt that hinders every bit of testing and automation.


I used to work in a space that was once considered to be embedded (POS systems, including some based on low-cost 8-bit controllers, m68k-based stuff and low-end ARM at the end), and got out of it just before the entire thing was taken over by fast 32-bit or even 64-bit ARM CPUs, which eventually moved to running stock Linux or Android. As ARM chips become cheaper and cheaper, the low-level embedded fields will shrink further and further, and the line will shift more and more towards running a full-blown OS below the actual applications. I've even encountered a platform marketed as being "realtime" which was running Linux.

It will always be there, microcontrollers are dirt cheap, and in many cases more suited for industrial environments, and realtime will always be there, but it will become more and more specialized. And as you say, once you get down to a certain level the borrow-checker is less of a useful feature, so I expect it won't be such an interesting target for Rust, and this will probably remain C's stronghold.


And even where Linux won't go, mbed OS and its ecosystem could take over - as long as the requirements for faster time to market continue.


The borrow checker should be quite useful when using tasks and shared resources in an RTOS. Just closures alone is a benefit.


Your compiler adding the execution of an unpredictable number of CPU instructions at certain points, which can change with new compiler versions? That sounds like a nightmare for realtime applications. Yes, I have been in situations where I had to count the instructions and clock cycles to meet deadlines.


What feature adds that? The borrow checker, lifetimes and ownership are all compile time concepts in Rust.


Compile time doesn't mean the compiler won't add more instructions where you might not expect them.


That's true of any compiler of any language.


Are you suggesting the borrow checker adds instructions? I feel the meaning of my comment was pretty clear to mean “strictly compile time”.


Why do you think this will be a problem? I mean any more than with any optimizing compiler (C etc).


To me, the power of it isn't just in dynamic memory but also in that Rust has clear move vs ref vs bit copy vs logical copy (clone) and use-after-free protection.

As one example of the benefit of this: it makes me feel a lot more comfortable writing state machines in Rust's type system.

See https://hoverbear.org/2016/10/12/rust-state-machine-pattern/
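A condensed, hypothetical version of the pattern from that post: each state is its own type and every transition consumes the previous state, so invalid transitions simply don't compile.

  struct Idle;
  struct Configured { channel: u8 }

  impl Idle {
      fn configure(self, channel: u8) -> Configured {
          // write the channel to the radio registers ...
          Configured { channel }
      }
  }

  impl Configured {
      fn transmit(&self, _payload: &[u8]) {
          // key up on self.channel and clock the payload out ...
          let _ = self.channel;
      }
  }

  fn main() {
      let radio = Idle;
      let radio = radio.configure(7);
      radio.transmit(b"hello");
      // Calling transmit() before configure(), or configure() twice, would be
      // a compile error: those methods simply do not exist on the wrong state.
  }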


Why is C king? Why not C++? The difference is just a compiler, and you can use C in C++.


C is king because the industry is currently dominated by people who have been doing this since before C++ was a thing.

Additionally, most of these people are primarily electrical engineers, and don't have as strong of a background in computer science. They've been using C for decades and it does everything they want, why would they take the time to learn the boundless complexity introduced by a language that offers them (what they perceive to be) very little?

Talking to long-time C programmers about C++ is actually surprisingly difficult: https://youtu.be/D7Sd8A6_fYU


> C is king because the industry is currently dominated by people who have been doing this since before C++ was a thing.

No. If C++ were a great language those C coders would have moved over in an instant. One of the advantages of looking at C code is that you can actually figure out in your head what the assembly will look like.


They'd move over if technical considerations were the only reason why people choose programming languages. In my experience it tends to be psychological ones.

> you can actually figure out in your head what the assembly will look like

I keep hearing this, and I don't buy it. Did you know that `gcc -O3` will turn `int add(int x, int y) { return x + y; }` into an `lea` instruction? I doubt many people do.

And it's not like the compiler will magically switch to emitting different instructions if you compile the code above as C++...


Psychological ones? No, simply the lack of available reliable compilers for some platforms was my problem. I developed POS applications, where C++ would have worked fine, if we had a decent C++ compiler on every platform we wanted to support. Some platforms used GCC, but most used proprietary compilers - where C++ support was completely absent or very sketchy. When you can't use exceptions, the memory allocator is absolute garbage and leaks stuff on its own, and you encounter various random compiler bugs, you quickly decide to stick with plain old C. C++ in my experience was an absolute mess when it came to embedded work (note that the last embedded work I did dates back to 2006, so I'm not sure what the current situation is).

Also, C++ uses a lot more memory, which can also be a no-go when you get as little as 32kb for code+data, luckily with in-place execution.


> Also, C++ uses a lot more memory...

Depends on how you use it. "If you don't use it, you don't pay for it" is the C++ philosophy. If you use it as "C with objects", it should use no more memory than C with structs. If you use it as "C with polymorphism", it should use no more memory than C with function pointers.


Modern C++ does just fine on a Commodore C64.

CppCon 2016: Jason Turner “Rich Code for Tiny Computers: A Simple Commodore 64 Game in C++17”

https://www.youtube.com/watch?v=zBkNBP00wJE


Sad that it took C++ almost 40 years to get there.


I was doing C++ development on MS-DOS already in the 90's.

Never cared for C beyond using it in Turbo C 2.0 for MS-DOS, and later when required to use it for university projects and client projects that explicitly required ANSI C89.

So it wasn't 64 KB, but it was perfectly usable on 640 KB systems.

The main problem has always been fighting misconceptions.


> I keep hearing this, and I don't buy it. Did you know that `gcc -O3` will turn `int add(int x, int y) { return x + y; }` into an `lea` instruction? I doubt many people do.

Uh. That's a pretty obvious one.

Sometimes using address generation ports is preferable to ALU ports.

Also 'lea' can load the result in a different register from both operands, 'add' will always need to modify a register.

People have been using 'lea' for calculations since dawn of time, for example:

  shl ebx, 6
  lea edi, [ebx*4 + ebx + 0xa0000]
  add edi, eax
== y * 320 + x + framebuffer address.

This was a common way in DOS days for calculating pixel address in mode 0x13.


> One of the advantages of looking at C code is that you can actually figure out in your head what the assembly will look like.

One can do this with most C++ too. Though admittedly, non-tree virtual inheritance hierarchies, as well as member function pointers [et al] make this harder to achieve universally. I will also admit that it's easier to do with C.

If the optimizer gets its hands on either though, you may be in for a surprise no matter your choice.


C++ extra object copies can sometimes be pretty difficult to see without checking disassembler listing.


Only when compiling without any kind of optimizations or vector instructions.

In any case, C++ is copy-paste compatible with 99% of C89. So same benefits apply when using that subset.

It is plain language religion as observed at a few C++ conference talks.


I think you're not giving engineers enough credit here.

The world moved from C++ to Java on the enterprise side back in the late 1990's. Why? Java was arguably faster and easier to develop in, even though many thought (including me) that C++ was technically a better language.


So, I will let one of the renowned experts speak instead.

CppCon 2016: Dan Saks “extern c: Talking to C Programmers about C++”

https://www.youtube.com/watch?v=D7Sd8A6_fYU

Embedded Development with Dan Saks

http://cppcast.com/2016/10/dan-saks/

Regarding Java vs C++, yes the enterprise world has adopted Java, however as someone doing consulting across Java, .NET and C++, I am really seeing it coming back since ANSI C++ has picked up steam again.

I see it in projects related to IoT, AI, VR, big data,....

They are all polyglot projects with C++ plus something else, not C plus something else.


It is very hard to get a Java or Python programmer (what those AI guys want to use) to move to C, even if they HAVE to use something native. So C++ is where they end up.


This whole thread started about embedded development.

As noted, unless we are speaking about PICs with 8KB and similar, the majority of them can easily be targeted by C++, which is what Arduino and ARM mbed do.

Already in MS-DOS, on 640KB computers, using C made little sense.

When we needed performance, Assembly was the only option, because the code any compiler was generating in those days was average quality on their better days.

When performance wasn't that critical, then the improved type system, RAII, reference types, type safe encapsulations were already better than using plain C.

We even had frameworks like Turbo Vision available.

So if something like a PCW 512 didn't have issues with C++, a modern microcontroller can also be targeted by it, except for political reasons.

Developers that are against anything other than C, even if their compiler nowadays happens to be written in C++ (e.g. gcc, clang, icc, vc).


Ah, I forgot about the assembly.

Is assembly important in microcontrollers?


Sometimes. Usually you just write "normal" C, until you realise your single `sprintf` use took 20% of your ROM size. Or until you need some interrupt handler to take no more than N cycles. You probably don't switch to assembly at that point, but you definitely start checking what the compiler output is and where are the bytes/cycles wasted.

Actually writing assembly is more of a last resort.


Because of cost we use very constrained microcontrollers; every byte literally counts. In the end it really matters cost-wise (in mass-production embedded every cent counts as well; using a higher-spec MCU just costs more), and we had to rewrite from C to assembly to get a few more kB of flash back for features. C++ or Rust are generally not good for the cost of materials.


Writing assembly tends to be restricted to the bits where you need it - special function prologues for interrupt handlers, requiring a particular unusual instruction for something, time-sensitive or cycle-accurate "beam racing" code.

Reading assembly is more useful, especially if your platform's debugger isn't very good.


I would say it does. I learned microcontroller programming in assembly before I learned about C.


Let me ask it this way:

I'm a higher level programmer. Later in my career I happen to get down to microcontrollers, what would stop me from using C++?


The standard library (with all its duplicate code resulting from hardcore templating) will blow up your flash space usage significantly, to the point where you will run out of it sooner than you expect. You will spend time finding alternative standard libraries that are size-optimized and you might end up rewriting a lot of what you take for granted in your daily C++ usage. For example, the Arduino environment is C++-based, but it's not anything like on the desktop due to it not shipping an std::.

Your typical heap-happy usage will not go down well on a microcontroller, either. Having very constrained RAM makes heap fragmentation much more of an issue.


Then don't use the standard library. Don't even link it in.

> Your typical heap-happy usage

Huh? 1990s C++ was typically heap-happy, which is part of the reason Java looks the way it does. Idiomatic modern C++ uses the stack as much as possible. And one can use custom allocators.


Not a good fit for systems with small hardware stacks - PIC16 is still in use http://www.microcontrollerboard.com/pic_memory_organization....


A lot of people do. But they may as well not. They tend to write C-style C++.

Simply put, because you can't use the STL, or a lot of other C++ features, or only with a lot of consideration.

A whole swathe of the embedded world still cares about program size in bytes. There are some that don't, but they tend to be using Linux, and are at a higher abstraction level than many others in the industry. (Industry is kinda divided in half. Those who use tiny Linux machines, and those working with microcontrollers. It's a generalization, but generally fits.)

The stuff I work on day-to-day, usually has between 1-4kb for dynamic memory, and 8-16kb for the compiled program. That line is also usually a bit blurry, and you can move things between both at runtime, but at various costs.

With C++, you get tempted to use stuff like vector, which can blow your memory stack.

I generally work with C++, but it looks like C. I get a few things like implicit pointers, for free, but generally still have to end up making most things explicit.

But, unlike twenty years ago, I no longer have to dive into assembly unless the project is pushing its limits. The compiler tends to be "good enough".


It really depends how you use it. If you are approaching from the standpoint of "I am writing code on a microcontroller", which means no exceptions, no static initializers, probably no templates, definitely no rtti, then it will be all okay. If you approach it from the standpoint of "I'm just programming, how hard could it be? Let's just use std::", you're going to have a very bad time and very quickly.

C++, when used in that way, tends to do a lot of things behind the scenes. This is perfectly okay in a place where you have an operating system and a linker that have your back. On an embedded system none of this is guaranteed.


Static initializers and templates are great in a deeply embedded context, IMO.


For you yes, since you know how they are implemented at linker level. But get a few junior devs on your team, and you'll be wondering why hundreds of thousands of cycles run before main is called, or why some driver code is being entered before it is initialized, since someone decided to make a static singleton object for some driver and called some driver method in the constructor which will run before main(), not realizing how this stuff really works underneath.

So, C++ can be a wonderful tool in proper hands, but it is much easier to misuse than C in an embedded context.


If you let junior programmers fuck up your codebase that much, that's not the junior's fault. You should do code reviews.


I lead a team of six, including some junior devs.

How initialization occurred (static or otherwise) was just something we made an explicit check off item in code review.


Awesome. Glad it worked out for you. I hope it stays so after you leave the team or no longer have time for each code review.


Constexpr init can also be great in that case, to initialize ROM with complex objects.


Note, static initializers do work in embedded: with gcc you need to execute the functions between __init_array_start and __init_array_end somewhere in your program before using any static object.


I'm well aware of that. But a giant array of static constructors makes it hard to reason about when any piece of hardware is accessed, since a lot of stuff happens before main(). Especially for people who don't play with the insides of linkers for fun on weekends.


Sorry, that sounded like I was scorching you; I didn't mean it that way. Yes, I agree: when using hardware-initializing code in constructors I ended up with two types of 'constructors/inits': 1 - for object initialization, 2 - just for hardware, called manually :/.


Nothing. I've worked on Cortex-M4 projects in C++. It's nice in many ways. The people working on the project had a much more diverse background than the typical EE who learned C as an undergrad mentioned in another thread.


Not much. The Arduino programming environment is really gcc in C++ mode, but a different library.


It's more difficult. I used C++ in many projects for years and enjoyed working with it. You have to be very disciplined, though, and you need to know a lot about how it works under the hood to get it to play nicely, but it has its niche. Embedded is not its niche IMHO (and I've also done embedded, where we specifically chose to use C over C++).

Really, whenever you are doing any kind of embedded or real time project you are basically doing "resource limited development". The resource can be memory, I/O, CPU or any and all of that (or more). You need to be able to control exactly how it's used. C++ is often used to abstract you away from those things -- which is exactly the opposite of what you want.

C is a high-ish level language that is close enough to the metal that you can fairly easily understand the implications of what you are doing. C++ is not, and it's incredibly easy to build a monstrosity that chews memory - not just working memory, but application size too. Even dealing with name mangling is a surprising PITA when you are dealing with embedded - remember embedded means you often have to build your own tools because nobody else is using your platform ;-).

Like I said, I actually like C++ (or at least C++ of a couple of decades ago -- the language seems to have changed a lot since I last used it, so it's hard for me to say). There are a lot of times where I simply don't care about controlling resources to that level. These days there are a lot of other choices and I'm not sure that I would ever choose C++ for a project again, but definitely back in the day it was something I reached for quite a bit.

WRT Rust, I agree with the OP that the borrow checker is really nice. I recently spent some time playing with Rust to see how easy it was to implement higher level abstractions. One of the things I was really impressed with was how hard Rust slaps you when you try to do something that would explode your memory footprint. It still feels a bit immature to me, but it has tremendous promise (and if you don't mind working around the immaturity, it's probably fine to use at the moment).


About 15 years ago I did some PIC16 programming immediately after a lot of C++, so I tried working in a C++ style.

The first obstacle was that there was no C++ compiler.

So I wrote some very C++ style C: nice little structs with associated functions for mainpulating them, which took a pointer to struct as first argument.

The code did not fit in the PIC.

It turns out that the PIC16 lacks certain indirect addressing modes, so every access to a structure member from a pointer turns into a long sequence of instructions to do the arithmetic.

Oh, and this particular chip only allows you a maximum stack depth of 8, so you have to ration your use of utility functions. The compiler is bad at inlining so macros are preferable.

By the time I had finished it was an extremely C program with no trace of C++ style at all.

The situation has got a lot better but there are still limitations which will trip up the unwary. And one day someone's going to point out that they can save $0.50 on every one of a million devices if you use one of these tiny chips with no indirect addressing and limited stack.


I find it easier to use C and assembly on very constrained devices because I know what the output will be; with C++ it is less clear. If you write code that has to fit in 24kb, you need to think about what every instruction looks like after compilation, and that is far easier with C and (obviously) asm in my experience.


I’m not an embedded developer, but my guess is that if they’re not even using dynamic memory, I doubt they need or want anything that C++ has to offer.


That's more a matter of experience and attitude -- even simple things like reference types are nice. Also, templates offer a lot of abstraction power that can be used to model the hardware nicely, without sacrificing efficiency.

Many embedded programmers come from a background that doesn't expose them to those sorts of ideas though.


Can confirm, C++ has some nice features which I'd even like without malloc. OTOH I'm already horrified at the code quality problems pretty much every embedded shop faces. C++ would only make this matter worse.

As a software inclined embedded guy I also often think of what would be possible if we switched to C++. But then I think of what's probable.


Even without dynamic memory there's ton of useful things:

Namespaces, References, collection types (std::array, string_view, intrusive containers, etc.), RAII (release mutexes at scope ends), strong typing, proper encapsulation, interfaces, etc.


In general, tools for constraining complexity like namespaces and encapsulation are a lot less important at the project sizes you typically deal with in embedded systems. For the rest they mostly provide some benefit, but that's offset by the danger of a move to a much, much larger and more complicated language in an environment where many people writing code are primarily EEs and most C++ answers they Google will provide solutions inappropriate for embedded development. And I say this as someone who was one of those EEs when he started out.


RAII is absolutely killer. Managing real time priorities with lock_guards so that you can never forget to drop priority will win over almost any grizzly old firmware engineer.
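The same idea carries over to Rust via Drop; a hypothetical sketch, with `set_task_priority` standing in for whatever the RTOS actually provides:

  // set_task_priority is hypothetical; it returns the priority that was in
  // effect before the change.
  fn set_task_priority(_new: u8) -> u8 {
      0 // talk to the RTOS here
  }

  struct PriorityGuard {
      previous: u8,
  }

  impl PriorityGuard {
      fn raise_to(new: u8) -> PriorityGuard {
          PriorityGuard { previous: set_task_priority(new) }
      }
  }

  impl Drop for PriorityGuard {
      fn drop(&mut self) {
          // Runs on every exit path, including early returns.
          set_task_priority(self.previous);
      }
  }

  fn time_critical_section() {
      let _guard = PriorityGuard::raise_to(10);
      // ... urgent work ...
  } // priority restored here automatically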


There is still a lot to be gained by strong typing even in projects where everything is statically allocated.


Many people need in memory databases and linked lists (or binary trees) to hash information and sort it.

There's a case to be made for an OO style program where I create an object and give it a chunk of memory to manage a B-tree, so I can keep my memory from being fragmented, but using C++ for that is serious overkill.


I can see people arguing for object oriented over procedural.

As I type that, I realize I've recently learned to love Go, so maybe it is my bias.


A lot of it is due to the fact that many of these chips use ancient compilers, so the C++ support isn't that good.


I don't know why. But it's true.


Here's a rant by Linus Torvalds. Take the politics and snark with a grain of salt. His technical arguments, however, are spot on.

http://harmful.cat-v.org/software/c++/linus


I'm not sure how much water his argument here holds anymore. C++ has changed A LOT since 2007; idiomatic C++11 is an extremely different language from C++03, and C++20 is almost unrecognizable to an early C++ developer.


I bet you $5 if you ask him, he'd probably say the exact same thing.


He writes C++ now, so I imagine he'd say the language has its place. https://en.wikipedia.org/wiki/Subsurface_(software) (He'd probably still say C++ is inappropriate for the Linux kernel, though.)


I was going to honor my bet here, until I did a little googling and found this during the discussion of when they ported subsurface to Qt.

"A word of warning: Linus has very strong feelings about all the things that are wrong with C++ and at times has been known to be less diplomatic than me when explaining his point of view... :-) But he made a clear statement that he is interested in seeing this port happening, as long as most of the program logic that is not UI code stays in (quote) "sane C files". So please keep that in mind as we drive this further."

http://lists.subsurface-divelog.org/pipermail/subsurface/201...


The type system is orders of magnitude better than C, which can be used to reduce bugs.


If anyone out there is an embedded developer curious about how Rust works in this space in practice, or a non-embedded developer curious about how to develop on embedded, there's a really fantastic (and free) ebook, "Discover the world of microcontrollers through Rust" (https://www.reddit.com/r/rust/comments/80doqg/discovery_disc... ), by Jorge Aparicio (the premier guru of embedded Rust development), and recently updated for 2018.


Having Rust support embedded is only half the story: we need to lobby/advocate for chip makers to support Rust. The article doesn't directly call this out, but I think it's important for Rust to truly succeed in this space.

Most embedded systems are fairly closed ecosystems built on vendor-specific stacks, and most professional embedded engineers wouldn't want to use Rust until the vendors support it.

This isn't to say the hobby world (ala Arduino/Raspberry Pi) wouldn't benefit from Rust, but I think the long-term viability depends on the ARMs, STs, TIs and Renesas' of the world starting to embrace Rust. To date I haven't seen this happen (although I'd love to be proven wrong!)

Edit: just noticed the recommendation to use a Blue Pill for Rust dev. They're cheap on eBay (just picked up 2 a few days ago)

https://www.ebay.com/itm/STM32F103C8T6-ARM-STM32-Minimum-Sys...

Looks like here is a supported Rust BSP, too:

https://github.com/japaric/stm32f103xx-hal

Anybody have any experience with this?


I know this won't assuage your concerns about officialness, though it's still worth mentioning that Rust does have community-maintained HALs (https://github.com/japaric/embedded-hal ) and tools to generate APIs from official vendor-provided SVDs (https://docs.rs/svd2rust/0.12.0/svd2rust/ ).
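For what it's worth, the embedded-hal traits are what make third-party drivers portable across those vendor HALs. A rough sketch (trait paths and method signatures have been shifting between embedded-hal releases, so treat this as illustrative rather than exact):

  use embedded_hal::digital::OutputPin;

  // A driver for some imaginary latch IC, generic over whichever GPIO pin
  // type the vendor HAL (stm32f103xx-hal or any other) happens to provide.
  struct Latch<P: OutputPin> {
      enable: P,
  }

  impl<P: OutputPin> Latch<P> {
      fn new(enable: P) -> Self {
          Latch { enable }
      }

      fn trigger(&mut self) {
          self.enable.set_high();
          // ... clock the data out ...
          self.enable.set_low();
      }
  }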

There's also a few boards out there that do support Rust natively: Hail (https://www.tockos.org/blog/2017/introducing-hail/ ) and Tessel (https://www.tessel.io/ ).


Time to start lobbying Arm. For instance, I think we already missed a big opportunity to have Arm develop its Firmware-M in Rust. That firmware is going to be used by billions of IoT devices in the future. Although I'm sure Arm is trying to make the C code as safe as possible, at that scale there will be plenty of bugs, especially since OEMs will get to use customized versions of it, too.

https://developer.arm.com/products/architecture/platform-sec...


The last time I used ARM supplied drivers they were nice to get things started quickly, but they were of terrible quality (at least for the smaller Cortex M microcontrollers).


> Having Rust support embedded is only half the story: we need to lobby/advocate for chip makers to support Rust.

So one thing I noticed from the micropython world is that chip makers are not going to support micropython. But there are third party companies that are supporting python on specific microcontrollers.

One problem the chip makers have right now is language fragmentation. Rust is competing against Go, Python, Kotlin/Native, Swift, JavaScript, and others. Rolling a toolchain is not free, because it needs to be updated over time for optimizations and code fixes.


Kotlin Native might have a chance, and it looks like they are targeting embedded. But the rest aren't good fits: Anything with a GC is a no-go for serious MCU development. You just can't optimize the code deep enough. Swift's reference count system might be interesting... would be curious if you could target Swift for something like a M4 or M3.

Rust is the only low level systems language that really has a chance to compete with C/C++ in the space: it's mature, has a great community that's sizable, etc.

(I'm talking about using Rust for professional embedded projects... I LOVE micropython as a hobby)


> Anything with a GC is a no-go for serious MCU development.

That's a really good point.

Edited to add:

But now that I think about it, Java was originally a micro controller language. Sun decided it was better for the enterprise.

https://en.wikipedia.org/wiki/Java_(programming_language)

I'm not sure if they included the GC in the original version or not.


"Vendor support" for programming tools is cancerous and hurts development. We need to actively push away from such idiocy.


You're a hardware vendor. You've created a new embedded CPU chip. Since it's new, gcc doesn't support it yet - and may never, if it gets no sales. But nobody's going to buy it if there isn't a C compiler for it. So you, the hardware vendor, kind of have to do vendor-supported programming tools, or your new chip will never sell.

So what's your alternative to vendor-supported tools? Only use the standard chips that are already supported? That doesn't sound like progress. Expect people to design in a chip where there are no tools? That's not going to fly. Expect gcc to support every brand-new chip, whether or not it has any sales? The gcc people seem unlikely to see it your way.

So, what is your alternative?


Specifically what level of support are you expecting from ARM, ST or TI?


Ideally I'd like to see Rust used to ship in production systems. I do a lot of IoT, and I don't see any of my customers green-lighting Rust until the silicon guys say "yes, we'll support Rust" and provide official HALs and support.


What about vendors wrapping their HALs for Rust with bindgen (or an improved version)? Will that be enough?


I'm completely in this boat at the moment. We are doing a hardware refresh of our embedded platform and I'm looking at the feasability of using Rust.

I'm at the point where I think it is close to being viable, but it seems very young. Most likely we will go with a platform that can support Rust and write the low-level modules with the intent of using Rust, but not use Rust just yet, and instead do a side project to assess the viability of doing as much of the software in Rust as we can (and get the devs learning Rust). I am what I like to call a multi-stack developer, so I'm super comfortable with a lot of languages and platforms, but the embedded devs I work with are microcontroller-focused C devs and it will take some time to convince them; conceptually they like the idea, but as soon as they try to write something serious they are going to feel frustrated and likely make a mess.

But Rust has a LOT to offer the embedded world.


Keep in mind that at some level, something has to be unsafe, unless you've got hard mathematical proofs that your code works as compiled.


Doesn't mean you shouldn't try to lessen the amount of code that's unsafe. That way you have at least a fighting chance at getting your unsafe code bugfree.
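Right, and in practice that usually means wrapping the one unavoidable unsafe operation in a small safe API. A hypothetical sketch with a made-up register address:

  use core::ptr;

  // Made-up register address, for illustration only.
  const LED_CTRL: *mut u32 = 0x4001_0c0c as *mut u32;

  // Safe wrapper: the invariant (level capped at 100) is checked in safe code,
  // and the single unsafe block below is the only raw hardware access to audit.
  fn set_led_brightness(level: u8) {
      let level = u32::from(level.min(100));
      unsafe {
          ptr::write_volatile(LED_CTRL, level);
      }
  }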


One definition of "embedded" is: does the user treat it like an appliance or like a computer? If it's the former, it's 'embedded' even if it uses a reasonably powerful processor or does not have hard real time requirements.

I use Erlang for that kind of thing and it works well. Rust is, I think, more performant, so if that works for you and what you need, go for it.


What do you think of Nerves?


I recall hearing about it a while back; looks like it has made progress? Seems worth a look!


Any progress with TI's C2000 series in Rust?


Oh, the TI C/C++ compiler is such a joke, not even C99 supported. I was thinking about the Rust-to-C compiler [1], but that requires C11 support.

1. https://github.com/thepowersgang/mrustc


C99 is supported but it is not the default.


I hate, hate, HATE papers or talks or anything of this nature that begin by justifying the obvious to the knowledgeable! In this case, "Why does embedded matter?" Sheesh.

Then there is that law of titles with a question where the answer is always, "no".

>Yes. As long as it supports your hardware and you are willing to put up with a less stable toolchain and less mature ecosystem in exchange for the safety guarantees.



