I agree it's the standard and the only thing that actually works (author's words), but writing and dealing with C (for embedded) is still a pleasure for me. I'd be desperate if I were forced into hugely different paradigms because of pointer problems or insert-your-C-rant-here. C is not going to be replaced on embedded any time soon.
I’m a C developer working in the SP routing space.
Personally, I do not enjoy debugging memory corruption issues. I especially would not enjoy doing this under customer pressure...
Of course, in some spaces, C is basically the only option. But when you have options, and you’re working with a large enough codebase, C is a terrible choice.
I have an anecdote to share. I work in the embedded space for a living. A web developer (which is quite funny) from another team somehow convinced our director to use Rust for a critical process that involved a lot of concurrent processing. OK, I said, and began developing that process in Rust.
I estimated that it took me about 10x as long to implement things as it would have taken in C. That could well be because I'm not a Rust expert at all. That's fine, though, because I will happily invest more time in programming if it saves me hours of debugging later. Plus, the claim that "if it compiles, it works" was enough to make me willing to fight the compiler for hours, so long as it then gave me no trouble at runtime.
Fast forward a week: we had that binary deployed to a non-critical set of our field devices. A few days later, we got reports of that particular binary crashing. I logged into one of those devices and fetched the logs. The binary was reporting a panic on an MPSC channel's tx send. There was nothing useful in that error message that would point me to the root cause, and I had no tools to attach to a running process because I was on a device with an ancient kernel.
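For readers who haven't hit this: a panic on a tx send typically comes from unwrapping the result of `send` after the receiving end has gone away. The snippet below is a minimal reconstruction of that failure mode, not the actual code from the device.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();

    // The consumer exits early (a bug, a shutdown race, an earlier panic, ...).
    let consumer = thread::spawn(move || {
        let _first = rx.recv(); // take one message, then drop `rx`
    });

    tx.send(1).unwrap(); // fine: the receiver still exists
    consumer.join().unwrap();

    // The receiver has been dropped, so `send` now returns Err(SendError(..)).
    // Unwrapping it is what produces the terse panic seen in the logs: the
    // message says the send failed, not why the receiver died.
    tx.send(2).unwrap();
}
```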
To fix the situation "temporarily", because the customer was getting infuriated, we redeployed our old C binary. It has been there ever since. I basically now say "fuck off" to anyone who tells me to use Rust, because it does not simply work just because it compiles.
"If it compiles it works" is just not true, just a very bad description for a true phenomenon. If there weren’t people repeating that seriously, you’d be attacking a strawman. People do say that, but that’s on those people, not Rust.
>Do I really know how many cycles / how much stack it takes to do std::sort(a.begin(), a.end()); on that specific platform? No, so I cannot trust it.
I also don't know how many cycles it takes for my implementation of quicksort apart from checking the output of a specific compiler and counting instructions. C is not, was not and will never be a portable assembler.
> I also don't know how many cycles it takes for my implementation of quicksort apart from checking the output of a specific compiler and counting instructions.
On any modern out-of-order CPU, that doesn't get you close to determining the dynamic performance. Even with full knowledge of private microarchitectural details, you'd still have a hard time due to branch prediction.
The embedded/micro space is not out-of-order, broadly. Memory access latency may be variable, or not, but I would charitably assume OP knows their subject material and is either using cycles as a metaphor, or actually works with tiny hardware that has predictable memory access latency.
On most (small, I'm not talking about mini-Linux) embedded systems, instruction counting will tell you how long something takes to run. In fact, the compilers available are often so primitive that operation counting in the source code can sometimes tell you how long something will take.
Depends a lot on the compiler and target arch. You'll miss out on a lot of stack accesses, or add too many. You don't get around looking at the final executable if you want good results. And for more complex targets, in the end you need to know what the pipeline does and how the caches behave if you want a good bound on the cycle count - assuming, of course, you're on anything more complex than an ATmega, for which op counting might be enough. I work in the domain; lots of people do measurements, which only give a ballpark and are risky because you might miss the worst case (which matters for safety-critical systems, where a latency spike at the wrong moment might be fatal). Pure op counting is also bad since the results grossly overestimate (e.g. you always have to assume cache misses if you don't know the cache state, or pipeline stalls, or DRAM, or...). Look at the complexity of PowerPC; that should give you a rough idea of what we're usually dealing with (and yeah, I'm talking embedded here).
To me that "sometimes" feels like "I can wrestle some bears with my bare hands, e.g.a Teddy bear" ;-)
"Most" embedded ARM? Cortex-A8 and smaller do not have OoO execution. Cortex-A9 is a 32-bit up-to-quad core CPU with clock of 800MHz-2GHz and 128k-8MB of cache. That's pretty big. I guess a lot of this is subject to opinion, but I don't think smartphones with GBs of RAM when I think of embedded systems.
All of Cortex-M is in-order, and only the M7 -- still somewhat exotic -- has a real cache (silicon vendors often do some modest magic to conceal flash wait states, though).
Alignment requirements are modest and consequences are predictable. Etc.
About the most complicated performance management thing you get is the analysis of fighting over the memory with your DMA engine. And even that you can ignore if you're using a tightly coupled memory...
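On parts that simple, the practical complement to counting instructions is to just measure: read the core cycle counter around the code under test. The sketch below assumes an ARMv7-M (Cortex-M3/M4-class) device with the standard DWT unit; the addresses are the architectural ones (check your part's reference manual), and a real project would more likely go through a peripheral-access crate such as cortex-m than raw pointers. As noted above, a measurement like this only gives a ballpark for the path you happened to exercise, not a worst-case bound.

```rust
use core::ptr::{read_volatile, write_volatile};

// Architectural ARMv7-M debug register addresses.
const DEMCR: usize = 0xE000_EDFC;      // Debug Exception and Monitor Control Register
const DWT_CTRL: usize = 0xE000_1000;   // DWT Control Register
const DWT_CYCCNT: usize = 0xE000_1004; // DWT free-running cycle counter

/// Run `f` and return roughly how many core cycles it took (the counter wraps at 2^32).
pub fn measure_cycles<F: FnOnce()>(f: F) -> u32 {
    unsafe {
        let demcr = DEMCR as *mut u32;
        let ctrl = DWT_CTRL as *mut u32;
        let cyccnt = DWT_CYCCNT as *mut u32;
        write_volatile(demcr, read_volatile(demcr) | (1 << 24)); // TRCENA: enable the DWT unit
        write_volatile(cyccnt, 0);                               // reset the counter
        write_volatile(ctrl, read_volatile(ctrl) | 1);           // CYCCNTENA: start counting
        f();
        read_volatile(cyccnt)
    }
}
```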
It's slowly getting there, but no, right now there is a very limited number of target platforms that are supported, and the ecosystem is still very small and immature. You have to remember that there is a huge number of microcontrollers out there, with exotic architectures you have never heard of, which LLVM definitely doesn't support.
I'm not sure embedded Rust would even meaningfully exist without the work of Jorge Aparicio.
Yeah. Even on the best-supported platforms it is very immature. If you want to use embedded Rust you kinda have to let the language drive your choice of platform, which isn't ideal.
I wouldn’t use it at work yet, but it’s coming along.
* Your microprocessor has a C compiler and standard library, as does every processor you might ever switch to. All the hardware documentation that isn't tables in a PDF will be in C.
* Your target's static analysis tools and interactive debuggers will all support C.
* Every RTOS and embedded library/filesystem/whatever will support (and likely be written in) C.
* All experienced embedded developers are fluent in C.
* Nobody ever got fired for choosing C for an embedded project.
Which, in a way, is an advantage. I know (much of) what to look out for. There are tools that can help me with some of those issues. There are techniques that avoid some of them, and there are people who are expert in many of them.
But if I pick some other language, it won't have those problems. It will have other problems. (There is no language that does not have problems.) I won't know what to avoid doing. There may not be tooling to help with them. The techniques for avoiding them may not be widely known. I may not be able to find people who know how to handle them.
To me, "well known problems" may be better than "not well known problems". More predictable, at least. The "not well known" problems have to be significantly better to be worth it. They probably have to be proven significantly better. That means either that someone else has to prove them better, or else I have to have a project that doesn't matter much that I can use as a testbed.
1) There is no need to. C already has all we need to build our systems. The rest is seen as overhead/over-complication.
Regarding Rust, I try to keep up to date on its progress. Unfortunately, most of the diseases that Rust cures are not much trouble in embedded. My last UB, memory leak or loose pointer happened years ago. It will happen again, and when it happens, I'll debug it. That's it. I'm not afraid of UB or dealing with pointers, even if there are 10K Rust users trying to FUD me. I know what I'm doing. Every embedded/kernel/driver developer knows what they are doing. When shit happens, that's it, no big deal. You plug in your debugger and solve the issue. It's not a nightmare that chases us in the middle of the day.
FUD alone is not enough for switching. So why should I start thinking that a variable cannot change, because it's a constant (wasn't it a variable?), unless it can mutate, so it's a constant variable that can change because now it's mutable? (See the sketch below for what this refers to.)
Or constrain myself into borrow-checker torture for a thing I can do in a couple of instructions?
WHY?
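For anyone not following the jab above: it refers to Rust bindings being read-only unless you opt into mutation. A minimal sketch, purely for illustration:

```rust
fn main() {
    let x = 5;     // immutable binding: not a compile-time constant, just read-only
    let mut y = 5; // mutable binding
    // x += 1;     // rejected by the compiler: `x` was not declared `mut`
    y += 1;        // fine
    println!("{} {}", x, y);
}
```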
2) This is not a language problem; it's about solving a programmer problem with a language. If C = math, then you cannot make math simpler/better just because today's mathematicians are sloppier, or because bosses pressure people to deliver crappy products.
I'm not a genius. I'm far from it, and if any seasoned C programmer challenges me I'll probably run away. But embedded/kernel/driver development is harsh, so if a developer thinks that he/she cannot make it because of the language, then it's mostly about searching for an excuse. Time to change jobs.
The key is to remember that a lot of people did (and still do) a lot with so much less, for 40 years now. It's not a language problem. People have to learn to deal with it.
I was there too 20 years ago, when every C++ developer was afraid they would lose their job because of C#. It never happened. C++ is still one of the most used languages.
3) In my case, there are official libraries from manufacturers you have to use. Sometimes receiving customer support depends on whether and how you use those libraries. All those libraries are in C. All the support is in C. All the examples are in C.
Yes, I know there is that engineer with a GitHub repo containing a library that works fine with that STM32 for that specific language, which is now getting embedded support, so in 5 years we could maybe put something in production. But not for now.
"Every embedded/kernel/driver developer know what they are doing"
If that were true, we wouldn't see more memory bugs found every time academics test a new analyzer or testing tool on open code written in C or C++. Microsoft said 70% of the problems they saw were memory-safety issues. Linux has a ton of them. Even OpenBSD has many security fixes for memory safety. Your claim is mythical in the general case, even if some individuals working on small codebases can pull it off.
> Every embedded/kernel/driver developer knows what they are doing.
Yet we still see security issues in all of those. I'm not saying that everything should be rewritten in a different language, but C developers saying "I'm a good developer, all those safety mechanisms would just hold me back" doesn't hold water considering all the security vulnerabilities we see that would have been prevented if they had used a language with better safeguards.
This is a common way of thinking about it, but nobody really knows. It's purely a conjecture. There is no data to back it up, and it just appeals to common sense.
Should everybody drop C/C++/whatever and rush onto the Rust train because Rust people have a conjecture?
I ask the opposite question. What would happen if the only programming language left were C? Wouldn't we become better programmers and raise the bar so high that the bug count drops to 0?
> What would happen if the only programming language left were C? Wouldn't we become better programmers and raise the bar so high that the bug count drops to 0?
Given that C was the dominant programming language for UNIX applications for over a decade, I think we can look to history for an answer to this question. And I believe the record shows that the answer is "no."
>Should everybody drop C/C++/whatever and rush onto the Rust train because Rust people have a conjecture?
The way these things usually work in practice, the evidence that a new paradigm improves things usually builds up slowly. There will never be a point at which someone proves mathematically that C is obsolete. Instead, the gentle advantages of other options will get stronger and stronger, and the effectiveness of C programmers will slowly erode compared to their competition. At first only the people who are really interested in technology will switch, but eventually only the curmudgeons will be left, clinging to an ineffective technology and using bad justifications to convince themselves they aren't handicapped. Is Rust the thing that everyone except the curmudgeons will eventually switch to? Who knows, but if you don't want to end up behind the industry then it might pay to try it out in production to see for yourself. If you don't make room for research and its attendant risks you will inevitably fall behind.
I'm sorry you see things in terms of competition and curmudgeons. But I see your point; it's just another type of FUD: adopt Rust, or you'll be left behind as a curmudgeon.
Not exactly. I'm saying that if you don't adopt something, where that something is whatever the industry switches to after C, you will eventually be left behind. Of course it is also possible (and likely for many people) to die or retire before that happens. It's not like C is going away any time soon.
> What would happen if the only programming language left were C? Wouldn't we become better programmers and raise the bar so high that the bug count drops to 0?
Is this a rhetorical question? Clearly the answer is no.
It seems like there is a big difference between a "mathematical certainty that certain bugs will not occur" if you play by the rules, and just being a better programmer so that you don't write errors. Not that you won't have bugs in Rust, but it seems like we should move towards having our tooling do more of the heavy lifting in ensuring correctness. I don't believe you will ever have the bug count drop to zero. I do believe in mathematical certainties, though.
It says something about our profession that all of our products come with a legal document that effectively says it's not our fault if the product doesn't do what it's supposed to do.
We should just outlaw those damn statements (and binding arbitration for consumers) and see how the market naturally evolves. Don't force standards on programmers, don't regulate in some novel way, just get rid of the stupid disclaimers and let the old-fashioned legal framework return to primacy.
A language is a tool and if you're saying the problem is with people using the tool wrong, it's only half true. New tools are being invented to handle problems both old and new and discrediting them with what amounts to 'get off my lawn' is counter-productive. There are problems that are impractical to solve without tools designed for them and safe pointers are a great example. Yes, you'll hook up your debugger and yes, you'll fix the problem. I once fixed a dangling pointer problem after two 12h debugging sessions and while it felt great, I'd rather not do that again. I know there are people hunting these for months, so you could say I haven't seen anything yet, and I'll agree - but it's an argument in favor of better tools, not the other way around.
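To make the "safe pointers" point concrete: the class of bug described above looks roughly like the sketch below (an illustration, not the actual bug), and a borrow checker turns it from a multi-day debugging session into a compile error.

```rust
fn main() {
    let dangling;
    {
        let buffer = vec![0u8; 64];
        dangling = &buffer; // borrow something that is about to be freed
    } // `buffer` is dropped (freed) here

    // In C, reading through the pointer at this point would be the classic
    // use-after-free you meet in the field. Here the compiler refuses it up
    // front: uncommenting the next line fails with
    // "error[E0597]: `buffer` does not live long enough".
    // println!("{}", dangling[0]);

    // (As written, this compiles, modulo unused-variable warnings.)
}
```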
Of course it isn't. It's portable (-ish) assembler. The only thing replacing C will be even more in the C spirit. Adding more checking in system-programming-friendly ways is the only way I can think of for improving the situation.
I know people often refer to C as portable assembly, but really it's not. That sounds like the sort of thing that gets repeated by people who know neither C nor assembly. C is incomparably higher level than assembly, and it lacks the quirks of modern instruction sets, which are focused on specific odd things modern processors do well.
If you want the same runtime environment of asm without the tediousness of asm development, and a reasonable optimizer to save you more tediousness, you end up at C. I count that as close enough.
I hope that with IoT we get some commoditization and standardization in this space. Though ARM doesn't give me much hope. I don't get why there couldn't be 1-2-3-5 standard architectures and 10-20-50-100 hardware configurations that cover the full spectrum of embedded configurations.
We've had 32-bit x86 CPUs since 1985; surely we could produce a 100MHz one cheaply enough that nobody would need to use 8-bit ones in 2020? I know that I'm just daydreaming and companies are stingy...
People have more or less converged on ARM for large things and 8051 for tiny things. I suspect the remainder are hiding in culturally isolated niches; I don't really know why people are still targeting PIC16, for example.
There's also the DSP subset where people need to write inner loops in highly platform-specific assembler. e.g. my employer Cirrus has our own "Coyote" architecture for which we have a C compiler: https://statics.cirrus.com/pubs/proDatasheet/CS4970x4_F1.pdf
"Cheap" isn't just about BOM price, the power consumption matters too.
So basically an Intel Pentium from 1994, i.e. from 26 (!) years ago. Size: 90 mm², maximum power consumption: 10W, price at launch: $699.
Buuuut: production process 600nm.
Today every Intel CPU uses a 14nm production process, but let's go for a cheaper option and "use" the 22nm.
So the gate size is ~30x smaller today. And I know that the relation between gate size and die size is not linear (a 2x smaller gate size usually leads to a die size that is smaller by more than 2x), but let's go with 30x, to keep things simple.
For the power I have no idea, but I'd assume that the power consumption would go down at least 30x.
Price: hard to say, but considering how hardware evolves, definitely more than 30x cheaper :-)
So an Intel Pentium built with modern technology could look something like this:
Size: 3 mm² (probably less), maximum power consumption: 0.3W (probably way less), price at launch: $30 (I'd argue it would be closer to $0.30 :-) ).
I could be way off in the weeds here...
My guess is that it's just a matter of existing tooling, expertise, human resistance to change, and companies wanting to a) not risk anything and b) nickel-and-dime everything.
I'm arguing for an x86 because that way you could throw any kind of modern tooling at it. ARM or MIPS would probably be better candidates.
Then you need external RAM, storage, a video card or companion chip for a display, a chip for holding the BIOS, and a power supply (if not many) for all that. x86 as-is is not good enough for my wristwatch.
You can use a small ARM with integrated flash, RAM, and an LCD controller. Does it need an MMU for Linux? There is a family of products for that. Or use another family for bare-metal systems.
You can see how the thing starts to complicate regarding architectures/platforms.
Well, I just used the Pentium example because it was easy to source the data :-)
But my main point was to dump super old and limited architectures when these days we can economically use modern architectures we use everywhere else (x86/ARM/MIPS, whatever). If we wanted to, we could literally hoist designs from 20+ years ago and use newer production technologies to make them embeddable.
Using mainstream tech stacks is very empowering. Updated compilers everywhere, many programming languages and stacks with huge communities everywhere, modern debugging tools at low/no cost, fast debug cycles, etc.
But there's little interest in this because which self-respecting hardware maker would commoditize its own products? :-D
Hoisting old designs is unnecessary when you have x86 processors like Intel Atom, which are exactly that, and all the versions of ARM Cortex-R3 and R0.
The problem is not the processor, it's the lack of an MMU and/or the special DMA or interrupt engines and special GPIO. No kernel is likely to already be ported to such custom infrastructure, and if you have a very tiny flash, few of the ones that could be will fit. (Say, targeting L4, porting Fiasco or OKL4.)
Oh, I must have missed that. They seem a bit too expensive for what I'm saying, I imagine these things have to cost cents to be worthwhile (50-70 cents or so).
Every single one of your projections is still way off compared to the lowest-cost embedded systems. $0.30 is expensive when people are really, literally, counting pennies.
Lots of people don't enjoy using C for various reasons, but generally programmers don't get to choose what language they work in; they get paid to work in whatever language is needed.
Whether programmers like it or not for non-personal projects doesn't actually matter too much. If you don't like using C, then don't take jobs programming in C. Just don't whine when that makes it more difficult to get a job.
It does matter if the reasons why they do not like it are good.
Decision makers should listen to such input instead of deciding on programming language by fiat or convention. Including hardware choices. Of course this should be weighted by market availability and cost of both hardware and programmers.
Typically the reason C is used is that there's no other toolchain available for the embedded device in question, except an assembler, and nobody wishes to invest in building or extending one. Those devices often do not have a kernel with POSIX-like syscalls against which to adapt an existing full-blown libc, nor do they provide one.
Some embedded chipset libraries are also written in C sprinkled with copious assembly, so the language that's used with these has to be very easily interoperable.
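For what it's worth, calling into such a C vendor library from another language is usually just an FFI declaration; the sketch below shows what that looks like from Rust, with `vendor_spi_write` as a made-up stand-in for whatever the chipset library actually exports. The interop itself is cheap; the friction is more about toolchain availability for the target, as described above.

```rust
// Hypothetical vendor entry point, assumed to come from a C static library.
extern "C" {
    fn vendor_spi_write(buf: *const u8, len: usize) -> i32;
}

/// Thin safe wrapper over the (hypothetical) vendor call.
pub fn spi_write(data: &[u8]) -> Result<(), i32> {
    // SAFETY: we assume the vendor code only reads `len` bytes from `buf`
    // and does not keep the pointer after returning.
    let rc = unsafe { vendor_spi_write(data.as_ptr(), data.len()) };
    if rc == 0 { Ok(()) } else { Err(rc) }
}
```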
>It does matter if the reasons why they do not like it are good. Decision makers should listen to such input instead of deciding on programming language by fiat or convention.
This is rarely the case. Almost no programming task is a free choice made out of thin air, except at start-ups or small software companies. Far more often than not you have to stick with the existing language for compatibility, or to minimize maintenance costs, or, in short, because someone else already decided on the language before you arrived.
>Including hardware choices. Of course this should be weighted by market availability and cost of both hardware and programmers.
In those rare cases when you get an opportunity to make such a choice, sure. However, those aren't the only criteria either.
This is entirely orthogonal to the fine article and basically just a statement of personal preference. Having preferences is fine, and sure, if you like C you can take umbrage at some of the more flowery characterizations in the README, but it's not really the point of the language. There are lots of neat ideas in TFA that would be interesting to discuss; no one is interested in litigating the fate of C on embedded for the Nth time.
> I agree it's the standard and the only thing that actually works (author's words),
C++ also "actually works" and, thanks to its much greater ability to support fluent abstractions, ends up being safer in practice than raw C code. You can use C++ for low-level system components. Did you know Android's libc is actually written in C++?