This article is awesome and has a ton of great knowledge, but it's a little dismissive of OpenEmbedded, and if you click through to the previous thread you'll see a lot of people come to its defense.
I have been doing embedded development for 30 years, and Yocto is the best build system that I have used for managing large embedded systems. If you need to professionally manage multiple distributions across a menagerie of different host and target machines, there is simply no other meaningful choice that operates at its scale.
I first encountered BitBake before either the OpenEmbedded or Yocto projects existed. Since then, I have done professional embedded systems consulting for dozens of platforms, using Yocto almost exclusively in recent years. I have done everything with it, from simple board bring-ups to implementing re-usable BSP layers used by multiple product families. I might be a little biased here, but I have used all of the other popular embedded build systems too.
Other choices still have their place for smaller and simpler projects, just as one might choose something smaller than Linux for the kernel. However, Yocto and Linux have attracted unparalleled open source development and support resources. Other projects simply cannot hope to match their established output. These projects will not go away any time soon, which means LTS releases and stable infrastructure to host them, regardless of individual players or projects coming or going. What other choice exists with that kind of backing?
For commercial projects that grow beyond a certain scope (i.e. most modern full-featured embedded systems), other choices simply do not match its existing capabilities. Of course, you can always fork a full server/desktop Linux distribution like Debian, but dressing up a standard distribution in embedded clothing often requires more customization. That comes with development and maintenance costs, though I admit the lines do start to blur at a certain point.
The BitBake/OpenEmbedded/Yocto/Poky ecosystem is huge and poses a long and steep learning curve, but it offers customization and extensibility unparalleled by any other offering that I have had the (mis)fortune of using in my career. The same is true for the Linux kernel ecosystem, right? Them's the trade-offs we make.
>I have been doing embedded development for 30 years
What's your take on the various *BSD distros in the embedded space? A lot of BSD proponents like to talk about the stability and minimalism (a proxy for resource-efficiency I guess) of BSD in comparison to Linux.
Is there any reason to reach for BSD in an embedded application these days?
The BSDs fall into the class of server/desktop systems. They would be a great choice for certain classes of products (e.g. routers) where there exists strong feature alignment, but not something I would consider suitable for producing my usual minimal custom embedded systems.
Generally, support for esoteric hardware will also be better in Linux, so BSD would likely require porting or writing more drivers when bringing up a custom board.
That said, BSDs are not my strongest area, so I would be curious to see others' experiences and opinions.
I have been doing embedded Linux for 22 years, and sadly OpenEmbedded/Yocto has been the worst build system I have used. It's the Gentoo of the embedded world (and where is Gentoo nowadays?): over-engineered for most use cases, unless you work at Wind River or Amazon, deal with lots of different boards from different vendors, and have unlimited resources to cope with the complexity.
If I get a codebase that uses Yocto/OpenEmbedded, the first thing I do is find the relevant patches and convert them to Buildroot/Debian/OpenWrt/LTIB (old stuff)/etc., depending on the hardware spec. I have wasted too much time on bugs in OpenEmbedded that take forever to debug: layer over layer, override over override in those meta-layer mazes. It always feels like you're fighting OpenEmbedded/Yocto itself instead of getting your work done. I have since been avoiding OpenEmbedded/Yocto at all costs.
The same could be said of Gentoo. In my experience over two decades and at least five projects involving Yocto (back then it was OpenEmbedded), for the same project Yocto takes about 5x more time and effort by my measurement. Yes, adding stuff is fine, and building is the slowest part but I can wait; however, if something goes wrong, be prepared to debug OpenEmbedded/Yocto itself for days. This does not happen with the alternatives: you debug your own code, not the framework.
It's over-engineered and overrated: great for consultants charging hours, terrible for companies (except the big guns) that need to deliver products fast.
Sorry, I have a strong opinion here, but it has just wasted too much of my time in the past.
Buildroot is definitely much simpler, and more limited. That's a plus and a negative, depending on what you're doing with it.
Do you have one board that you need to bring up in isolation? Buildroot is great. It's easy and fast. Do you want to share configuration across multiple designs? Buildroot hits its limits pretty quickly there. And Yocto really starts to shine.
I worked at a company that used Buildroot to support 20 different boards, all in the same repo. The boards shared some level of common configuration across the whole repo, and groups of boards also shared some common configuration.
The entire thing was managed by the C preprocessor as a pre-Buildroot step, and it was a Gordian knot of emergent complexity.
The catch is that Buildroot doesn't give you any tools to enable shared configuration, none at all. Buildroot doesn't know how to share build-products across multiple images, and doesn't even know how to _build_ multiple images or keep them separate without having to clean your workdir. It doesn't even track file dependencies, let alone configuration dependencies. It's terrible at knowing when a package is stale.
Yocto is much, much better than Buildroot for handling these kinds of use cases. Images are just recipes, like any other kind of recipe. You can build 8 different images by just asking for each of them by name. Yocto won't rebuild anything that can be shared, and your images can be subclasses of each other which just add or remove a few packages or a bit of configuration.
Yocto is also better at tracking/managing customizations. In Yocto, it's easy to express something like "use the upstream OpenSSH recipe, but do something custom for the config step, and add an extra post-build step, and add these 3 patches, but only add patch 3 on some designs, etc".
In Buildroot, you have to do this in a global namespace using macro-processing in GNU Make. You have no control over the parse order of included Makefiles relative to the upstream files. It's messy and difficult to modify anything, and you have to understand each recipe's quirks and hooks. And the only programming language you have is GNU Make.
In Yocto, there is an inheritance system that lets you use .bbappend files to override the behavior of any recipe you want. You can also stack multiple append files across different layers. If you need something exotic, you can express it as parse-time Python. Yocto is more complex up-front, but it pays off IMO.
As I said, Yocto might be worth the pain when you are dealing with multiple boards or vendors, but the majority of users do not do 20 boards in parallel.
For network gear, go OpenWrt; for general SPI-flash-based boards, use Buildroot; for anything with an SD card and 256MB+ of RAM, just use Debian; if you're a BSP vendor like Wind River, then take on Yocto (which is why Intel 'owned' Yocto back when it had also bought Wind River).
Yocto is fantastic and well documented but getting started is hard and nothing makes much sense until you fully understand it. Once it clicks, though, it’s great. Totally worth the effort.
I use OpenEmbedded for my projects, but I never recommend it to beginners as the place to start. Buildroot is unquestionably easier to get started with and use for one-off projects (unless you're trying to use vendor-provided Yocto/OE layers, in which case obviously it's not an option).
The article's conclusion about Yocto is spot-on, in my opinion: If you're quickly iterating, Buildroot is the way to go.
The author's description of the two is consistent with my experience. Buildroot is a lot easier to get going with. There might be good reasons to use Yocto, but if you are new to all this I wouldn't start there.
Most of the SoCs in the article were already several years old when it was written. The article focuses on parts that are (relatively speaking) easily solderable by hand. Most new SoCs focus on small size and will come in 0.5mm pitch BGA packages or smaller, where PCBs start becoming sharply more expensive and placement requires a decent amount of skill.
> My all time favorite SoM was the CHIP Pro, but sadly at $7 (or was it $13?) it was indeed too good to be true and the company went out of business.
The CHIP was just a cheap board based on the Allwinner R8, which is basically an Allwinner A13, which is in the same product family as the Allwinner A33 in the article.
There are actually a lot of cheap SBCs based on the cheap Allwinner parts out there. The CHIP had great execution and some novel in-browser upgrade features, but I don't know how they planned to make money by selling things basically at cost. Apparently they didn't have any idea either, because they spontaneously went out of business and disappeared.
I have a pile of them in my drawer, as many as I could get my hands on (maybe 8 or 12?)
They're great. In my head they were "something that had the stuff that annoyed me about RaspberryPis fixed".
I should dig some out and see if there's a way to update the OS to something recent-ish.
I think I've still got a cluster of 4 of them, 3 running RTL SDR and one analysing the signals and pulling interesting data out. (Last data file I have here from that is Dec 2018...)
I have issues with the section "Microcontroller vs Microprocessor: Differences"
Microprocessors are just the CPU, usually with the system bus wired out. Microcontrollers are combined systems, which include a processor and usually some kind of firmware storage and RAM. You can usually differentiate them by whether their pins are address/data lines or gpio-like.
> microprocessors have a memory management unit
The 8086 didn't. Having an MMU seems unrelated to the MCU/MPU distinction...
At least, this was my knowledge until now. Is this like "crypto", where the meaning changed while I was not paying attention?
Microcontrollers may or may not have storage for firmware. Requiring an off-chip flash is nothing new. Microcontrollers have SRAM, but so do processors. That's where the firmware runs before it gets the DRAM running. Microcontrollers may include a bunch of peripherals, but so do processors. Especially so if you refer to SoCs as processors, like the article here does. And SoCs stir it up further: they have tons of peripherals, and we also have SoCs with on-package DRAM. Discrete SRAM chips exist too, I figured there must be microcontrollers that can use them (but I haven't actually looked).
NXP LPC is a big one. Most of the families have an external memory controller (EMC) block that can drive an external DRAM at a small multiple of the system clock frequency. The EMC init happens in the secondary boot, usually loaded from the small on-board flash or an offboard EEPROM.
I regularly use the 1788 + a meg or two of DRAM to hold an LCD framebuffer. The LPC has a really nice interconnect between EMC and the internal LCD controller block and will drive the display all on its own (with a hardware cursor) once it is initialized.
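To illustrate what that ends up looking like in code, here is a minimal C sketch of treating the external DRAM as a framebuffer once the EMC and LCD controller are up. The base address, panel geometry, and RGB565 format are placeholders (0xA0000000 is, if I remember right, where the LPC17xx EMC maps its first dynamic-memory chip select), so treat the numbers as assumptions rather than a drop-in example:

```c
#include <stdint.h>

/* Placeholder values: adjust for your board. Assumes the boot code has
 * already initialized the EMC and pointed the on-chip LCD controller at
 * FB_BASE; the panel geometry and 16-bit RGB565 format are just examples. */
#define FB_BASE   0xA0000000u   /* external SDRAM via EMC (board-specific) */
#define FB_WIDTH  480u
#define FB_HEIGHT 272u

static volatile uint16_t *const fb = (volatile uint16_t *)FB_BASE;

/* Fill the whole panel with one RGB565 color. After init, the LCD block
 * scans this buffer out of the external DRAM on its own. */
static void fill_screen(uint16_t rgb565)
{
    for (uint32_t i = 0; i < FB_WIDTH * FB_HEIGHT; i++)
        fb[i] = rgb565;
}

/* e.g. paint the panel solid red once everything is initialized */
void demo(void)
{
    fill_screen(0xF800u);
}
```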
x86 is a classic microprocessor architecture. I'm confident it is not outdated yet, as I'm using a fairly modern x86 desktop right now. RAM and chipset are separate from the processor.
> x86 desktop right now. RAM and chipset
"chipset" is very different than a traditional microprocessor chipset though on that. E.g. you don't have an exposed system bus outside the CPU, but rather specialized interfaces. (And ARM systems have evolved the same way over their history)
Modern x86 CPUs integrate the DRAM controller directly (e.g. Intel ever since Nehalem, ~2008) rather than reaching it over a bus through the northbridge. From that point on, I wouldn't count it if you insist on a strict definition, no - it's roughly the same as putting a socket between a current higher-end ARM chip and its external DRAM chips. The chipset is mostly relegated to handling I/O to peripherals - and nowadays the CPU also provides PCIe lanes directly, with the chipset often adding a few more, often slower, ones.
Traditional microprocessors are getting really rare, even though pretty much all the modern architectures started out as them, which is why some use the term more widely. Some vendors now differentiate between microcontrollers and application processors, which also isn't a 100% clear line (explicit crossover models exist), but it is more useful today and avoids the fights over "what's a microprocessor today" (where "application processor" roughly means "can reasonably run a full OS like Linux"). That terminology is also mostly limited to embedded use cases, though; e.g. I don't know if they'd call a standard desktop CPU an application processor or insist on some arbitrary level of "embeddedness"... Not that standard desktop CPUs don't end up embedded, but that's another discussion entirely.
As for the article, I don't want to leave it standing as it is. Redefining "microprocessor" as "has an MMU" makes communication about these topics very unpleasant, especially when it's about retrocomputing. Same pains as I ranted about in https://news.ycombinator.com/item?id=30278936 .
It has an x86 microprocessor architecture, but it is not IBM PC compatible because it does not have a BIOS, instead using a more embedded-style RedBoot setup.
Maybe you should do a quick web search before talking shit.
Most of them? Pretty much every ARM board supports an SRAM boot stage, for example. If you're using U-Boot, it's pretty likely that there's an SPL running out of SRAM somewhere in there.
If you look at two examples, they may have the same "architecture", but on one, more of the architectural blocks are on-chip, and on the other, less. I don't think what is on or off the chip really changes the architecture, though it would probably change the performance, and the pin boundaries do drive other system considerations. But otherwise, the architecture is the architecture, or am I looking at it wrong?
It is a meaningful distinction. A microprocessor architecture would require support chips, like a northbridge and southbridge, plus RAM in additional chips.
x86 being the classic example. Here [1] is an example of a microprocessor board; you can see the MX support chips on both sides of the CPU. As you can see, the RAM (and most of the address-space-mapped things) is not on-chip. The bus is wired across the PCB.
Even x86 has on-die SRAM that can be used during boot. It's better known as the L1, L2, and L3 caches, but you can configure Intel's FSP to make it usable for boot code with the so-called cache-as-RAM feature. The technical details are different from e.g. ARM's because it's cache rather than properly addressable memory, but the general idea is the same.
All the RAM you see on the motherboard is DRAM. It's a totally separate thing and happens to be what the FSP initializes after it loads stuff into SRAM.
"VPN" evolved to mean proxying service, now people are denying that my tinc setup is a VPN.
"Emulation" evolved to mean virtual machine, now people are denying that wine or xterm are emulators.
"Operating System" evolved to require virtual memory and paging, now people are denying that MS-DOS was an operating system.
"Microprocessor" evolved to mean computing chips with MMU, im waiting for people to deny that the 8086 was an microprocessor because it does not have an MMU.
The infinite shitcycle of people improvising language instead of researching.
I mentioned the shift of meaning to "virtual machine" in the post you replied to, please read it more carefully.
Emulation and Simulation are different in some regard: emulation is only imitating some aspect of a thing, while simulation imitates a thing by using (usually physical) models.
I'd argue that wine is a bit of a stretch considering that we're on the third implemention of win32 in Microsoft land as well (DOS/win32s, Win95, NT/Win32k). At that point win32 is a concept already abstracted away from a specific implementation
I totally agree with your main point though that emulation is a broader topic than is generally thought.
Not exactly. Cache-as-RAM is used for stack/temporary storage until proper RAM is initialized. The code itself runs from flash until optional BIOS caching is turned on in the chipset.
The TLDR is that you need to get DRAM running before you can use it (e.g. by poking at the DRAM controller's registers until it knows how to talk to your stick of RAM reliably and at a comfortable speed). Until then, you're stuck with SRAM. On ARM SoCs and Intel Apollo Lake, you use addressable SRAM. On most other x86 systems you use cache-as-RAM. Cache is just SRAM though, and the only difference between ARM's case and typical x86's is that the latter doesn't let you address it directly.
You can find examples of DDR DRAM configuration in u-boot sources, and presumably in coreboot too.
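To make that order of operations concrete, here is a rough C sketch of an SPL-style first stage. Every register name, address, and value below is hypothetical; a real DRAM controller needs a long, SoC-specific sequence (which is exactly what those u-boot SPL/board files contain). The point is only the shape: run from SRAM (or cache-as-RAM), bring up the DRAM controller, copy the next stage into DRAM, jump to it.

```c
#include <stdint.h>
#include <string.h>

/* All register names, addresses, and values are made up for illustration.
 * A real SoC needs a long, part-specific DRAM programming/training sequence. */
#define DRAM_CTRL_TIMING  (*(volatile uint32_t *)0x4C000000u) /* hypothetical */
#define DRAM_CTRL_ENABLE  (*(volatile uint32_t *)0x4C000004u) /* hypothetical */
#define DRAM_BASE         0x80000000u                         /* hypothetical */
#define FLASH_PAYLOAD     0x00010000u                         /* hypothetical */
#define PAYLOAD_SIZE      (512u * 1024u)

typedef void (*entry_fn)(void);

/* First-stage loader, linked to run entirely out of on-chip SRAM
 * (or cache-as-RAM on most x86 parts). */
void spl_main(void)
{
    /* 1. Program timings/geometry for the attached DRAM part and enable the
     *    controller. In reality this is dozens of writes plus training. */
    DRAM_CTRL_TIMING = 0x00000042u;   /* board-specific timing word */
    DRAM_CTRL_ENABLE = 1u;

    /* 2. Only now is DRAM usable: copy the next stage (e.g. u-boot proper)
     *    out of memory-mapped flash into it. */
    memcpy((void *)DRAM_BASE, (const void *)FLASH_PAYLOAD, PAYLOAD_SIZE);

    /* 3. Jump to the relocated image. */
    entry_fn next_stage = (entry_fn)DRAM_BASE;
    next_stage();
}
```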
As far as I know, all of them. It's possible that there are historical systems where DRAM configuration is fixed, but I wouldn't know about it. Any modern system requires rather complicated configuration to get the DRAM going.
Counterexample: the IBM PC was a microprocessor system that did not do this - the BIOS runs from ROM only and sets up the DMA controller to handle the DRAM refreshes. No DRAM bring-up, no RAM used.
Excluding the caches, this is how x86 functioned for quite a while; I don't know until when.
Microprocessors don't require internal RAM to bootstrap, because they expose their bus and it's simple to map a ROM to the starting address, which can be executed from directly.
Ok, right, that really has a legacy smell to it. If you can find the pinout for a modern Intel CPU, it has dedicated pins from DRAM controller to the DDR. It's not on some external "all-purpose" bus. Whatever bus you might have is internal to the CPU. Same thing on any modern ARM SoC with DDR support.
The question is then, should we start calling modern Intel CPUs microcontrollers? That's bound to cause even more confusion.
If anything, the things people today refer to as microcontrollers have more to do with your legacy microprocessor than they have with a high end CPU or SoC.
The problem is that by your definition, every processor manufactured in the last 30 years should actually be called a "microcontroller"
Microprocessors going back to the Intel 486 have more on-chip RAM than many microcontrollers sold today have. CPUs have built-in firmware ROM for loading boot code. Also, CPUs haven't had a system bus with address/data pins for like 30 years — somewhat ironically, the only chips that have that are microcontrollers. Northbridges haven't existed for 15 years, and southbridges are basically just PCIe-to-USB/SATA/Ethernet/I2C adapters. And many mobile CPUs that Intel and AMD make have on-chip I2C, SPI, UART, GPIO peripherals.
In my article, I'm trying to differentiate between processors that can run a modern multiuser operating system like Linux and microcontrollers that run bare-metal code, since that's a question I get asked a ton. After carefully considering all the options, the MMU seemed like the most obvious place to draw the line. Every CPU in the last 40 years (since the Intel 286) has had an MMU, while no microcontroller ever made has had one. I'm not sure why that seems so unreasonable to you.
It is unreasonable because you redefine "Microprocessor" in a way that the 8086, the one that made the word popular, isn't one anymore.
Surely odd, but tolerable. What is worse is that people will read your article and think the MMU is what makes a microprocessor. And upon hearing that the 8086 has no MMU, they will deny that it is a microprocessor, which would be a logical conclusion from your article.
Microprocessors in the traditional sense no longer exist. All modern microprocessors are actually SoCs (yes, even Intel ones at least to a partial extent). And all microcontrollers are SoCs by strict definition, but we don't usually call them that.
So really the more practical terminology difference should be SoC vs. microcontroller.
Agreed. To add to that - I think what Jay was trying to highlight is the difference between chips that can generally run Linux (which generally requires an MMU) and those that run an RTOS. Is this an exact line of demarcation with no blurry edges? No. But it seems to work for me. Maybe a better way to think of it is: does the system have external DRAM (if so, run Linux) or not (if not, run an RTOS)?
Out of curiosity - does anyone still sell (by the strict definition) a true "microprocessor"?
The most recently sold micro-processors used in a mass market product that come to mind are the CPUs of the Wii/PS3/Xbox 360 era consoles (in particular the PPC750CL in the Wii is a pure CPU). The Wii U then glued three of those together into a three-core die, so if you're willing to consider just the CPU die and not the entire MCM, that one was also a pure CPU.
Digi-Key still shows the 750CL in stock, but it's obviously an obsolete product at this point.
There is no global standard of definitions, what you might be seeing is the difference in the educational system in different parts of the world.
We know definitions change over time because slang words come and go and someone's vocabulary can be used to identify people online or identify their knowledge.
Whatever, but if someone from the "the distinguishing criterion is the existence of an MMU" crowd comes along and claims the 8086 is not a microprocessor because it doesn't have an MMU (which would be a logical conclusion from the article), I'll probably not be respectful.
The clearest distinction today is the lack of an MMU. That's just a useful metric to go by that neatly divides SoCs into "things intended to run bare metal/RTOS code" and "things intended to run Linux", that are two different product groups with significant differences.
Obviously this wasn't always the case; e.g. a good 15+ years ago I put uClinux on an old ISP router I had lying around for fun, and it was an ARM946 without an MMU. But that class of device no longer exists today for all practical purposes, so it's easy to use the existence of an MMU to decide what to call them.
The 8086 may have been a microprocessor, and so was the 80186, but you know where you find 80186 cores today? Microcontrollers. Specifically, some video processing chip manufacturers (e.g. stuff that goes into monitors and DP-HDMI adapters) seem to like using V186 cores still.
Well these are a couple of online definitions:
MicroController - a control device which incorporates a microprocessor.
MicroProcessor - an integrated circuit that contains all the functions of a central processing unit of a computer.
The problem with these definitions is that a control device and a computer are two different things, so I wonder if he is applying an engineer's perspective/definition to the 8086, even though many know the 8086 was used in early computers. This is a bit like arguing whether a microprocessor is a microprocessor because of the inclusion or exclusion of one or more instruction sets, i.e. a basic Intel CPU with a minimal instruction set versus a Xeon with a much more extensive set of instructions.
Or are you picking up on an AI masquerading as a blogger, and have you highlighted something an AI wouldn't pick up on from online technical resources?
> In Linux, you get a first-class network stack, plus tons of rock-solid userspace libraries that sit on top of that stack and provide application-level network connectivity.
JimTCL is very fun for that, and it would run on limited machines.
OK, it's not totally Tcl compatible and there's no Tk, but it has (WIP) SDL2 bindings, and SDL2 would work perfectly on a display handled by fbdev/KMS.
> but blending and other advanced 2D graphics technology can really bog things down
SDL2 can be really fast on low-res displays such as 320x240.
If not, EFL from Enlightenment could surely do that fast enough.
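For a sense of how little is involved, here is a minimal SDL2 sketch in C that clears the screen and draws a rectangle at 320x240 (presumably roughly what those bindings end up wrapping). The window flags and software renderer are just one reasonable choice; on the KMS/fbdev video backends the "window" is effectively the whole panel:

```c
#include <SDL2/SDL.h>   /* or <SDL.h>, depending on your include paths */
#include <stdio.h>

int main(int argc, char **argv)
{
    (void)argc; (void)argv;

    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* A 320x240 "window"; on embedded KMS/fbdev backends this is the panel. */
    SDL_Window *win = SDL_CreateWindow("demo",
                                       SDL_WINDOWPOS_UNDEFINED,
                                       SDL_WINDOWPOS_UNDEFINED,
                                       320, 240, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_SOFTWARE);
    if (win == NULL || ren == NULL) {
        fprintf(stderr, "SDL setup failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);      /* clear to black */
    SDL_RenderClear(ren);
    SDL_SetRenderDrawColor(ren, 255, 128, 0, 255);  /* draw an orange box */
    SDL_Rect box = { 40, 40, 240, 160 };
    SDL_RenderFillRect(ren, &box);
    SDL_RenderPresent(ren);
    SDL_Delay(2000);

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```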
Why is it that to build Linux you always have to start from Linux? Has anyone bootstrapped it from FreeBSD or any other OS, or from minimalistic systems like https://niedzejkob.p4.team/bootstrap/, etc.?
Compared to https://github.com/fosslinux/live-bootstrap, your pseudo-minimalistic Arch pacstrap or Debian debootstrap is bloated with binary seeds, and you still cannot get rid of the Linux environment when building the crucial packages.