They have already announced a newer series based on single/dual 1.5GHz Cortex-A35 cores (64-bit) + a 400MHz Cortex M33 core called STM32MP2 with mass production starting 2024Q2.
Nice! There aren't too many SoCs using the lower-end ARM cores like A3x. Weird that they avoid mentioning the specifics of the GPU. I hope it's something with decent open-source drivers.
Haven't these been around for a while? I seem to remember it's been at least a couple of years. Where's the news? Where's the context that makes this chip noteworthy?
It has its warts, but I had the opposite experience in general. Bringing up peripherals on ST has been easier than with all the other vendors I've worked with (TI, Renesas, and the older PICs). TI's was the worst code I dealt with.
Anyone complaining about STCube has never used anything worse.
My nomination for worst code generator would be MPLab Harmony. I was unfortunate enough to be one of the first victims of the PIC32MZ line, which didn’t have a middleware library, or non-harmony code examples. It was autogenerated code, or read the data sheet and figure out every peripheral yourself. The code generator was so broken even the most basic modifications to get example code running on a custom board would result in non-compilable code. The experience was so frustrating I just respun the board with a different micro.
I’ll never touch any MCU that uses MPLabX again. Bonus points to Microchip for destroying atmel studio in favour of MPLab, now I have yet another family of microcontrollers I will not touch.
One thing I can say about STCube is that the code it generates works, and you can always use it as a jumping-off point, tweaking the generator's code for your exact use case.
One genuine complaint is that the catch-all interrupt handling is often too slow to actually use, so you end up overriding the default handlers, or becoming very proficient with DMA.
Another very important thing is to never intertwine your code with the autogenerated code. Keep the application code and the hardware code separate, and have an actual "HAL" layer in your code. It's an easy way to prevent your code from getting nuked when you reconfigure, and if you're not happy with the autogenerated code you can just sub in your own functions. Best of both worlds.
Vendor tools like this exist to lock in developers and teams who can't, or won't, build their own environments. There's a short on-ramp for embedded programmers who aren't necessarily software experts, but the price is that they dig themselves even deeper into an undesirable situation in the long term.
Except that ST has open-sourced their toolchain, and their libraries [0]. I can write C or assembly code on my Linux laptop and get it on my ST devices without anything proprietary from ST.
If nothing else, the vendor-specific crappy toolchains do support the vendor's HW well, so that's something they need to do so that devs don't have to start their projects by implementing the low-level support themselves.
Very good? Zephyr generally wraps the hardware SDKs for the things built into the microcontroller, and then has its own portable drivers for any outside peripherals, like I2C/SPI/1-Wire/etc.
I haven't found STM32Cube to be horrible; I've certainly seen worse. It works for getting the hardware up and running quickly, and then you refactor away from the generated code. The underlying libraries aren't too bad.
Yeah, it's fine. For a while it was refusing to start for me on Arch Linux, but after deleting some library as suggested by one guy on their forums it's working pretty well.
Shops that actually make their living building and shipping quality and competitive embedded stuff, mostly used IAR in my experience.
Default free tools are great for students, hobbyists or selling low volume widgets without warranty to other hobbyists, but if your livelihood depends on it, you pay for IAR, Keil or Lauterbach tools, no questions asked.
The stability, technical support and timely bug fixes, compiler documentation (especially along the undefined edge-cases of the C language), testing, debug and trace capabilities, are worth their weight in gold.
I have learned to hate IAR Embedded Workbench in the two years that I've been forced to use it. It will just randomly refuse to give me a function/method list or claim it doesn't know where a value is defined. I'm not much of a fan of Keil products either.
I would use VS Code for everything if I could. But for the life of me, I can't get debugging working in VS Code using the IAR plugins and a J-Trace/J-Link. Works fine with an ST-Link though, go figure.
Can't. Our development process requires using validated tools and we would have to validate a new version. A fix for this would not be worth the validation cost.
You need hot-swappable barf bags to read the output from their damn code generators. Despite being generated code, >90% of the output is dead code hidden inside a giant ifdef maze.
Instruction set is ARM, sure, but to drive peripherals you need to use their horrible C code or write the driver stack yourself by reading hundreds of pages of datasheets.
You can also use third party libraries. I have written projects using ChibiOS and libopencm3 and their driver model was actually OK to use, although for most peripherals you still need the datasheet to understand the exact capabilities of the device.
On the lower end of this spectrum, there's the Linux capable Nuvoton NUC980 which goes down to a LQFP64 package with 64 MB SDRAM. Uses an older, slower ARM926EJ-S core which is probably good enough for quite a few applications.
The nice thing about the STM32MP1 is that it seems to have reasonably good docs and mainline software support, so there is possibly less vendor crap to deal with. Although that is based solely on my reading of their website, so I don't know what the reality is.
I've been shipping non-IoT embedded Linux products for over a decade and, honestly, I don't give a shit about what version of Linux it's running as long as the necessary drivers are compatible and working well. USB, ext4, SD/MMC, SPI, I2C, PWM, GPIO, backlight, framebuffer video...all pretty mature now.
Agonizing about being mainline is a distraction from getting things done, IMO. ARM Linux will never be fully up to bleeding edge.
> Agonizing about being mainline is a distraction from getting things done, IMO.
Sometimes this hinders you and your customers from getting things done. Mainly because vendor kernels are generally really bad (second in awfulness only to the vendors' userspace drivers), and I want my users to be running well-supported, solid software instead of whatever ${VENDOR} ships.
That's pretty basic stuff, but SPI/I2C drivers, for example, have issues /all the time/. It's just nice to get the fixes from mainline and integrate them more easily.
Other than that, custom SoCs are much more complex than the bread and butter you mentioned: NPUs, VPUs, signal processing chips etc are all a gigantic pain in vendor kernels, simply because the code is garbage.
Watt's steam engine, maybe the most iconic symbol of the industrial revolution, was commercialized in 1776, and similarly many other key developments happened during the 1700s.
I think it's a German made-up thing in analogy to Web 2.0. The decision-makers wanted to make clear that it's quite the quantum leap and therefore jumped directly to 4.0 (also fourth industrial revolution).
"The term was coined by Henning Kagermann, Wolf-Dieter Lukas and Wolfgang Wahlster and first made public in 2011 at the Hanover Fair."
An incredible amount of hype was/is generated in Europe around this term. Not really the good kind, in my opinion: a lot of buzzwords being thrown around in the big research centres and a lot of public money being spent on not-so-well-defined terms.
ST’s MCUs used to be somewhat price competitive, but since the chip shortage you’re a fool if you’re designing anything new with them. Might as well burn your money.
Interesting - the new higher end series doesn't have the second Cortex core. I used the older 15x series a few years back and the selling point was being able to run real-time stuff on the familiar ST Cortex M4 and run Linux on the A core. The new series doesn't have the M4 cores so why lump it in with the older MP1?
Is setup on these more like ST's Cortex-M (wire debugger to SWD lines; connect the debugger to PC, flash a bare-metal program that interacts with peripherals using MMIO and prints to CLI using RTT), like a PC or Raspberry Pi (Plug in GPOS installation USB stick, but maybe it's on a custom PCB), or neither/a mix?
It sounds like maybe the flash/RAM is offboard? Q/OctoSPI or something else?
They're not hobbyist friendly: 16x16 'tiny' BGA packages, so there's no real benefit over other ARM SBCs, and they're massively slower than almost everything else atm. The advantage of STM32 was always that you could prototype on an SBC, but the chips were accessible enough to use in hobbyist projects.
Although since I recently discovered [1] that the R-Pi has an exposed memory-interface bus (Secondary Memory Interface, SMI) on its GPIO, I'm far more inclined to use them instead. Getting low-latency/high-bandwidth I/O into these "A" (rather than "M") processors has always been the thing that made using an i.MX RT, STM32MP1 or Zynq attractive to me. Turns out the lowly Pi has had that capability all along…
Coupling a Pi via SMI with a low-end (read: cheap) FPGA (iCE40, or Efinix; hell, go to town and get an Efinix T20 with 195 I/O for $8) makes a potent device, IMHO.
You can still get an rpi zero W for $15 [2], and even the quad-core zero 2W if you shop around is ~22€ [3], which blows the STM away in terms of CPU, video out, ease of development etc.
They're not for hobbyists. Wrong shelf. They're for real products, not SBCs. That's why you also don't see NXP parts in the hobby area. For low volume you buy SOMs, and for big runs you design complete printed circuit boards.
It's possible, yes. But it's not really usable. I tried Linux on the STM32F469 discovery board and the only takeaway was "Cool, it works". Feels more like a proof of concept than anything you'd want to use in a product. It's slow and everything takes forever, barely any of the interfaces work and basically all of your resources are gone just to run Linux itself.
The key difference is that F4, F7, and embedded chips in general don't have an MMU. This is a strong requirement for "regular" Linux (and other desktop OS's).
I'm looking to see if there's a table of power consumption with the thing all "lit up" and operating at various speeds, but the datasheet doesn't have one.
Same question. You still can't reliably get H7s unless you have an in with the company. (Although other models started coming back gradually over the past few years)
Armv7 is fine if you're just running some command-and-control type stuff.
In most of the target applications of these chips, you'll typically have a couple of real-time tasks. This will run on a dedicated core, or in kernel space, or with Xenomai. The rest will be firmware update services, telemetry, and maybe Bluetooth or a web GUI. Stuff where the execution speed doesn't really matter, as long as it's "fast enough". Armv7 is more than sufficient for that kind of thing.
The STM32MP2 family will probably replace the MP1 once it's in full production, but that doesn't mean the MP1 is a bad chip to use in new designs today.
The chip's competition is NXP's i.MX line of parts. If I got a good price from ST on it (and a production guarantee), I'd consider using it in a number of new products, even though the main CPUs are 32-bit Armv7 cores.
> This will run on a dedicated core, or in kernel space, or with Xenomai.
Most of the USP of these is that they have an M core, which is usually where such tasks would live.
In automation it's not unusual to find controllers where the entire system lives on an MCU, and you quickly find that it's difficult to keep up with network requirements. I have devices where the MCU is 20+ years old, and the embedded TLS implementation hit a constraint 10-15 years ago.
This looks like the ideal use case for these: your actual application lives on the M core as it always did, but your networking, TLS etc. move to an A core that's much better suited to them, and where "full-sized" implementations are much more readily available.
https://blog.st.com/stm32mp2/