STM32 family grows to microprocessor/Linux level with STM32MP1 (st.com)
107 points by unwind 10 months ago | 71 comments



They have already announced a newer series, the STM32MP2, based on single/dual 1.5 GHz Cortex-A35 cores (64-bit) plus a 400 MHz Cortex-M33 core, with mass production starting in 2024Q2.

https://blog.st.com/stm32mp2/


Nice! There aren't too many SoCs using the lower-end ARM cores like the A3x. Weird that they avoid mentioning the specifics of the GPU. I hope it's something with decent open-source drivers.


Pretty sure the GPU would be this, or some evolution of this: https://www.st.com/resource/en/product_training/STM32MP1-Sys...

If you dig around, you see it's a Vivante GPU: https://www.st.com/resource/en/programming_manual/pm0263-stm...


Haven't these been around for a while? I seem to remember it's been at least a couple of years. Where's the news? Where's the context that makes this chip noteworthy?



Didn't know that, and I thought I had a fairly close eye on the STM32 family. Thanks! A poster above mentioned that STM32MP2 has been announced, even.


Ah! That is why they are supporting only OpenGL ES 2.0.


STM provide the most garbage source code ever. I've never seen more of an abomination in my life than the STM32Cube.


It has its warts, but I had the opposite experience in general. Bringing up peripherals on ST has been easier than with all the other vendors I've worked with (TI, Renesas, and the older PICs). TI was the worst code I dealt with.


Anyone complaining about STCube has never used anything worse.

My nomination for worst code generator would be MPLAB Harmony. I was unfortunate enough to be one of the first victims of the PIC32MZ line, which didn't have a middleware library or non-Harmony code examples. It was autogenerated code, or read the data sheet and figure out every peripheral yourself. The code generator was so broken that even the most basic modifications to get example code running on a custom board would result in non-compilable code. The experience was so frustrating I just respun the board with a different micro.

I'll never touch any MCU that uses MPLAB X again. Bonus points to Microchip for destroying Atmel Studio in favour of MPLAB, so now I have yet another family of microcontrollers I will not touch.

One thing I can say about STCube is that the code it generates works, and you can always use it as a jumping-off point for tweaking the generator's code for your exact use case.

One genuine complaint is that the catch-all interrupt handling is often too slow to actually use, so you end up overriding the default handlers, or becoming very proficient with DMA.

Another very important thing is to never intertwine your code with the autogenerated code. Keep the application code and the hardware code separate, and have an actual "HAL" layer in your code (a minimal sketch of the idea follows). It's an easy way to prevent your code from getting nuked when you reconfigure, and if you're not happy with the autogenerated code you can just sub in your own functions. Best of both worlds.
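
To illustrate the idea, a minimal sketch of such a wrapper layer in C. It assumes a CubeMX-generated UART handle named huart2; the board_* names are hypothetical, not any ST API:

    /* board_io.h -- your own thin HAL; application code includes only this */
    #include <stddef.h>
    #include <stdint.h>

    int board_uart_write(const uint8_t *buf, size_t len);

    /* board_io.c -- the only file that touches the generated Cube code */
    #include "board_io.h"
    #include "stm32f4xx_hal.h"      /* or whichever family header Cube generated */

    extern UART_HandleTypeDef huart2;   /* defined in CubeMX-generated main.c */

    int board_uart_write(const uint8_t *buf, size_t len)
    {
        /* swap this body out later without touching application code */
        if (HAL_UART_Transmit(&huart2, (uint8_t *)buf, (uint16_t)len, 100) != HAL_OK)
            return -1;
        return 0;
    }

When CubeMX regenerates, only board_io.c needs a second look; the application keeps calling board_uart_write() regardless of what lives underneath.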


> Anyone complaining about STCube has never used anything worse.

I would like to cast a vote for IAR Embedded Workbench being worse.


Vendor tools like this exist to lock in developers and teams who can't, or won't, build their own environments. There's a short on-ramp for embedded programmers who aren't necessarily software experts, but the price is that they dig themselves even deeper into an undesirable situation in the long term.


Except that ST has open-sourced their toolchain, and their libraries [0]. I can write C or assembly code on my Linux laptop and get it on my ST devices without anything proprietary from ST.

[0] - https://github.com/STMicroelectronics


praying for Zephyr to succeed


How is their overall hardware support?

If nothing else, the vendor-specific crappy toolchains do support the vendor's HW well, so that's something they need to do so that devs don't have to start their projects by implementing the low-level support themselves.


Very good? Zephyr generally wraps the hardware SDKs for the things built into the microcontroller, and then provides its own portable drivers for any outside peripherals, like I2C/SPI/1-Wire/etc.
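
For a feel of that portable driver model, a minimal Zephyr blinky sketch; it assumes a board whose devicetree defines the usual led0 alias:

    #include <zephyr/kernel.h>
    #include <zephyr/drivers/gpio.h>

    /* Resolved at build time from the board's devicetree */
    static const struct gpio_dt_spec led =
        GPIO_DT_SPEC_GET(DT_ALIAS(led0), gpios);

    int main(void)
    {
        if (!gpio_is_ready_dt(&led))
            return 0;
        gpio_pin_configure_dt(&led, GPIO_OUTPUT_ACTIVE);
        while (1) {
            gpio_pin_toggle_dt(&led);   /* same call on any supported SoC */
            k_msleep(500);
        }
        return 0;
    }

The same source builds unchanged for an STM32, an nRF52 or an ESP32 target; only the devicetree differs.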


I haven't found STM32Cube too horrible; I've certainly seen worse. It works for getting the hardware up and running quickly, and then refactoring away from the generated code. The underlying libraries aren't too bad.


Yeah, it's fine. For a while it was refusing to start for me on Arch Linux, but after deleting some library as per the suggestion of one guy on their forums it's working pretty well.


I wrote my own HAL in Rust. Was a pain, but worth it now.

Cube is still useful for clock trees, pinout visualizations etc.


Shops that actually make their living building and shipping quality, competitive embedded stuff mostly use IAR, in my experience.

Default free tools are great for students, hobbyists or selling low volume widgets without warranty to other hobbyists, but if your livelihood depends on it, you pay for IAR, Keil or Lauterbach tools, no questions asked.

The stability, technical support and timely bug fixes, compiler documentation (especially around the undefined edge cases of the C language), testing, and debug and trace capabilities are worth their weight in gold.


I have learned to hate IAR Embedded Workbench in the two years that I've been forced to use it. It will just randomly refuse to give me a function/method list or claim it doesn't know where a value is defined. I'm not much of a fan of Keil products either.

I would use VS Code for everything if I could. But for the life of me, I can't get debugging working in VS Code using the IAR plugins and a J-Trace/J-Link. Works fine with ST-Link though, go figure.


>It will just randomly refuse to give me a function/method list or claim it doesn't know where a value is defined.

Have you reached out to IAR regarding these issues? Normally they reply and give you workarounds or fixes if the issue is reproducible.


Can't. Our development process requires using validated tools and we would have to validate a new version. A fix for this would not be worth the validation cost.


Yes, I love their CPUs, but you have to commit to reading the data sheet. I write my firmware mostly with direct register access and light libraries.
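
As an illustration of that style, toggling a pin with nothing but the CMSIS device header; a sketch assuming an STM32F4-class part with the user LED on PA5, as on many Nucleo boards:

    #include "stm32f4xx.h"   /* CMSIS device header: register definitions only */

    static void led_init(void)
    {
        RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN;   /* clock the GPIOA block */
        GPIOA->MODER &= ~(3u << (5 * 2));      /* clear PA5 mode bits   */
        GPIOA->MODER |=  (1u << (5 * 2));      /* PA5 = general output  */
    }

    static void led_toggle(void)
    {
        GPIOA->ODR ^= (1u << 5);               /* flip the PA5 output latch */
    }

No HAL, no Cube: just the reference manual's RCC and GPIO chapters and a dozen lines.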


Thankfully it is optional. Even the stupid bits which are badly documented like the USB stack.


You need hot-swappable barf bags to read the output from their damn code generators. Despite being a code generator >90% of the output is dead code hidden inside a giant ifdef maze.


There are plenty of alternatives to Cube. It's just Arm.


Instruction set is ARM, sure, but to drive peripherals you need to use their horrible C code or write the driver stack yourself by reading hundreds of pages of datasheets.


You can also use third party libraries. I have written projects using ChibiOS and libopencm3 and their driver model was actually OK to use, although for most peripherals you still need the datasheet to understand the exact capabilities of the device.
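
For comparison, the same sort of pin toggle through libopencm3's driver model; a sketch assuming an STM32F4-class part (the F1 series uses a different gpio_set_mode() API):

    #include <libopencm3/stm32/rcc.h>
    #include <libopencm3/stm32/gpio.h>

    int main(void)
    {
        rcc_periph_clock_enable(RCC_GPIOA);
        gpio_mode_setup(GPIOA, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE, GPIO5);
        while (1) {
            gpio_toggle(GPIOA, GPIO5);
            for (volatile int i = 0; i < 500000; i++)
                ;                              /* crude busy-wait delay */
        }
    }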


On the lower end of this spectrum, there's the Linux capable Nuvoton NUC980 which goes down to a LQFP64 package with 64 MB SDRAM. Uses an older, slower ARM926EJ-S core which is probably good enough for quite a few applications.

https://www.nuvoton.com/products/microprocessors/arm9-mpus/n...


The nice thing about the STM32MP1 is that it seems to have reasonably good docs and mainline software support, so there is possibly less vendor crap to deal with. Although that is based solely on my reading of their website, so I don't know what the reality is.


Octavo has matured their STM32MP1 SIP.

https://octavosystems.com/octavo_products/osd32mp15x/

I've been using their AM335X SIP for a while now and it's really nice. I'm starting to get away from the SOM mindset and making my designs smaller.


I've used some SBCs based on the AM335x and it seems quite good, but the custom kernels were quite a pain.


Can you describe the pain points?

I've been shipping non-IoT embedded Linux products for over a decade and, honestly, I don't give a shit about what version of Linux it's running as long as the necessary drivers are compatible and working well. USB, ext4, SD/MMC, SPI, I2C, PWM, GPIO, backlight, framebuffer video...all pretty mature now.

Agonizing about being mainline is a distraction from getting things done, IMO. ARM Linux will never be fully up to bleeding edge.


> Agonizing about being mainline is a distraction from getting things done, IMO.

Sometimes this hinders you and your customers from getting things done, mainly because vendor kernels are generally really bad (second only to their userspace drivers). I want my users to be running well-supported, solid software instead of whatever ${VENDOR} ships.

> USB, ext4, SD/MMC, SPI, I2C, PWM, GPIO, backlight, framebuffer video...all pretty mature now.

That's pretty basic stuff, but SPI/I2C drivers, for example, have issues /all the time/. It's just nice to get the fixes from mainline and integrate more easily.

Other than that, custom SoCs are much more complex than the bread and butter you mentioned: NPUs, VPUs, signal processing chips etc are all a gigantic pain in vendor kernels, simply because the code is garbage.


> to meet the increasing demand for security in Industry 4.0

What’s Industry 4.0? Also what’s 1, 2 and 3.0, actually…


1.0: The original industrial revolution of the 18th-19th centuries. Steam, railways, powered looms and lathes, etc.

2.0: From the 1870s on: cheap steel via the Bessemer process, electricity, large-scale chemical synthesis; interchangeable parts, mass production.

3.0: Since, say, the 1940s-50s: advanced control systems, electronics, early computers, pervasive plastics, power semiconductors, advanced alloys and advanced welding, simple CNCs.

4.0: Since, say, the 2000s: highly integrated electronics, pervasive computers/MCUs, highly programmable and precise CNC machines, everything done through CAD software; carbon fiber, advanced glues, 3D printing.

https://sustainability-success.com/industry-1-0-to-4-0-2-3-r...


You mean the 19th-20th centuries for 1.0, surely; there wasn't much industrial revolution in the 1700s.


Watt's steam engine, maybe the most iconic symbol of the industrial revolution, was commercialized in 1776, and similarly many other key developments happened during the 1700s.


Newcomen invented his steam engine around 1712, and Watt's steam engines were a thing from the 1760s to 1800, when his patent expired.


Industry 0.1-beta? People were working on things like steam engines in the 18th C, plus you had some pretty serious water-powered mills then.


I think it's a German coinage, by analogy with Web 2.0. The decision-makers wanted to make clear that it's quite the quantum leap and therefore jumped directly to 4.0 (also "fourth industrial revolution").

"The term was coined by Henning Kagermann, Wolf-Dieter Lukas and Wolfgang Wahlster and first made public in 2011 at the Hanover Fair."

https://de.wikipedia.org/wiki/Industrie_4.0


An incredible amount of hype was/is generated in Europe around this term. Not really the good kind, in my opinion: a lot of buzzwords being thrown around in the big research centres and a lot of public money being spent on not-so-well-defined terms.


Definitely a lot of hype around something that has been known as CIM (computer-integrated manufacturing) for a long time :)


1 is steam, coal, steel. 2 is Ford and such. 3 is early digital. 4 is post-2010 buzzwords (IoT, smart whatever, gene editing, AR/VR).

It's a totally arbitrary distinction.


ST’s MCUs used to be somewhat price competitive, but since the chip shortage you’re a fool if you’re designing anything new with them. Might as well burn your money.


Interesting: the new higher-end series doesn't have the second Cortex core. I used the older 15x series a few years back, and the selling point was being able to run real-time stuff on the familiar ST Cortex-M4 and run Linux on the A core. The new series doesn't have the M4 cores, so why lump it in with the older MP1?


Is setup on these more like ST's Cortex-M (wire a debugger to the SWD lines, connect the debugger to a PC, flash a bare-metal program that interacts with peripherals using MMIO and prints to a CLI using RTT), like a PC or Raspberry Pi (plug in a GPOS installation USB stick, but maybe it's on a custom PCB), or neither/a mix?

It sounds like maybe the flash/RAM is offboard? Q/OctoSPI or something else?


When I tried their drivers, Eclipse-based IDE, and microcontroller pin-configuration generator (Cube something) 2-3 years ago, it all worked well on Linux.


Tell us the price son


The cheapest one, STM32MP151AAC3 is $6 on LCSC: https://www.lcsc.com/product-detail/Microcontroller-Units-MC...


They're not hobbyist-friendly: 16x16 'tiny' BGA packages, so there's no real benefit over other ARM SBCs, and they're massively slower than almost everything else atm. The advantage of STM32 was always that you could prototype on an SBC, but the chips were accessible enough to use in hobbyist projects.


Or $19 for the easy module, inc. flash & eMMC… [0]

Although since I recently discovered [1] that the R-Pi has an exposed memory interface bus (Secondary Memory Interface, SMI) on its GPIO, I'm far more inclined to use them instead. Getting low-latency/high-bandwidth I/O into these "A" (rather than "M") processors has always been the thing that made using an i.MX RT, STM32MP1 or Zynq attractive to me. Turns out the lowly Pi has had that capability all along…

Coupling a Pi via SMI with a low-end (read: cheap) FPGA (iCE40, or Efinix; hell, go to town and get an Efinix T20 with 195 I/O for $8) makes a potent device, IMHO.

You can still get an RPi Zero W for $15 [2], and even the quad-core Zero 2 W is ~22€ if you shop around [3], which blows the STM away in terms of CPU, video out, ease of development, etc.

[0] https://www.myirtech.com/list.asp?id=726

[1] https://www.eevblog.com/forum/projects/state-of-raspberry-pi...

[2] https://rpilocator.com/?country=US

[3] https://rpilocator.com/?cat=PIZERO2


They're not for hobbyists. Wrong shelf. They're for real products, not SBCs. That's why you also don't see NXP parts in the hobby area. For low volume you buy SOMs, and for big runs you design complete printed circuit boards.


> chips were accessible enough to use in hobbyist projects

Sure, that's nice, but hobbyist use doesn't move the needle for STMicroelectronics even a tiny bit.


0.8 mm pitch, sub-300-pin BGAs are still significantly easier to deal with than the typical sub-0.5 mm pitch, 500-1000+ pin monster packages SoCs usually come in.




I thought Linux was already supported in mainline for some cores (F4, F7) https://elinux.org/STM32 - probably a different audience though.


It's possible, yes. But it's not really usable. I tried Linux on the STM32F469 discovery board and the only takeaway was "Cool, it works". Feels more like a proof of concept than anything you'd want to use in a product. It's slow and everything takes forever, barely any of the interfaces work and basically all of your resources are gone just to run Linux itself.


Cortex-M4 doesn't have virtual memory; the only Linux you could run there would be a uClinux fork.


nommu support has been mainlined for at least 8 years


The key difference is that the F4, F7, and embedded chips in general don't have an MMU. This is a strong requirement for "regular" Linux (and other desktop OSes).


I'm looking for a table of power consumption with the thing all "lit up" and operating at various speeds, but the datasheet doesn't have one.


How's availability on these? We abandoned STM32 a year or two ago during the shortage and selected the GigaDevice GD32 instead.


Same question. You still can't reliably get H7s unless you have an in with the company. (Although other models started coming back gradually over the past few years)


Using the legacy ARM ISA. Unfortunate.


> Using the legacy ARM ISA. Unfortunate.

Said no one in the industry, ever.


Armv7 is fine if you're just running some command-and-control type stuff.

In most of the target applications of these chips, you'll typically have a couple of real-time tasks. These will run on a dedicated core, in kernel space, or with Xenomai (see the sketch below). The rest will be firmware-update services, telemetry, and maybe Bluetooth or a web GUI: stuff where the execution speed doesn't really matter, as long as it's "fast enough". Armv7 is more than sufficient for that kind of thing.
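
As a sketch of the dedicated-core approach: on a Linux system booted with, say, isolcpus=1 on the kernel command line, a real-time task can be pinned to the isolated core with standard POSIX calls. The core number and priority here are illustrative, and SCHED_FIFO needs the right privileges:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stddef.h>

    /* hypothetical time-critical loop */
    static void *rt_loop(void *arg)
    {
        (void)arg;
        for (;;) {
            /* ... real-time work ... */
        }
        return NULL;
    }

    int start_rt_thread(void)
    {
        pthread_t tid;
        cpu_set_t set;
        struct sched_param sp = { .sched_priority = 80 };

        if (pthread_create(&tid, NULL, rt_loop, NULL) != 0)
            return -1;

        CPU_ZERO(&set);                 /* pin to the isolated core (CPU 1) */
        CPU_SET(1, &set);
        pthread_setaffinity_np(tid, sizeof(set), &set);

        /* give it a real-time scheduling class and priority */
        pthread_setschedparam(tid, SCHED_FIFO, &sp);
        return 0;
    }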

The STM32MP2 family will probably replace the MP1 once it's in full production, but that doesn't mean the MP1 is a bad chip to use in new designs today.

The chip's competition is NXP's i.MX line of parts. If I got a good price from ST on it (and a production guarantee), I'd consider using it in a number of new products, even though the main CPUs are 32-bit Armv7 cores.


> This will run on a dedicated core, or in kernel space, or with Xenomai.

The main USP of these is that they have an M core, which is usually where such tasks would live.

In automation it's not unusual to find controllers where the entire system lives on an MCU, and you quickly find that it's difficult to keep up with network requirements. I have devices where the MCU is 20+ years old, and the embedded TLS implementation hit a constraint 10-15 years ago.

This looks like the ideal use case for these: your actual application lives on the M core as it always did, but your network stack, TLS, etc. move to an A core that's much better suited to them, and where "full-sized" implementations are much more readily available.
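
On the STM32MP1 specifically, Linux on the A core loads and controls the M4 firmware through the kernel's mainline remoteproc framework. A sketch of starting the coprocessor from Linux userspace; it assumes the firmware has already been installed under /lib/firmware, and the remoteproc index may differ per board:

    #include <stdio.h>

    /* Start the Cortex-M coprocessor via the remoteproc sysfs interface. */
    int start_m4(void)
    {
        FILE *f = fopen("/sys/class/remoteproc/remoteproc0/state", "w");
        if (!f)
            return -1;
        fputs("start", f);      /* write "stop" here to halt it again */
        return fclose(f) == 0 ? 0 : -1;
    }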


Not everything needs the latest ISA. An end user doesn't care which ISA you have, as long as the product works.

It also benefits from established support in the whole ecosystem.



