From the Twitter thread, someone said it was fabbed by TSMC at 40nm.
I wonder, with the cutting edge moving to smaller and smaller processes (5nm, etc)... how much of the older/larger capacity can be reused? Meaning, are we about to see a ton of cheaper chips like this RP2040 that are using older/slower/obsolete fab capacity? Or is the same equipment able to be used across a family of process sizes?
In other words, once TSMC has X amount of capacity on 40nm, is that capacity always around and fixed to 40nm? If so, I imagine that creating new chips that don't need to be super fast will just become super cheap.
Microcontrollers being on larger processes like this is nothing new. As you mention, it's a way to keep using older fabs, but that's not the only reason.
40nm offers tons of performance for microcontrollers and the die area can still be reasonably small. You tend to want lots of variants of a microcontroller for different applications, so mask costs matter. And you tend not to want exotic, very low core voltages, and you do want high voltage tolerances and strong drivers on the IOs, which mean relatively big geometry, no matter what your minimum feature size is.
Also, sleep power consumption is important, so low leakage is nice, which is easier to get on large geometry.
40nm is a quite recent move for MCUs; the mainstream majority can still be anywhere between 90nm and 180nm.
Only a very few top-of-the-line MCUs have ventured to 40nm, like the ESP32, which had no other option because WiFi eats gates, or ST's MCUs with a 3D accelerator on board.
The problem for MCUs is that MCU-specific features start to cost disproportionately more in IP, design time, yield, and fab service cost below 90nm: things like embedded flash, per-model mask ROM, SRAM, non-CMOS cells, RF, and other analogue circuitry.
Below 28nm, conventional eFlash and many other things stop scaling, and only proprietary solutions are on the table at the moment.
BayBal, since you're an expert in Semi, I wonder if you can fill in the blanks?
Eben Upton (Raspberry Pi's CEO), on Twitter: "We get ~20k die per wafer"
A TSMC 40nm wafer costs about $2,300.
How much do you estimate the full chip manufacturing cost for this would be?
I wonder, because Eben Upton talked about "business model hacking" in regards to this chip, so they may want to do some interesting stuff in the mcu market.
> BayBal, since you're an expert in Semi, I wonder if you can fill in the blanks?
An expert? Ahahh, I never even had formal education in the field; I've just been trying to enter it, and to start studies in it, for a few years.
My only real experience with ICs was with a company developing a fancy synchronous rectifier chip that was capable of doing a few more tricks with the output waveform besides rectification, and even there I was mostly just hanging around doing complete trivialities like routing or minimal layout wiggling. I was more useful as a coffee porter.
> How much do you estimate the full chip manufacturing cost for this would be ?
I don't know how many wafers they buy. I don't know whether they ordered masks from TSMC, or somebody else. I don't know how short they want lead times to be. I don't know if they want to have any device inspection provided. I don't know if they have any agreements on repeated runs, or a flexible capacity purchase. I don't know if they order test, and packaging from TSMC.
From a man who was on Allwinner's original A10 chip team, I heard that the most bare bones 65nm run without mask cost, inspection, or packaging was possible at 1k wafers at $2400-$2500 in 2013-2014 by paying cash 1y in advance.
Today, I'm not even sure whether clients are still allowed to, or able to, order masks on the side for the latest processes.
The universal advice I heard is that you don't get into 300mm game without at least $10m, or better $20m if you have a brand new, untested design.
The manufacturing cost is tiny. The die is about 0.2cm on a side (~0.04 cm^2), so at 0.05 defects per cm^2 you expect to lose about 0.2% to defects. So die manufacturing cost is $2300/(20000 * 0.998), around 11.5 cents.
Small run packaging, along with distribution costs, etc, will dominate.
The real cost is amortization of R&D and mask costs.
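For a rough sense of how those pieces combine, here's a minimal sketch of the arithmetic. The wafer price, die count, die size, and defect density are the figures discussed above; the NRE and run-size numbers are made-up placeholders:

    // Back-of-the-envelope per-die cost: wafer price spread over good die,
    // plus one-time costs (masks, R&D) amortized over a production run.
    fn main() {
        let wafer_cost = 2300.0; // USD, TSMC 40nm (figure quoted above)
        let dies_per_wafer = 20_000.0; // Eben Upton's "~20k die per wafer"
        let die_area_cm2 = 0.2 * 0.2; // ~2mm x 2mm die
        let defect_density = 0.05; // defects per cm^2 (assumed above)

        // Poisson yield model: P(die has zero defects) = exp(-D * A).
        let yield_frac = (-defect_density * die_area_cm2).exp(); // ~0.998
        let die_cost = wafer_cost / (dies_per_wafer * yield_frac); // ~$0.115

        // Hypothetical one-time costs amortized over a hypothetical run size.
        let nre_usd = 2_000_000.0;
        let run_units = 10_000_000.0;
        let total = die_cost + nre_usd / run_units;

        println!("die: {:.1}c, with amortized NRE: {:.1}c",
                 die_cost * 100.0, total * 100.0);
    }

With placeholder numbers like these, the amortized one-time costs dwarf the silicon itself, which is the point being made above.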
Don't forget software costs. The Raspberry Pi Foundation is known for excellent software, and the Pico seems to be intended that way too.
And that's money well spent, I think. If I'm a hobbyist planning on using a grand total of three microcontroller boards, then paying $2 x 3 for "blue pill" boards and spending hours and hours debugging some obscure problem doesn't look like all that great a deal compared to paying $5 x 3 and making use of polished examples and a nice build system.
The fab will eventually be upgraded. I design semiconductors, and I have worked at two companies that had their own fabs (I didn't work in the fab itself).
Back in 1997 the new technology was 0.25 micron (250nm) but we still operated fabs at 0.35, 0.6, 0.8, and 1 micron. I believe these 4 lines operated at 6 different factories.
The next year we shut down the 1 micron fab. The building itself still had all the air filtering and cleanroom facilities. The old equipment (steppers, testers) was removed and new equipment for 0.18 micron was installed. I think this took about a year. At the end we still had 6 factories but the building that used to have all the old stuff was now the cutting edge fab.
Usually the equipment is rented, and returned to the owners.
I spent 4 years in Taiwan working for OSE, a semiconductor packaging company (cutting up silicon wafers, bonding wires onto them, and putting them into little plastic SD cards or RAM chips labelled "Made in Taiwan").
The pick & place machines and ovens weren't owned by OSE - they were on a 30-year lease from other companies in Japan, Germany, etc.
Software updates would void the warranty/lease conditions. One of my tasks was to write a program to measure yields for factory monitoring. This meant figuring out how to use SQL on Windows 2000, with no .NET framework or additional libraries.
My guess is that when the equipment is too old, the original owners either take them back for salvaging, or sell off the pieces as scrap. I went to many scrapyards in Kaohsiung though, and found many treasures like $10 bicycles or old consumer gadgets, but no factory-level machines.
Older processes available for cheap are not a new thing. They do indeed stay around until TSMC decides they're no longer worth running. It's not always cheaper, either, as a larger process uses more wafer area per chip.
I believe Eben Upton mentioned in a Twitter thread somewhere that they were getting thousands of chips off each wafer at the current node size? Only had to order like 20 or 40 wafers for the first production run, something like that.
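If those recollections are right, the arithmetic is roughly 20,000 die x 20 wafers ≈ 400,000 chips for the low figure, or ~800,000 chips for 40 wafers.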
The microelectronics cutting edge has been moving to smaller processes for the past 70 years. This is nothing new. If anything, the progress has slowed down.
Some fabs get closed down/rebuilt for a different process, some stay and produce parts that don't benefit from higher densities. Last I checked some ICs still get produced in 0.35 um processes. You don't need billions of transistors for an opamp.
I’d just like a documentary about fabs. It seems like an insane secret world of some of the most expensive places on earth. I’d love to know what they do with old ones etc.
This chip is very small, and it would not benefit in any way from a more advanced process node. Using a more advanced process would also mean additional drawbacks and cost increases.
I really hope they do. I just bought a little ESP32 board, and while I'm having fun with it, it's a bummer that it uses an oddball ISA (Xtensa). I really want to build for it in Rust, but the Xtensa LLVM and Rust forks are beta-ish quality, and there's no working WiFi driver yet.
With the RP2040 based on ARM, Rust should work on it after someone works up a peripheral support crate, which shouldn't be too hard if the RPi Foundation releases SVD files.
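As a sketch of what that could look like once a PAC exists: the usual route is to run the SVD through svd2rust and program against the generated API. Everything below is hypothetical; the crate name and register names are assumptions based on typical svd2rust output and the RP2040 datasheet's SIO block, and real code also needs reset and IO-mux configuration, omitted here:

    #![no_std]
    #![no_main]

    use cortex_m_rt::entry;
    use panic_halt as _;
    use rp2040_pac as pac; // hypothetical PAC generated from an RP2040 SVD

    #[entry]
    fn main() -> ! {
        let p = pac::Peripherals::take().unwrap();

        // Omitted: deasserting the IO_BANK0/PADS resets and setting the
        // pin's FUNCSEL to SIO, which a real program must do first.

        let led: u32 = 1 << 25; // GPIO 25 drives the Pico's onboard LED
        p.SIO.gpio_oe_set.write(|w| unsafe { w.bits(led) });  // output enable
        p.SIO.gpio_out_set.write(|w| unsafe { w.bits(led) }); // drive high

        loop {
            cortex_m::asm::delay(6_000_000); // crude busy-wait delay
            p.SIO.gpio_out_xor.write(|w| unsafe { w.bits(led) }); // toggle
        }
    }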
I have a bunch of these. I use them for prototyping before I put together a PCB design but they are fully functional if you want to build a product on them.
All eternity, really. Even eighties-era equipment is still being bought and resold. Quite a number of early-nineties fabs are still working, producing pretty much the same things for 30 years straight.
How does one of those die get made at all? It's such a minuscule scale, and it has such detail, that I cannot even comprehend how those components get "stamped" there. It's black magic.
You know how a microscope makes a small image really big? You can also do the same in reverse: take a big picture and project it really small. Use that idea to project your design onto your silicon, which has been coated in a series of photosensitive chemicals, then etch away the silicon until you have what you want.
Easier if you're not in a big hurry; then you can use dilute solutions. E.g., concentrated hydrofluoric acid will dissolve flesh, but you can buy it in 1% solution at the hardware store.
It's not really "Stamped". It's more of a photographic process. My basic understanding is that they apply a photographic reactive chemical, expose it. From there they apply another chemical process to form an oxidation layer, and then use an acid process to "etch" away material creating channels of positively charged particles. Then they can apply a mask, and throw tiny amounts of metal which will become the gate.
The process of photolithography [1] involves a lot of complex optics, and interestingly you find a lot of lens manufacturers involved in it, since there are many steps where light (lasers, in this case) gets passed through a set of very precise optics until it hits the wafer and exposes the correct pattern from a larger 'reference copy'.
You get information from your fab (libraries describing the sizes of things) so you can do layouts and simulation with very expensive software (https://www.synopsys.com; fun exercise: try to get a price on their website...).
Often you send your design to the fab to get some samples back to test.
Wikipedia has a decent summary of how the chips are made.
It is "just" photo-etching, with the big wrinkle that 40nm is way below the wavelength of light, so all sorts of clever tricks with diffraction have to be done to make it work.
I'm surprised that even the CPU cores are implemented just as a sea of standard cells. It is common to have them be macros containing a clearly separate datapath and control logic.
On the other hand, the fact that the prototyping was done on an FPGA makes this less surprising. Sea of cells is certainly easier to update, and less work than any form of manual datapath when migrating from FPGA to ASIC.
Yup. This is pretty clearly a case of getting it to run on an FPGA and using a commercial FPGA to ASIC service instead of spending a lot of time in synthesis and validation.
Mask costs can be cheaper this way, too, because some layers can be invariant between different parts.
"Commercial FPGA to ASIC service" usually refers to Altera/Intel's HardCopy or Xilinx's EasyPath. EasyPath is where Xilinx sells you defective FPGAs for way cheaper, but with just enough of the right parts of the fabric working that your design can still fit and work properly. HardCopy is where Altera sells you a small number of metal mask layers to implement your design on a much smaller/faster sea-of-gates style ASIC that mimics the non-reprogrammable parts of the target FPGA.
The Raspberry Pi Foundation did neither of these things. They tested their design on an FPGA, which is common, and then used standard cell libraries and IP blocks provided by the fab and their partners to get through the normal ASIC workflow.
It turns out it's just easier and faster to tell the tools to automatically place and route the core complex than to do a careful floorplan. Also given that the RP2040 is apparently "very overclockable" (the PIO bitbanged HDMI output bumped the clock to something like double the nominally guaranteed maximum), the merits of doing it this way seem sound.
I'm guessing it works on Chrome, but it also would sort of surprise me that a maker website wouldn't test against Firefox. Maybe I'm giving them too much credit.
It definitely could be made more discoverable, maybe with a bigger "handle" around it to show draggability. But I really love the idea of a combined educational tool and spam deterrent.
One of my regrets from when I worked at Intel was that we had die plans on 36x48 printouts in the lab - and they were beautiful, I wish I had taken them home - I would love to have some framed on my walls now.
My pair arrived last Friday. With bare-metal coding I have its LED blinking. Now I'm just thinking about how to bring up a very small SMP-capable UNIX clone on it. If the PDP-11 could do it, I see no reason why it can't be done on the Pico.
Fuzix exists for 6502 and Z80 processors and these are tiny!
ESP32-S2 has both WiFi and USB. It's essentially (the classic) ESP32 cut down to a single core and 320KB RAM, with Bluetooth removed; however, it does include USB.
There's also ESP32-S3 which is dual core, with more RAM, USB, and Bluetooth, but I'm not sure any boards are available yet.
All of that not to be confused with ESP32-C3 which is a single-core RISC-V without USB.
And yes, their naming is confusing.
edit: Adafruit has some boards with the ESP32-S2. If you get a random board from elsewhere, there's a good chance it will come with a USB-to-serial chip, which you don't want if you need to use native USB. So check.