Even the density figures are mostly lies. For example, TSMC's N5 is closer to 134 MTr/mm2 in the real world[1]. It can also depend on the chip design itself: some designs scale better than others, because not all transistors get the same density improvement from new nodes. Ultimately, simple metrics for predicting performance are breaking down at the scale these chips are currently being manufactured at. For example, TSMC's N3 increases transistor density for logic transistors, but SRAM sees no improvement over N5[2].
Is this because the devices you'd build with SRAM (i.e. a big uniform array of bit cells) are very regular? I feel like logic being not-as-uniform makes the talk about density scaling different, but I can't exactly articulate how.
It’s in large part due to the wire density… a 6T SRAM bit cell by itself could scale transistor density well, but the word lines and bit lines are the limiters, as they typically have to go up four metal layers. Also, each read/write port you add to the SRAM macro grows the area roughly quadratically rather than with the linear scaling of the bit cells alone, since each port adds both word lines and bit lines.
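A toy model of the port-scaling point above: if each extra port adds one word line to the cell's height and one bit-line pair to its width, the area grows super-linearly in the port count. The pitch numbers here are made up for illustration, not any real process's design rules.

```python
# Toy multi-port SRAM bit-cell area model (illustrative numbers only).
# Each extra port adds a word line (height) and a bit-line pair (width),
# so the cell area grows roughly quadratically with port count.

def bitcell_area(ports, base_h=1.0, base_w=1.0, wl_pitch=0.4, bl_pitch=0.5):
    height = base_h + (ports - 1) * wl_pitch   # one word line per port
    width = base_w + (ports - 1) * bl_pitch    # one bit-line pair per port
    return height * width

single = bitcell_area(1)
for p in (1, 2, 4):
    print(f"{p} port(s): {bitcell_area(p) / single:.1f}x single-port area")
```

With these (hypothetical) pitches, going from 1 port to 4 roughly 5.5x's the cell area, far worse than the 4x a linear model would predict.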
Am I reading this correctly that at 7nm Intel had >2x the transistor density of TSMC, and their 7nm node is still ahead of TSMC's 5nm node on that measure? That's not what I'd have expected given everything one hears about Intel. Have they fallen off mainly by not progressing beyond that node, or along other dimensions?
Intel 7nm on that chart is what has been rebranded Intel 4, i.e. the process that still isn't in high-volume production. Meanwhile TSMC is in high volume for N3 (the next iPhone chips are currently using up that capacity).
They have fallen off, but not by nearly as much as the node names indicate. TSMC has been "cheating" with their node designations for years (cf. 3nm is not remotely a (5/3)^2 ≈ 2.8x increase in density vs. 5nm; it's more like 30-50% denser), whereas Intel stuck with the notionally numeric scales for longer.
But Intel has given up now too, the "7nm" line in that chart is actually showing the process they're marketing as "Intel 4" (denoting that it sits roughly where a "4nm" process would vs. their competitors).
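The node-name arithmetic above is easy to sanity-check: if "Xnm" really measured a linear feature size, density would scale as the square of the linear shrink. A quick sketch (the 30-50% figure is the rough range quoted in this thread, not an official spec):

```python
# If node names measured a real linear dimension, density would scale
# with area, i.e. the square of the linear shrink factor.

def implied_density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

naive = implied_density_gain(5, 3)
print(f"Name implies {naive:.2f}x density going 5nm -> 3nm")
# Reported logic-density gain is more like 1.3-1.5x,
# and SRAM density barely moves at all.
```

So the name promises ~2.78x while reality delivers roughly half that for logic, and essentially nothing for SRAM.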
What does nm measure these days? I hear it's just a marketing term at this point, but "1.8" seems like a specific enough number that it must be measuring something.
Originally it measured a specific dimension on the transistor. Then transistors started shrinking non-uniformly, so the industry adjusted it to mean "how big the transistors would be, at equivalent density, if they had shrunk evenly." That's an easy thing to play with, though, and so you see more and more departures from reality.
It refers to the theoretical dimensions a planar transistor would need to achieve the same density. Nobody uses planar transistors these days; instead, 3D multi-channel transistors are used. The number supposedly refers to the gate length of that theoretical planar transistor. If your transistor can reuse the same gate to drive three different channels, the effective gate length would be one third, but once you open this can of worms, the numbers can be chosen arbitrarily to mean whatever you want.
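The "one gate driving three channels" accounting above reduces to a single division; the gate-length value below is a made-up example, not a real process figure:

```python
# Sketch of the "equivalent planar gate length" bookkeeping described
# above: if one gate drives n channels, it gets credited as n
# transistors' worth of drive, so the headline number is the real
# gate length divided by n. Purely illustrative.

def equivalent_planar_length(actual_gate_nm, channels):
    return actual_gate_nm / channels

# e.g. a hypothetical 18nm gate over 3 channels gets marketed as "6nm"
print(equivalent_planar_length(18, 3))
```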
The problem is that we've reached a point where not all features scale down evenly. So while some transistors might be 2 nm on a new process, others are largely unchanged from 3 or 5 nm.
How much you get out of a new process is now highly dependent on how you design your chip and what you want it to do.
This is why AMD is using multiple processes in a single CPU package these days. Some elements of their processors get little or no benefit from smaller nodes, so it's cheaper to make them on an older node.
I can't decide if that's an excellent analogy or a terrible one. On the one hand it's very true, on the other hand I suspect the fraction of HN readers who are familiar with tank armor is fairly small.
For those not familiar, RHA (Rolled Homogeneous Armor) is the type of armor used circa World War II. Armor has gotten better, per inch of thickness, with improved technology. Modern tanks (and modern anti-tank munitions) are often rated by the equivalent thickness of RHA protection (or penetration) they provide.
For example, Explosive Reactive Armor (ERA) can have a very high RHA rating if you use a traditional HEAT (High-Explosive Anti-Tank) round as your benchmark, but a tandem HEAT round can almost completely eliminate the per-thickness protection advantage that ERA offers. So a tandem HEAT round might have a lower RHA penetration rating than a traditional HEAT round, yet be more effective against an ERA with a very high RHA rating.
So, if you have a spiffy new tandem HEAT round, and want to fake^H^H^H^H present good numbers, you find the highest RHA rated ERA armor that it can penetrate and claim that as the RHA rating of your projectile. Even though it might not be able to penetrate that many inches of actual RHA, or possibly even modern composite armor with such an RHA rating.
Which is to say: something called RHA was the metric used for measuring tank armour, but it got gamed more and more, and is now unreliable on its face except as a rough gist?
Do you think that marketing will drop to pico or zero point whatever nano?
If it's meaningless, then why does Apple spend a shitload of money on it? While the nm number doesn't translate to a physical feature anymore, it is not meaningless. It is more an estimate of behavior: the transistors behave, in terms of spec, as 2nm transistors would.
TSMC's 3nm process is still more dense than their 5nm process, just not by as much as the name would suggest. The name is meaningless because it no longer gives you an accurate idea of the average feature size.
The customer doesn't care about density they care about performance. If you could somehow build a magical low density but performant chip that's what people would buy.
This is correct from a logical point of view but not a practical one. With a few major exceptions, transistors basically are one shape and do one thing. So if you can scale the process node you scale the density, and if you scale the density you scale the performance. It’s not like one transistor could give you 0.01 flops and another 0.02 flops; a transistor does one thing and it works or doesn’t (with a few caveats), and the second-order effects don’t really overcome that.
Oh, transistors are not one shape across nodes. Not anymore, and not for a long time. The planar version taught in beginning EE classes has evolved into various complicated 3D shapes.
Yeah, I know. I was putting caveats all over the place because to first order things are simple, but there's lots of complication. In reality density == performance, and transistors are mainly interchangeable. Yes, you could theoretically get a less dense transistor with a fantastic slew rate or some such property, but it's very unlikely that'll matter in a conversation about digital logic.
Intel never shipped any 7nm; they stayed on their 10nm, which, as you can see, is far behind the 5nm TSMC node that Apple used. So Apple benefiting from a far superior process is real.
Intel renamed their nodes to align better with the foundries: 10nm is now Intel 7, and 7nm is now Intel 4, which would look broadly more aligned in that table if it were updated.
As reference: TSMC 3nm is ~290 million transistors/mm2 (MTr/mm2).
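Putting this thread's figures side by side (the ~134 MTr/mm2 real-world N5 number from above and the ~290 MTr/mm2 N3 figure here; the die size is a made-up example, not any specific chip):

```python
# Back-of-envelope transistor counts from the densities quoted in this
# thread. MTr/mm^2 = millions of transistors per square millimetre.

def transistor_count(density_mtr_per_mm2, die_area_mm2):
    return density_mtr_per_mm2 * die_area_mm2 * 1e6

die = 100  # mm^2, hypothetical die size
print(f"N5 (~134 MTr/mm2): {transistor_count(134, die):.2e} transistors")
print(f"N3 (~290 MTr/mm2): {transistor_count(290, die):.2e} transistors")
```

Note the N3 figure is a peak/marketing density; per the comments above, real designs (especially SRAM-heavy ones) land well below it.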
https://news.ycombinator.com/item?id=27063034
https://www.techradar.com/news/ibm-unveils-worlds-first-2nm-...