Because "nm" doesn't mean nanometer anymore. Not in the context of CPUs anyway. Some time back, around the 34nm era, CPU components stopped getting materialy smaller.
Transistor count plateaued. Moore's law died.
To avoid upsetting and confusing consumers with this new reality, chip makers agreed to stop delineating their chips by the size of their components, and to instead group them into generations by the time that they were made.
Helpfully, in another move to avoid confusion, the chip makers devised a new naming convention, where each new generation uses "nm" naming as if Moore's law continued.
Say, for example, in 2004 you had chips with 34nm NAND, and your next-gen chips in 2006 are 32nm. All you do is calculate what the smallest nm would have been if chip density had doubled, and you use that size for marketing this generation. So you advertise 17nm instead of 32nm.
Using this new naming scheme also makes it super easy to get to 1.4nm and beyond. In fact, because it's decoupled from anything physical, you can even get to sub-Planck scale, which would be impossible under the old scheme.
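For fun, here's the sub-Planck arithmetic sketched in Python (the Planck length in nm and the halve-every-generation rule are my assumptions, matching the 34 → 17 step above):

```python
import math

PLANCK_LENGTH_NM = 1.616e-26  # Planck length, ~1.616e-35 m, expressed in nm
start_nm = 34.0               # starting "node" from the example above

# Each generation halves the advertised node name (34 -> 17 -> 8.5 -> ...).
# How many generations until the marketing number drops below the Planck length?
generations = math.ceil(math.log2(start_nm / PLANCK_LENGTH_NM))
years = generations * 2       # assuming one generation every two years

print(generations, years)     # 91 generations, ~182 years
```

So under the scheme, marketing reaches sub-Planck nodes in under two centuries.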
Edit: Some comments mention that transistor count and performance are still increasing. While that is technically true, I did the sums: the Intel P4 3.4 GHz came out in 2004, and if Moore's law had continued, we would have 3482 GHz, or 3.48 TERAHERTZ, by now.
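Those sums, sketched out (taking the comment's premise that "Moore's law" means clock speed doubling every two years, which is not what the law actually says):

```python
base_ghz = 3.4                   # Pentium 4, released 2004
start, now = 2004, 2024
doublings = (now - start) // 2   # one doubling every two years

projected_ghz = base_ghz * 2 ** doublings
print(f"{projected_ghz:.1f} GHz = {projected_ghz / 1000:.2f} THz")
# roughly 3481.6 GHz, i.e. ~3.48 THz
```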
> Some time back, around the 34nm era, CPU components stopped getting materialy smaller.
> Transistor count plateaued.
No. Transistor count has continued to increase. The "nm" numbers still correlate with overall transistor density. The change is that transistor density is no longer a function purely of the narrowest line width that the lithography can produce. Transistors have been changing shape and aren't just optical shrinks of the previous node.
The fact that frequencies stopped going up was the breakdown of Dennard Scaling[1], not a breakdown of Moore's Law[2]. We're still finding ways to pack transistors closer together and make them use less power even if frequency scaling has stalled.
You appear to be conflating a few different issues here.
1) Transistor density has continued to increase. The original naming convention was created when we only used planar transistors. That is no longer the case. More modern processes build three-dimensional structures that condense the footprint of packs of transistors. Moore's law didn't die; it just slowed.
2) Clock speed is not correlated with transistor size. The fundamentals of physics block increases in clock speed: light travels only ~30cm in 1 billionth of a second (one cycle at 1GHz), and electrical signals in a conductor move at roughly 50%-99% of that, depending on the conductor. What's the point of a 1THz clock when you will just waste most of those cycles propagating signals across the chip or waiting on data moving to/from memory? Increasing clock speed also increases the cost of use because it requires more power, so at some point a trade-off decision must be made.
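A rough feel for the propagation argument (the 50%-of-light-speed figure comes from the range quoted above; the ~2cm die size is just illustrative):

```python
C_CM_PER_NS = 29.98  # speed of light: ~30 cm per nanosecond

def cm_per_cycle(freq_ghz: float, signal_speed_fraction: float) -> float:
    """Distance a signal travels in one clock period, in cm."""
    period_ns = 1.0 / freq_ghz
    return C_CM_PER_NS * signal_speed_fraction * period_ns

# At 1 GHz, a signal at half the speed of light covers ~15 cm per cycle --
# plenty for a ~2 cm die. At 1 THz it covers ~0.015 cm (0.15 mm), so a
# signal couldn't even cross the die within a single cycle.
print(cm_per_cycle(1.0, 0.5))     # ~14.99 cm
print(cm_per_cycle(1000.0, 0.5))  # ~0.015 cm
```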
You're incorrect: transistor count has not plateaued. [1] Furthermore, Moore's law is about transistor count, NOT clock speed. The fact that we are not at X.XX THz has nothing to do with Moore's law.
In recent years the node names don't correspond to any physical dimensions of the transistors anymore. But since density improvements are still being made, they just continued the naming scheme.
Because the naming is based on the characteristics as measured against a "hypothetical" single-layer planar CMOS process at that feature size. This isn't new; the nm scale stopped corresponding to physical feature size a long time ago.
"Recent technology nodes such as 22 nm, 16 nm, 14 nm, and 10 nm refer purely to a specific generation of chips made in a particular technology. It does not correspond to any gate length or half pitch. Nevertheless, the name convention has stuck and it's what the leading foundries call their nodes"
..."At the 45 nm process, Intel reached a gate length of 25 nm on a traditional planar transistor. At that node the gate length scaling effectively stalled; any further scaling to the gate length would produce less desirable results. Following the 32 nm process node, while other aspects of the transistor shrunk, the gate length was actually increased"
That's some pretty bullshit quote-mining there. You stopped right before the important part:
"With the introduction of FinFET by Intel in their 22 nm process, the transistor density continued to increase all while the gate length remained more or less a constant."
I'll repeat it for you, since you seem to keep missing it: transistor density continued to increase.
This isn't marketing fraud, because you aren't being sold transistors like you buy lumber at Home Depot.
Instead, you buy working chips with certain properties whose process has a name: "10 nm" or "7 nm". Intel et al. have rationalizations for why certain process nodes are named in certain ways; that's enough.
>However, even the dimensions for finished lumber of a given nominal size have changed over time. In 1910, a typical finished 1-inch (25 mm) board was 13⁄16 in (21 mm). In 1928, that was reduced by 4%, and yet again by 4% in 1956. In 1961, at a meeting in Scottsdale, Arizona, the Committee on Grade Simplification and Standardization agreed to what is now the current U.S. standard: in part, the dressed size of a 1-inch (nominal) board was fixed at 3⁄4 inch; while the dressed size of 2 inch (nominal) lumber was reduced from 1 5⁄8 inch to the current 1 1⁄2 inch.[11]
Despite the change from unfinished rough cut to more dimensionally stable, dried and finished lumber, the sizes are at least standardized by NIST. Still a funny observation!
So the theory is that Intel and others do this for marketing purposes. In other words, they predict that they will sell more parts if they name them this way instead of quoting the physical dimensions. There is no other reason to do this than for marketing purposes.
That must mean that this marketing works to some degree. Therefore, it cannot be common knowledge among everyone who buys PC parts. Or it might be somewhat known but still affect their shopping choices. If it were truly common knowledge, there would be no incentive to keep naming them this way?
>I did the sums, the Intel P4 3.4 GHz came out 2004, if Moore's law continued, we would have 3482 GHz or 3.48 TERAHERTZ by now.
Comparing raw CPU clock speed seems like a bad metric. A new i5 clocked at 3.1 GHz will absolutely wipe the floor with a 3.4 GHz Pentium 4, even on single-threaded workloads.
Another name for this in a properly regulated industry would be fraud. It's like if your 4 cylinder car engine was still called and labeled a V8 because "it has the power of a V8."