Intel Processor Names, Numbers and Generation List (intel.com)
44 points by todsacerdoti on Feb 4, 2021 | 52 comments



To anyone complaining: just look at USB to see how bad it can get. It's even worse than MS's Xbox naming.

We had USB 3.0 at 5 Gbits.

Then it was renamed to:

- USB 3.1 Gen 1 - 5 Gbits

- USB 3.1 Gen 2 - 10 Gbits

Then it was renamed again (RENAMED AGAIN) to:

- USB 3.2 Gen 1 - 5 Gbits

- USB 3.2 Gen 2 - 10 Gbits

- USB 3.2 Gen 2x2 - 20 Gbits

To make it even more confusing, they decided to create special marketing labels:

SuperSpeed USB

SuperSpeed USB 10 Gbps

SuperSpeed USB 20 Gbps

Also there is Thunderbolt somewhere in there, and USB 4 is coming. Maybe they will call it SuperSpeed USB 2 Gen 2 ¯\_(ツ)_/¯
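The whole mess above fits in a lookup table, which is kind of the point: there is no rule, only memorization. A quick sketch (label strings are my own normalization of the names listed above, not an official list):

```python
# Decoder ring for the USB 3.x naming mess. Speeds in Gbit/s.
# Several distinct labels alias the same speed, and the same speed
# got renamed across spec revisions.
USB_NAMES = {
    "USB 3.0": 5,
    "USB 3.1 Gen 1": 5,
    "USB 3.1 Gen 2": 10,
    "USB 3.2 Gen 1": 5,
    "USB 3.2 Gen 2": 10,
    "USB 3.2 Gen 2x2": 20,
    "SuperSpeed USB": 5,
    "SuperSpeed USB 10Gbps": 10,
    "SuperSpeed USB 20Gbps": 20,
}

def speed_of(label: str) -> int:
    """Return the link speed in Gbit/s for a given marketing/spec label."""
    return USB_NAMES[label]
```

Three different strings for 5 Gbit/s, and counting.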


It seems that naming conventions are generally an exercise in afterthought, coupled with a marketing campaign that is supposedly expected to adhere to the underlying technical specifications. There are ample examples where they seem deliberately tacked on, named in such a way as to cause maximum confusion, and they get progressively worse.

USB-IF elevated its haphazard standards, naming schemes and version numbers beyond mere confusion, annoyance or nuisance; it even managed to blindside the Pi 4 design team for the first revision. It has also given the unscrupulous an opportunity to exploit the marketplace, where power tolerances make it extremely important to pick the right cables and adapters.

They are not alone ─ the SD Card Association has also sown doubt in the minds of consumers by splintering its naming convention. HDMI is playing a similar game, albeit one slightly easier to follow. The Wi-Fi Alliance, however, is making some effort to help a layperson decipher a device's basic capabilities via its certification program. Then there is 3GPP..

https://www.sdcard.org/developers/sd-standard-overview/

https://www.wi-fi.org/discover-wi-fi

https://hdmi.org/spec/hdmi2_1

https://www.usb.org/developers


It should really just have been 3.0 for 5 Gbit/s, 3.1 for 10 Gbit/s and 3.2 for 20 Gbit/s - nothing hard to understand, same deal as USB 1 and 2 beforehand.


That's how I still refer to them in my head.

I'm still baffled why they didn't do that.


> USB 4

You mean USB4.

Also it will copy and paste the Gen names, except this time the USB4 versions won't be the same as the USB 3.x versions.


Thank you for your correction! I will follow the USB4 naming saga with utmost attention!


The USB-IF standardized names (low speed, full speed, high speed, super speed) aren't that bad; the biggest problem is that they are not used nearly enough or consistently. Instead people use those non-standard and ambiguous USB X.Y Gen Z labels, which is something USB-IF, IIRC, explicitly recommends against.


The trouble is, the USB group should have got the memo 15 years ago when no one used whatever USB 2 was supposed to be called (high speed?), and instead just called it USB 2.

They should just formalise the three steps of USB 3 as 3/3.1/3.2, which is pretty clear.

Of course the USB group gave us USB C, so we’re probably doomed.


Oh!! So there are even more names? Which is "full speed" and which is "super speed"?


Don't forget about "superspeed+". Which I thought they got rid of but Wikipedia says it still shows up some of the time, and it doesn't even mean a specific speed any more!


Straight from the horse's mouth:

> NOTE: SuperSpeed Plus, Enhanced SuperSpeed and SuperSpeed+ are defined in the USB specifications however these terms are not intended to be used in product names, messaging, packaging or any other consumer-facing content

https://www.usb.org/sites/default/files/usb_3_2_language_pro...

The same text also appears in the USB 3.1 guidelines, so that hasn't changed.

Also regarding SuperSpeed speed ratings, the recommendation is pretty unambiguous:

> To avoid consumer confusion, USB-IF’s recommended nomenclature for consumers is “SuperSpeed USB” for 5Gbps products, “SuperSpeed USB 10Gbps” for 10Gbps products and “SuperSpeed USB 20Gbps” for 20Gbps products


Why were those terms created and disseminated at all? "SuperSpeed Plus", "Enhanced SuperSpeed", and "SuperSpeed+" have no reason to exist outside of a marketing context.


Archive.org has older ones if you're curious about those (...MQ etc.): https://web.archive.org/web/20160324083053/https://www.intel...


Not the most useful list when it doesn't include the microarchitecture codename, e.g. Skylake, Coffee Lake, ..., which is usually mentioned instead of the generation's "number".


Intel seems to have some aversion to using the codename once products are released, only using them when referring to pre-release. Intel Ark, for example, refers to them as “Products formerly [codename]”.


At least the desktop versions are clear enough, although in recent years there have been experiments (non-HT versions) and shenanigans (the 8086K, a "100% marketing non-product").

If one really wants to point out obscure (arguably dishonest) denominations, AMD, at least in the past, did a lot of rebranding in the GPU department, and for at least a few years it was impossible to know from the name which generation a GPU belonged to, since they mixed generations across the classes (R3/R5/R7).


The biggest scam that Intel, AMD, and nVidia are all guilty of is naming the desktop parts the same as laptop parts. It isn’t the same thing at all!

If you’ve only used thin and light laptops for the last ten or so years, you’ll find that almost any desktop will run circles around it.

That is the real travesty.


Which Intel mobile CPUs have the same name as desktop ones? For Intel, the product segment is usually identifiable by the suffix.


Well, some laptops come with an option of an 'm3', 'i5' or 'i7'. On the internet you'll see lots of people saying that the i5 will crush the m3; however, all three of those processors are identical, with only a 200 MHz clock increase between them (a 15-20% performance increase in practice).

I guess the OP is referring to the confusion between the 'laptop' m3 and the i7 that you can get in a desktop or laptop.


Yeah, rebranding is maybe the worst. A recent example is the new Ryzen 5xxx series APUs, where a 5x00U can be either Zen 3 or rebranded Zen 2.


Naming is hard, it's even harder when marketing is involved.

Does Intel seriously need all these different SKUs? Wouldn't there be an argument for simplifying their product lineup?


I would have said you're wrong, even as late as 2015.

I don’t know much about regulations. What stopped Intel from dropping prices the moment Ryzen was available in the market? It isn’t like Intel hardware wasn’t good enough. It was just massively overpriced. Four-ish years in, we are starting to see sales in the latest generation of Intel processors.

What prevented them from doing this sooner?

I would have trusted Intel to know what they were doing in 2015 with market segmentation. Not anymore.

https://www.joelonsoftware.com/2004/12/15/camels-and-rubber-...


The more interesting thing would be how many actually different rectangles of silicon Intel produces. I'm going to guess annually it's not very many (perhaps more if you include steppings that make incremental revisions and bug fixes to the same silicon). The different models are about binning and disabling features for market segmentation.


> The different models are about binning and disabling features for market segmentation

Other than from a purely sales-driven perspective, wouldn't it make more sense to simply let customers, OEMs, motherboard manufacturers and so on control the features from the BIOS/EFI?


Not from the point of view of Intel's control and profitability!

Fun fact: It's known that some very large customers like Google ask Intel to implement particular special features. These features are present in the chips us regular folk can buy, but fused off (and shrouded in some secrecy of course).


Apple believes in simplicity, but where did it lead them? iPhone 4, 5, 6, 7, 8 was good. But now we have the iPhone 12 Pro Max Pink.


And I have the 'SE 2', which they just refer to as 'SE' in their marketing despite the fact they already had a different product with the same name.


And it is also referred to as “SE 2020”, both in their writing and in general.

They probably could have gone with 9...

An extension of the 6,7,8 size/form factor but on newer chipsets

And they skipped 9 anyway


They should just go with model years


Whenever I think about upgrading my system these days, I start looking at processors and graphics cards and find it just so hard to understand the differences I end up giving up. (Plus I've got no spare cash, but, you know...)


This looks horribly complicated compared to the naming schemes deemed sufficient by AMD, NVIDIA, Apple, etc.


AMD is no saint here. Knowing whether a CPU is Zen 1, 1+, 2, or 3 from its model number basically requires memorization.

Zen: 1200, 2300U, 2400G, 3200U

Zen+: 1200AF, 2600E, 3200G, 3300U

Zen2: 3100, 4300G, 4300U, 5500U

Zen3: 5600X, 5400U
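That "requires memorization" claim is literal: the only way to recover the microarchitecture from the model number is a lookup table. A sketch built from the list above (entries as given in the comment):

```python
# Ryzen model number -> Zen generation. No formula exists; this table
# has to be memorized. Data taken from the list in the comment above.
ZEN_GEN = {
    "1200": "Zen",    "2300U": "Zen",  "2400G": "Zen",  "3200U": "Zen",
    "1200AF": "Zen+", "2600E": "Zen+", "3200G": "Zen+", "3300U": "Zen+",
    "3100": "Zen 2",  "4300G": "Zen 2", "4300U": "Zen 2", "5500U": "Zen 2",
    "5600X": "Zen 3", "5400U": "Zen 3",
}

# Note the traps: 3200U is Zen but 3200G is Zen+, and 5500U is Zen 2
# while 5600X is Zen 3. The leading digit tells you almost nothing.
```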

Nvidia neither. The 20-series had the 2060, 2060 Super, 2060 Ti, and 2060 KO. Which of those is “fastest”? I can’t remember.

Apple neither. The iPad’s naming succession went something like: iPad 2, iPad Air, iPad Air 2, iPad (5th gen), iPad (6th gen) + iPad Air (3rd gen).

Microsoft neither. Xbox, Xbox 360, Xbox One (!?), Xbox One S, Xbox One X, Xbox Series S, Xbox Series X. (Seriously MS, wtf.)

Really the only sanity I see is Sony PlayStation: PS, PS2, PS3, PS4, PS5. They even re-released the PS as PS one after the PS2 was released.


I was about to reply that you needed to mention the Xbox, but you edited just in time!

Still, at any moment in time, it can be hard to decide on what product you want.

With the PS2 you got PS1 backwards compatibility. With the PS3, you got PS1 and PS2... if you got the original 60GB model. But then the 40/80GB models were released without it.

With the PS4, they updated with the PS4 Pro. And now there's the PS5 and the Digital Edition. This isn't as bad, as I believe the specs are the same and the disc drive is the only difference.

But now the Xbox Series S and X actually have a big difference in performance in addition to the disc drive! As someone who wants top performance but doesn't want the drive... ugh. Maybe the drives should be accessories... but consoles are too much like PCs already!


Indeed, Sony can get a bit muddied within generations, especially with the PS3, as they had to cut costs a lot from the 1st gen. At least “PS4” vs. “PS4 Pro” makes clear which is “better”, vs. MS’s S vs. X (one is a sedan and the other an SUV, right??) or Intel’s smattering of suffixes.


Up until a few years ago Intel's naming scheme was actually really simple.

iX-YZZZ => consumer SKU. X = tier. Y = generation. ZZZ = higher is better. Ends with M? Mobile.

EX-ZZZZ vY => server/workstation SKU. X = 3: entry-level, same socket as consumer. X = 5: mainstream SKUs for dual-socket servers. X = 7: high-end SKUs for quad/octo-processor systems. ZZZZ = "higher is better". Y = generation.
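The old consumer scheme really was regular enough to machine-parse. A rough sketch of a parser for the iX-YZZZ pattern described above (single-digit generations only; the field names are mine):

```python
import re

# Old Intel consumer scheme: iX-YZZZ with an optional "M" mobile suffix,
# e.g. "i7-4770" or "i5-2540M". Only covers single-digit generations;
# later 4-5 digit SKUs like "i7-10700" deliberately fail to match,
# which is exactly where the scheme broke down.
CONSUMER = re.compile(r"^i([357])-(\d)(\d{3})(M?)$")

def parse(name: str):
    m = CONSUMER.match(name)
    if not m:
        return None
    tier, gen, sku, mobile = m.groups()
    return {
        "tier": int(tier),            # 3 / 5 / 7
        "generation": int(gen),
        "sku": int(sku),              # "higher is better" within a generation
        "mobile": mobile == "M",
    }
```

For example, `parse("i5-2540M")` yields tier 5, generation 2, SKU 540, mobile; `parse("i7-10700")` returns None because five-digit model numbers no longer fit the pattern.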


Not sure when “a few years ago” was, but in 2013 (“4th gen”) Intel had far more than just “M” as a suffix for their consumer CPUs. They had at least EC, EQ, HQ, K, M, MQ, R, S, T, TE, U, and Y. Various generations between now and then have a variety of different suffixes, but never just “M” so far as I know.


It used to be M -> 45W, U -> 15W, Y -> <15W for mobile CPUs, but Y CPUs were awful and only in the most budget devices. At some point M also sprouted H and HQ, of which M and HQ eventually got dropped.

K was for overclocked/unlocked CPUs, T was for TDP limited CPUs. For "average consumers", both were pretty irrelevant.

I've never seen EC, EQ, MQ, R, S or TE CPUs.

Still, 4th-9th gen were pretty simple: you needed to know K (unlocked) and F (no integrated graphics) for desktop CPUs if you were buying the CPU directly; if it came in an OEM-built PC, it didn't make a difference.

For laptops you also needed two: M (later H) for faster with less battery life, U for slower with more battery life. If you were going for a netbook, maybe Y as well, but there certainly wasn't any price range that contained both Y and M/H CPUs.


Another commenter above [1] linked an archive of an older nomenclature (though not from the same time I was checking—I was just looking at Intel Ark) which details the more exotic suffixes.

[1] https://news.ycombinator.com/item?id=26024579


This is pretty similar to AMD though?

   Intel Core  i7     10  65  G7
   AMD Ryzen   9       5  900 X


In my opinion, the only alternative to an encoding naming scheme for processors is a really verbose one.

The example, Intel Core i7-1065G7, would become "High Performance 10th Generation 4-core 1.5-3.9 GHz with Iris Plus Graphics". That's a mouthful and only slightly better if you're trying to help someone know what to look for. Maybe add in 8-thread and/or SMT.

Then instead of H you might get Mobile Performance, and U would be Efficient Mobile.

The H and U coding is useful on both Ryzen and Core lines if people ask for help finding a workstation/gaming laptop or a laptop that prioritizes battery life.

The HQ was useful for a while when I wanted real quad-core laptops when many varieties of the Core were dual-core.

The central numbering system is quite messy, especially given that 4-, 6-, 8- and 10-core chips all have 10 and 11 prefixes. You cannot even guess core count, base clock or turbo clock. Is a 1065 better than a 10750? Why is one four digits and the other five?

EDIT: Thought I'd continue my thoughts here.

I started upgrading/building computers when I had a CyrixInstead 133 MHz and replaced it with a Pentium MMX 200 MHz. New builds with Pentium II and III were also straightforward. Then my next chip was... an Athlon XP 1700+. The MHz wars were supposed to be over at that point, and you just had a brand like "Athlon XP" and then some kind of performance number. That was it.

Now, CPUs are being made in more variety than ever. Desktop (office, gaming, workstation, server, HPC, etc.), notebook (office, gaming, workstation, convertible, ultra-portable, etc.). It's in Intel and AMD's interest to make just the right chip for each scenario as CPUs are much less "one size fits all" than ever. The i3/i5/i7/i9 designation is less useful now that it is so separated from underlying performance (for example, at one point i3 was dual core, i5 was quad, and i7 was quad with SMT enabled) and as underlying architectures can shift performance (sometimes an i5 is better than a previous generation i7, but other times it is not). Perhaps chips really need names that better reflect their use case, and then model numbers within each product line.

Core Ultrabook 520. Ryzen Gaming Notebook 460. Pentium Office Desktop 250.

At least then you'd know to look for a specific line of CPUs and then could likely compare the numbers which could be better adjusted to relative performance (regardless of architecture and clock speed.)


What a mess. The new Xeon naming is even harder to decipher.


I used to think my i5-780M CPU was a 7th generation CPU, and thus should be very fast


A lot of people go even further and think that because it's labeled "i7" it must be even faster!

At one of my previous employers they developed and sold (among a lot of other things) industrial PC hardware. One of the top-running models (the one with the highest performance) had some mainstream Celeron dual-core in it at the time. One of the key customers (a German automotive brand) decided, without any profiling or benchmarking whatsoever, that the performance was not good enough for their application and an "i7" was needed. Sure enough, a system with an i7 (first generation, lowest-tier mobile class) was developed, and the customer happily bought hardware with less than half the performance for more than double the price. The customer didn't even have a problem with the moving mechanical parts (two fans) that were introduced in the process... (Usually that's a KO criterion in industrial applications.)


Yes, when I realized how slow that CPU was, I decided that I needed a fast "i7", so I bought a laptop with an i7-4600U as a replacement.


Completely obscure system with a ton of meaningless numbers and letters.


Which of them did you find has no meaning?


The nomenclature of CPU models can be very opaque; that's why I refer to https://cpu.userbenchmark.com/ to compare different CPUs. There can be some surprises performance-wise.


Don't refer to userbenchmark. Their ratings are so blatantly manipulated that they've gotten themselves banned from virtually every forum or subreddit for hardware advice. You know something's up when both r/Intel and r/AMD ban a "benchmarking" site.


What's an alternative for when I want ballpark performance numbers, where I'm not making a purchasing decision and I just want to know, "compared to what I have, is this other thing a {little,lot} {slower,faster} or about the same?" Where "about the same" is within ~20% and the little/lot line is around 100% (double).
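Those thresholds are concrete enough to encode. A sketch of the classification described in the question (the function and category names are mine):

```python
# Classify a benchmark score relative to a baseline, using the
# thresholds from the question: within ~20% is "about the same",
# beyond ~2x either way is "a lot".
def ballpark(mine: float, other: float) -> str:
    ratio = other / mine
    if ratio >= 2.0:
        return "a lot faster"
    if ratio > 1.2:
        return "a little faster"
    if ratio > 1 / 1.2:
        return "about the same"
    if ratio > 0.5:
        return "a little slower"
    return "a lot slower"
```

So with a score of 100 for your current CPU, a candidate scoring 130 is "a little faster" and one scoring 250 is "a lot faster". The fuzzy part is getting scores that are actually comparable, which is what the benchmark-site question is really about.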


Here is an alternative. None of them are perfect, but for spur-of-the-moment comparisons a consistent "general" CPU benchmark should work well enough.

If it matters, i.e. purchase/usage decision, you really should dig into specific benchmarks that reflect your use (e.g. the actual application or game).

https://www.cpubenchmark.net/laptop.html


AnandTech Bench is a decent resource. https://www.anandtech.com/bench/


That looks great for a detailed comparison, but it's quite a bit too much information for when I just want a quick "I haven't looked up that CPU before, what ballpark performance is it?"



